From what you have described, there will be hundreds of concurrent connections. Each connection increments a value, and when the value reaches 5 it goes back to 1.
With a txt file this can be a problem, because you have to add safeguards yourself to prevent inconsistencies.
The logical approach is to lock the file against editing or access while another user has it open:
$f = fopen('fit.txt', 'c+');     // 'c+' opens for read/write and creates the file if missing
if (flock($f, LOCK_EX)) {        // exclusive lock; waits until the previous user releases it
    $n = (int) fread($f, 4);     // read the current value
    $n = ($n >= 5) ? 1 : $n + 1; // wrap back to 1 after 5
    ftruncate($f, 0);            // clear the old value
    rewind($f);                  // move the pointer back to the start
    fwrite($f, (string) $n);     // write the new value
    fflush($f);                  // make sure it reaches the file before unlocking
    flock($f, LOCK_UN);          // release the lock
}
fclose($f);
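Note the design choice here: with a plain LOCK_EX each request blocks and waits its turn for the lock. If you pass LOCK_EX | LOCK_NB instead, flock() returns false immediately when the file is already locked, and the request skips the update rather than waiting.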
Using a database, this operation is safer; however, it obviously comes at a much higher processing cost.
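For comparison, here is a minimal sketch of the same counter in a database. It assumes a MySQL table counter with a single row (id = 1) and an integer column n; the table, column, and connection credentials are placeholders, not anything from your setup. A single atomic UPDATE removes the read-modify-write race entirely, because the engine serializes concurrent updates itself:

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// One atomic statement: increment and wrap in a single round trip.
// MySQL's LAST_INSERT_ID(expr) stores the new value for this connection,
// so we can read it back without a second, race-prone SELECT.
$pdo->exec('UPDATE counter SET n = LAST_INSERT_ID(IF(n >= 5, 1, n + 1)) WHERE id = 1');
$n = (int) $pdo->query('SELECT LAST_INSERT_ID()')->fetchColumn();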
Before thinking about performance, think about consistency. Once the routine is safe and you are sure there is no flaw, you move on to the "next stage", which is optimization.
In the example above with flock(), the process is "super fast", but a failure can still occur: something unexpected, where the lock takes too long to be released to the next user.
Now imagine a scenario where 200 users connect at exactly the same moment.
The first one is the "lucky one": it reads and writes the number and releases the lock to the second, the third, the fourth. But will the last one in the queue still read and write the value correctly, or will it fail with an error for waiting too long?
Consider a system with hundreds of simultaneous accesses: say it receives 150 connections in one second, 200 more two seconds later, and another 100 two seconds after that. In just 5 seconds, that is 450 requests, with hundreds stacked in the queue waiting to read and write this txt. The system may abort execution somewhere around request 200 because of the long wait.
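One way to keep that failure mode under your control, rather than letting PHP's max_execution_time kill the script somewhere in the queue, is to bound the wait yourself with a non-blocking lock and a retry loop. This is only a sketch; the 2-second deadline and 10 ms retry interval are arbitrary choices for illustration:

$f = fopen('fit.txt', 'c+');
$deadline = microtime(true) + 2.0; // give up after 2 seconds (arbitrary)
$locked = false;

while (microtime(true) < $deadline) {
    if (flock($f, LOCK_EX | LOCK_NB)) { // try without blocking
        $locked = true;
        break;
    }
    usleep(10000); // wait 10 ms before trying again
}

if ($locked) {
    $n = (int) fread($f, 4);
    $n = ($n >= 5) ? 1 : $n + 1;
    ftruncate($f, 0);
    rewind($f);
    fwrite($f, (string) $n);
    fflush($f);
    flock($f, LOCK_UN);
} else {
    // Lock not acquired in time: answer "busy, try again"
    // instead of letting the request die with a generic timeout.
}
fclose($f);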
In that case, it may be worth rethinking the business logic.
If you do not have such a large number of simultaneous connections, then a simple flock(), as in the example, can solve the problem and may still be a more viable option than a database in terms of performance.