What is faster to read and edit, a database or a .txt file?

8

The idea is to store and rewrite a single INT digit, deleting one and writing another, with the condition that when it reaches 5 it goes back to 1, repeating this loop every time someone accesses the page.

Whenever someone accesses the page, the value would be incremented by 1 and saved, except when it is already 5, in which case it would return to 1.

Anyway: PHP + MySQL or PHP + .txt, which is faster for this purpose?

Considering that the system may receive hundreds of concurrent requests, which is better?

asked by anonymous 28.11.2016 / 04:38

2 answers

8

A flat text file will almost always be faster, since it hardly has to do anything. A database performs a monumental amount of work to ensure data integrity and to do so in an easy, standardized way.

That does not mean it is the best option. If you access the file concurrently you have to know what you are doing; otherwise you will run into problems even with a pattern as simple as the one described.

What actually keeps bugs out of a system is mastering the whole process of software development, deployment and infrastructure maintenance. It is not just a matter of knowing whether MySQL or .txt is better. But if you do not know which is best, go with the safer option, which is the database. At the very least it requires less care on your part.

An intermediate alternative would be SQLite, which has the characteristics of a database with some of the access conveniences of a file, including better performance.
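As a rough sketch of that intermediate option (assuming PHP's built-in SQLite3 extension; the `counter.db` file name and the one-row `counter` table are hypothetical), the whole read-increment-wrap cycle can be done in one short transaction, with SQLite serializing concurrent writers for you:

```php
<?php
// Hypothetical sketch: a single-row counter table in a SQLite file.
$db = new SQLite3('counter.db');
$db->exec('CREATE TABLE IF NOT EXISTS counter (n INTEGER NOT NULL)');
// Seed the row once, only if the table is still empty.
$db->exec('INSERT INTO counter (n) SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM counter)');

// BEGIN IMMEDIATE takes the write lock up front, so concurrent
// requests queue inside SQLite instead of racing in PHP code.
$db->exec('BEGIN IMMEDIATE');
$db->exec('UPDATE counter SET n = CASE WHEN n >= 5 THEN 1 ELSE n + 1 END');
$n = $db->querySingle('SELECT n FROM counter');
$db->exec('COMMIT');
$db->close();
```

This keeps the locking problems discussed below out of your own code: the wrap-around logic lives in a single atomic UPDATE.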

28.11.2016 / 05:11
4

According to what you have described, there will be hundreds of concurrent connections. Each connection increments a value, and when it reaches 5, it returns to 1.

With a .txt file this can be a problem, because you will have to create safeguards to prevent inconsistencies.

The logical approach is to lock editing of, or access to, the file while it is open by another user:

$f = fopen('fit.txt', 'c+');           // 'c+' opens for read/write without truncating ('a' only appends)
if ($f && flock($f, LOCK_EX | LOCK_NB)) {
    $n = (int) fread($f, 4);           // read from $f (not $fp) and cast the bytes to int
    $n = ($n == 5) ? 1 : $n + 1;       // wrap back to 1 after 5
    rewind($f);                        // overwrite in place instead of appending
    ftruncate($f, 0);
    fwrite($f, (string) $n);
    fflush($f);
    flock($f, LOCK_UN);
}
if ($f) {
    fclose($f);
}

With a database this operation is safer; however, it obviously comes at a much higher processing cost.
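To illustrate why the database route is safer (a hypothetical sketch, assuming PDO with MySQL and a one-row `counter` table; the DSN and credentials are placeholders): the whole increment-and-wrap can be pushed into a single atomic UPDATE, so no explicit locking is needed in the PHP code at all:

```php
<?php
// Hypothetical sketch: assumes a MySQL table `counter` with one row (n INT).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One atomic statement: MySQL row-locks the row for the duration of the
// UPDATE, so concurrent requests queue on the server instead of racing.
$pdo->exec('UPDATE counter SET n = IF(n >= 5, 1, n + 1)');

$n = (int) $pdo->query('SELECT n FROM counter')->fetchColumn();
```

The cost per request is higher than touching a text file, but the consistency problem disappears: the server, not your script, serializes the writers.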

Before thinking about performance, think about consistency. Once the routine is safe and you are sure there will be no failures, you move on to the "next stage", which is optimization.

In the example above with flock(), the process is "super fast", but some failure may still occur: something unexpected where the lock takes too long to be released to the next user.

Now imagine a scenario where 200 users connect at the exact same time.

The first will be the "lucky one": it reads and writes the number and releases the lock to the second, then the third, then the fourth. But will the last one in the queue still be able to read and write the value correctly, or will it get an error because of the long wait?

Consider a system with hundreds of simultaneous accesses: say it receives 150 connections in a single second, another 200 two seconds later, and another 100 two seconds after that. Within 5 seconds you already have 450 connections queuing to read and write this txt, and the system may abort execution somewhere around connection 200 due to the long wait.

It may be worth rethinking the business logic.

If you do not have such a large number of simultaneous connections, then a simple flock(), as in the example, can do the job and still be a more viable option than a database in terms of performance.

28.11.2016 / 08:01