This can have several causes, including the way the database itself was organized or even the hardware. Running a test on a table containing 239,289,977 rows and more than 46 GB, SELECT * FROM tabela ORDER BY RAND() LIMIT 1 took exactly 00:02:07. That is a long time, obviously, but that number of records is much higher than 2000; with 2000 rows I believe the time would be about as fast as a conventional SELECT.
ORDER BY RAND() can contribute to the slowness, since it creates a temporary table. Temporary tables are saved to disk (SSD/HDD), and this will destroy any performance. (I do not know if this has been changed in recent versions of MySQL!) You can force MySQL to use RAM instead, as long as you have enough RAM, by using SQL_SMALL_RESULT:
SELECT SQL_SMALL_RESULT * FROM tabela WHERE vip = 0 ORDER BY RAND() LIMIT 1
This is also limited by tmp_table_size, set in my.cnf. In general, though, this will not change the situation much.
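For reference, a minimal sketch of how those limits could be inspected and raised for the current session (assuming MySQL, where the in-memory cap for internal temporary tables is the smaller of tmp_table_size and max_heap_table_size):

-- Check the current limits; the effective in-memory cap is the smaller of the two
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- Raise both for this session only, e.g. to 256 MB
SET SESSION tmp_table_size = 268435456;
SET SESSION max_heap_table_size = 268435456;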
The indexes will be ignored in the temporary table; that is, the only index that can still be used is the one serving the WHERE clause, and nothing more.
A better solution requires you to have sequential data: all 2000 values should go from 0 to 1999, with no gaps, and all of them selectable (which is not your case).
Then it is enough to do:
$maxsql = mysqli_query($conn, 'SELECT id FROM tabela ORDER BY id DESC LIMIT 1');
list($max) = mysqli_fetch_row($maxsql);
$random = random_int(0, $max);
$sql = 'SELECT * FROM tabela WHERE vip = 0 AND id >= ' . $random . ' LIMIT 1';
//...
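As a minimal sketch, assuming the $conn connection and the $sql string from the snippet above (error handling omitted), the rest could look like:

$result = mysqli_query($conn, $sql);
$row = mysqli_fetch_assoc($result); // one pseudo-random row, or NULL if every id >= $random is vip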
However, in your case you have both vip = 1 and vip = 0, which makes this unfeasible, because the selection becomes non-uniform. Note that if id = 999 is vip = 1, it cannot be selected in the situation above, and this will cause id = 1000 to be selected with both $random = 999 and $random = 1000.
This can be solved by creating two distinct tables, a tabela_vip and a tabela_normal.
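A hedged sketch of that split (the column names nome and valor are hypothetical; inserting without the original id lets AUTO_INCREMENT assign new, gap-free ids, which the random_int() trick above needs):

CREATE TABLE tabela_vip LIKE tabela;
CREATE TABLE tabela_normal LIKE tabela;

-- copy everything except the id so each table gets its own gap-free sequence
INSERT INTO tabela_vip (nome, valor) SELECT nome, valor FROM tabela WHERE vip = 1;
INSERT INTO tabela_normal (nome, valor) SELECT nome, valor FROM tabela WHERE vip = 0;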
In addition, in the end, you can also pre-compute data that is costly. If you use a LIMIT 1 but run it many times, it would be better to generate a LIMIT 50 once and store the result. The cost of a plain SELECT is much lower, in your case, than the cost of ORDER BY RAND(), so reading previously generated data is much faster. This would drastically reduce the number of requests. Again, this is in the architecture itself, how you built it.
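A minimal sketch of that idea, assuming the same mysqli connection; here the batch of 50 is kept in a JSON file (cache.json is a hypothetical path) and one row is consumed per request:

$cacheFile = __DIR__ . '/cache.json'; // hypothetical cache location
$rows = is_file($cacheFile) ? json_decode(file_get_contents($cacheFile), true) : [];

if (empty($rows)) {
    // Pay the ORDER BY RAND() cost once for 50 rows instead of once per request
    $result = mysqli_query($conn, 'SELECT * FROM tabela WHERE vip = 0 ORDER BY RAND() LIMIT 50');
    while ($linha = mysqli_fetch_assoc($result)) {
        $rows[] = $linha;
    }
}

$row = array_pop($rows); // serve one pre-computed row
file_put_contents($cacheFile, json_encode($rows)); // persist the remainder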