How to compare relative execution time of fast algorithms?


@Sack asked the following performance questions:

So far, he has got two answers to the first question (@Maniero's answer and @Isac's answer, which has been deleted) and one for the second question (my answer).

In @Maniero's and @Isac's answers, I noticed that, besides the execution time of the algorithm itself, they were also measuring the execution and maintenance time of the loops. As for my answer, I know there is a measurement problem due to the lambda call.

In normal situations, where the time t_i of the algorithm being tested is much greater than the loop-maintenance time l_i (t_i >> l_i), the impact of l_i tends to be negligible. But in the case in question, the times are possibly comparable (l_i ≈ t_i).
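To make the comparison concrete, here is a minimal, hypothetical sketch (not code from any of the cited answers) that times an empty loop against a loop containing a trivially fast operation, approximating l_i and l_i + t_i with `System.nanoTime`:

```java
// Hypothetical sketch: comparing loop-maintenance time (l_i) with the
// time of a very fast operation (t_i). Note that even this measurement
// is unreliable: the JIT may remove the empty loop entirely.
public class LoopOverhead {
    public static void main(String[] args) {
        final int n = 10_000_000;

        // Empty loop: measures mostly loop maintenance (l_i).
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) { /* nothing */ }
        long emptyLoop = System.nanoTime() - start;

        // Loop with a trivially fast operation: measures roughly l_i + t_i.
        long sum = 0;
        start = System.nanoTime();
        for (int i = 0; i < n; i++) { sum += i; }
        long fullLoop = System.nanoTime() - start;

        System.out.println("empty loop (ns): " + emptyLoop);
        System.out.println("loop + op  (ns): " + fullLoop);
        System.out.println("sum = " + sum); // consume sum so it is not dead code
    }
}
```

When t_i is this small, the two measurements often come out in the same order of magnitude, which is exactly the l_i ≈ t_i situation described above.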


asked by anonymous 27.11.2017 / 13:35

1 answer


The only way to eliminate the time consumed by the loop is to remove the loop itself and execute only the desired code. The problem with this is that the tested algorithm is usually so fast that it is difficult to measure its execution time accurately.
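A hypothetical sketch of that difficulty: timing a single call to a tiny method (the method name `square` is made up for illustration) often produces a value dominated by timer granularity, JIT state, and noise rather than by the algorithm itself:

```java
// Hypothetical sketch: timing one single execution of a very fast
// operation. The elapsed value varies wildly between runs and says
// little about the operation's true cost.
public class SingleCallTiming {
    static int square(int x) { return x * x; }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int r = square(12345);
        long elapsed = System.nanoTime() - start;

        System.out.println("r = " + r);
        System.out.println("single call (ns): " + elapsed);
    }
}
```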

Since a performance test is normally intended to compare different algorithms, and a loop is used in both tests, its interference becomes negligible. There are several other things that interfere far more in the final result than the loop.

Testing the performance of small code snippets is extremely complicated, as several factors interfere with the final result. The JVM, the OS, and the hardware can all perform optimizations that are only possible when the code snippet is tested in isolation, but not when that snippet is part of a larger system. Other software running in the same environment at the time of the test may also interfere with performance.

In the answer from one of the links you posted, for example, because the resultado variable is never used, only incremented, it is quite possible that the JVM would treat this as dead code, optimize it away, and not execute that part of the code.
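A common way to guard against this, sketched below under assumed names (`work`, `sink` are illustrative, not from the linked answer), is to make sure the benchmark's result is actually consumed, for example through a `volatile` field:

```java
// Hypothetical sketch: preventing dead-code elimination by consuming
// the benchmark result, instead of leaving a variable that is only
// incremented and never read (which the JIT may optimize away).
public class AvoidDeadCode {
    // Writing to a volatile field forces the JIT to keep the computation.
    static volatile long sink;

    static long work(int n) {
        long resultado = 0;
        for (int i = 0; i < n; i++) {
            resultado += i; // without a later read, this could be eliminated
        }
        return resultado;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        sink = work(1_000_000); // the result is actually used
        long elapsed = System.nanoTime() - start;

        System.out.println("resultado = " + sink);
        System.out.println("elapsed (ns): " + elapsed);
    }
}
```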

Another problem that must be taken into account is the lack of JVM warm-up. Because the JVM loads some classes lazily, the loading of certain classes can occur within the interval where the time measurement takes place, interfering with the final result.
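The usual manual workaround, sketched here with a made-up `algorithm` method, is to run the code many times before the measured interval, so that class loading and JIT compilation have already happened:

```java
// Hypothetical sketch: running warm-up iterations so class loading and
// JIT compilation happen before the measured interval.
public class WarmupExample {
    static long algorithm(int n) {
        long acc = 0;
        for (int i = 1; i <= n; i++) acc += i % 7;
        return acc;
    }

    public static void main(String[] args) {
        // Warm-up: trigger class loading and JIT compilation first.
        long warm = 0;
        for (int i = 0; i < 10_000; i++) warm += algorithm(1_000);

        // Measured run, after the JVM is "hot".
        long start = System.nanoTime();
        long result = algorithm(1_000);
        long elapsed = System.nanoTime() - start;

        System.out.println("result = " + result);
        System.out.println("elapsed (ns): " + elapsed);
        System.out.println("warm = " + warm); // consume warm-up results
    }
}
```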

In addition, the code does not necessarily run in the order in which it is written. Therefore, there is no guarantee that the execution of the algorithm occurs between the time measurements.

To escape these and other pitfalls of microbenchmarking, you need a thorough understanding of the JVM implementation on which the code runs. Fortunately, there are tools that take these and other issues into account and eliminate, or at least significantly reduce, such impacts on the final result.

Two of them are:

27.11.2017 / 15:40