How can I measure the runtime of a function in Python?
In C#, I use the Stopwatch class like this:
var sw = new Stopwatch();
sw.Start();
AlgumaFuncao();
sw.Stop();
WriteLine(sw.ElapsedTicks);
You can use time.time():
import time
inicio = time.time()
funcao()
fim = time.time()
print(fim - inicio)
Basically, you record the time before and after running the code and then take the difference.
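If you want a finer-grained wall-clock measurement, the same pattern works with time.perf_counter(), which is a monotonic, high-resolution clock and is not affected by system clock adjustments. A minimal sketch of the same idea (funcao() stands for whatever function you want to measure):

import time

inicio = time.perf_counter()  # monotonic, high-resolution clock
funcao()                      # the function you want to measure
fim = time.perf_counter()
print('duracao: %f segundos' % (fim - inicio))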
There are several options for measuring time in Python, such as the time module, which provides both absolute values (system time) and relative values (counters between one call and another).
However, since Python 2.3 there is the timeit module, which according to the documentation:
This module provides a simple way to time small bits of Python code. It has both a Command-Line Interface as well as a callable one. It avoids a number of common traps for measuring execution times.
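The command-line interface mentioned in the quote is convenient for timing one-liners without writing a script. For example, a call like the one below (the expression being timed is just illustrative) runs the snippet many times and reports the best time per loop:

python -m timeit "'-'.join(str(n) for n in range(100))"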
The function to use from this module would be default_timer(), which according to the documentation:
Defines the default timer, in a platform-specific manner. On Windows, time.clock() has microsecond granularity, but time.time()'s granularity is 1/60th of a second. On Unix, time.clock() has 1/100th of a second granularity, and time.time() is much more precise. On either platform, default_timer() measures wall clock time, not the CPU time. This means that other processes running on the same computer may interfere with the timing.
In Python 2, timeit.default_timer() chooses whichever of time.time() and time.clock() works best for your operating system; in Python 3 it is always time.perf_counter(). Therefore, timeit.default_timer() is the most robust option across Python versions and operating systems.
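You can confirm which timer your interpreter uses; on Python 3.3 or later the check below should print True, since default_timer is just an alias for time.perf_counter:

import time
import timeit

# In Python 3, timeit.default_timer is an alias for time.perf_counter
print(timeit.default_timer is time.perf_counter)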
Example of how the measurement would look using this module:
import time
import timeit
def alguma_funcao():
    # sleeps 4 times, roughly 4 seconds in total
    for i in range(1, 5):
        time.sleep(1)

inicio = timeit.default_timer()
alguma_funcao()
fim = timeit.default_timer()

print('duracao: %f' % (fim - inicio))
An example of a possible result of running the code above:
duracao: 4.002503
I believe the timeit function of the timeit module is the simplest and most robust way to measure the execution time of a piece of Python code. With it, you do not have to compute the time delta yourself, because that is exactly what it returns. In addition, you can control how many times the code is run, which lets you calculate an average runtime.
Here is an example of three functions that generate a list of 10 integers:
def f1():
    return list(range(10))

def f2():
    return [x for x in range(10)]

def f3():
    lst = []
    i = 0
    while i < 10:
        lst.append(i)
        i += 1
    return lst
The calculation:
from timeit import timeit
v1 = timeit('f1()', 'from __main__ import f1', number=100)
v2 = timeit('f2()', 'from __main__ import f2', number=100)
v3 = timeit('f3()', 'from __main__ import f3', number=100)
print(v1, v2, v3)
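Since timeit returns the total time for all number executions, dividing the result by number gives the average time per call. timeit.repeat does the same measurement several times and returns a list of totals, of which the minimum is usually the most stable value. A short sketch along those lines (the run counts are arbitrary, and f1 is repeated here so the snippet is self-contained):

from timeit import timeit, repeat

def f1():
    return list(range(10))

n = 100
total = timeit('f1()', 'from __main__ import f1', number=n)
print('media por chamada: %f segundos' % (total / n))

# repeat() returns one total per sample; the minimum tends to be the most representative
amostras = repeat('f1()', 'from __main__ import f1', number=n, repeat=5)
print('melhor amostra: %f segundos' % min(amostras))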