After a lot of research, reading the documentation of the relevant modules, and searching other Stack Overflow questions, I got my answer.
In Data Structures we perform Asymptotic Analysis of Algorithms, where the best case (for example O(1)) and the other cases (average and worst) are examined.
In fact, this is the best way to study the behavior of an algorithm: it describes how the work grows (for example, by the number of comparisons the algorithm makes) independently of any particular machine. This is the human way to compare algorithms.
Now, there are data structures whose lookups are O(1), or close to it: the hash structure in particular, because it uses a unique key instead of scouring an entire list. Either the key exists or it does not; there is no doubt.
In Python, the hash structure comes in two built-in types: dictionaries (dict) and sets (set). A lookup in these structures is O(1) on average, and this makes the algorithm very fast.
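To see this in practice, here is a minimal sketch contrasting a hash-based membership test with a linear scan over a list (the input size of 1,000,000 is just an arbitrary choice for illustration):

data = list(range(1_000_000))
as_set = set(data)          # hash structure built from the list

# Average-case O(1): hashes the key and jumps straight to its slot.
print(999_999 in as_set)    # True

# O(n): scans the list element by element until it finds a match.
print(999_999 in data)      # True, but far slower on large inputs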
In fact, the algorithm becomes so fast that a single run on a small input does not even register at the microsecond scale (I tested it in code and this was confirmed).
Now the clutter begins: the timestamps most operating systems expose have limited resolution and are not monotonic, so there is a risk of getting absurd values (a time measured after coming out smaller than a time measured before, for instance when the system clock is adjusted). Reliably capturing a single nanosecond-scale run this way is close to impossible, and the timing will also depend on whatever else the machine is doing at that moment.
So, sadly, the best way to evaluate an algorithm's execution cost is through asymptotic analysis.
Here is the code I tried for measuring in microseconds:
from datetime import datetime

def firstDuplicate(a):
    # Track values already seen in a dict (hash structure):
    # each membership test `x in dic` is O(1) on average.
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

antes = datetime.now()
print(firstDuplicate([1, 2, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]))
depois = datetime.now()

# Subtract the full datetimes, not just their .microsecond fields:
# the .microsecond attribute wraps around every second, which is one
# source of the "absurd" negative readings mentioned above.
print("Microseconds:")
print((depois - antes).total_seconds() * 1_000_000)
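If you really need wall-clock numbers, a better measurement sketch (assuming Python 3.7+ for time.perf_counter_ns; the repetition count of 100,000 is an arbitrary choice) uses a monotonic high-resolution counter and averages over many runs:

import timeit
from time import perf_counter_ns

def firstDuplicate(a):
    dic = {}
    for x in a:
        if x in dic:
            return x
        dic[x] = 1
    return -1

sample = [1, 2, 2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]

# perf_counter_ns() is monotonic: a later reading is never smaller
# than an earlier one, unlike the wall clock behind datetime.now().
start = perf_counter_ns()
firstDuplicate(sample)
print("One run (ns):", perf_counter_ns() - start)

# Averaging over many runs with timeit smooths out machine noise.
total = timeit.timeit(lambda: firstDuplicate(sample), number=100_000)
print("Average per run (ns):", total / 100_000 * 1e9)

Even then, the numbers fluctuate with system load, which is exactly why asymptotic analysis remains the more dependable way to compare algorithms.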