You are on the right track in trying not to recalculate a recursive function several times when its result does not change, but the easiest and most efficient way is not to use dictionaries; it is to use lru_cache.
lru_cache is a decorator that automatically saves the results of the function it decorates in a cache. That is, the first time you call the function, since the result has not been saved yet, it has to be computed. The function then returns the result and stores it in the cache, an internal memory. From the second call onwards with the same arguments, since the result is already saved, it is returned immediately without recomputing the function.
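To see this in isolation, here is a minimal sketch (the function name dobro and the values are just illustrative) showing that the body of a cached function only runs on a cache miss:

from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize=None keeps every result, with no eviction
def dobro(n):
    print('computing dobro({})'.format(n))  # only printed on a cache miss
    return n * 2

print(dobro(21))  # prints "computing dobro(21)" and then 42
print(dobro(21))  # prints only 42: the saved result is returned directly
print(dobro.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)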
The effect of using the cache is quite dramatic. See the comparison below of computing fib(0) through fib(39), on my machine, with exactly the same function, only adding or removing lru_cache:
from functools import lru_cache
import time

@lru_cache(maxsize=64)
def fib_com_cache(n):
    if n < 2:
        return n
    return fib_com_cache(n-1) + fib_com_cache(n-2)

def fib_sem_cache(n):
    if n < 2:
        return n
    return fib_sem_cache(n-1) + fib_sem_cache(n-2)

def fib_ate_40(funcao):
    t1 = time.perf_counter()
    for j in range(40):
        funcao(j)
    return time.perf_counter() - t1

# First, run fib up to 40 without the cache
print('Execution time without cache: {:.6f} seconds'.format(fib_ate_40(fib_sem_cache)))

# Then, with the cache
print('Execution time with cache: {:.6f} seconds'.format(fib_ate_40(fib_com_cache)))
Result:
Execution time without cache: 71.009936 seconds
Execution time with cache: 0.000053 seconds
Just adding the @lru_cache decorator gives a tremendous performance boost. This is useful for any function whose result does not change when it is called with the same arguments (that is, it does not depend on external state), and that is called many times with the same arguments and/or is expensive to compute.
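As an illustration of why external state is a problem, here is a hypothetical sketch (the names taxa and preco_com_taxa are made up for this example) where a cached function reads a global variable and ends up returning a stale value:

from functools import lru_cache

taxa = 10  # external state the function depends on (hypothetical)

@lru_cache(maxsize=None)
def preco_com_taxa(preco):
    # The result depends on the global 'taxa', so a cached value can go stale
    return preco + taxa

print(preco_com_taxa(100))  # 110, computed and cached
taxa = 20                   # the external state changes...
print(preco_com_taxa(100))  # still 110: the stale cached result is returned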
Saving results consumes memory, so you can specify the maximum number of results to keep (that is the maxsize parameter). When the number of stored values exceeds the given maxsize (128 by default), the least recently used entry is discarded (hence the name LRU: Least Recently Used).
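To make the eviction concrete, here is a small sketch (quadrado is just an illustrative name) with maxsize=2, where the third distinct argument pushes the oldest entry out of the cache:

from functools import lru_cache

@lru_cache(maxsize=2)  # keep at most 2 results
def quadrado(n):
    print('computing quadrado({})'.format(n))
    return n * n

quadrado(1)  # miss: computed and stored
quadrado(2)  # miss: computed and stored (the cache is now full)
quadrado(3)  # miss: quadrado(1) is the least recently used, so it is discarded
quadrado(1)  # miss again: it was discarded, so it has to be recomputed
print(quadrado.cache_info())  # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)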
If your function depends on external state that rarely changes, or you need to discard the stored values for some other reason, just call cache_clear:
fib_com_cache.cache_clear()
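You can also inspect the cache with cache_info, which reports hits, misses and the current size; after cache_clear everything is reset (the exact numbers in the first line depend on which calls were made before):

fib_com_cache(30)
print(fib_com_cache.cache_info())  # e.g. CacheInfo(hits=..., misses=..., maxsize=64, currsize=...)

fib_com_cache.cache_clear()        # empties the cache and resets the statistics
print(fib_com_cache.cache_info())  # CacheInfo(hits=0, misses=0, maxsize=64, currsize=0)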