Before simply giving the complexity of your example, let's first understand how it works and how to calculate it. So let's go!
To calculate the complexity of a method or algorithm we use Big O notation, also called asymptotic complexity, which by definition is:
"Let f(n) and g(n) be functions mapping integers (input size) to reals (running time). We say that f(n) is O(g(n)) if there is a real constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for every integer n ≥ n0."
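To make the definition concrete, here is a small numeric check (my own illustration, with constants I chose myself, not part of the original answer): f(n) = 5n² + 3n is O(n²), witnessed by c = 6 and n0 = 3, since 3n ≤ n² for every n ≥ 3.

```java
// Numeric check of the Big-O definition for f(n) = 5n^2 + 3n and g(n) = n^2,
// with the witness constants c = 6 and n0 = 3 (illustrative values).
public class BigODefinitionDemo {
    static long f(long n)  { return 5 * n * n + 3 * n; }
    static long cg(long n) { return 6 * n * n; }   // c * g(n) with c = 6

    // Does f(n) <= c*g(n) hold at this particular n?
    public static boolean holdsFor(long n) {
        return f(n) <= cg(n);
    }

    public static void main(String[] args) {
        // The inequality holds for every n >= n0 = 3 (checked here up to 1000)...
        for (long n = 3; n <= 1000; n++) {
            if (!holdsFor(n)) throw new AssertionError("fails at n = " + n);
        }
        // ...but not below n0: f(2) = 26 > 24 = 6 * 2^2, which is why the
        // definition only requires the bound from some n0 onward.
        System.out.println("f(n) <= 6n^2 for all 3 <= n <= 1000");
    }
}
```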
Within asymptotic complexity we have several types of functions. Below they are ordered from best (constant) to worst (exponential):
- Constant → 1
- Logarithmic → log n
- Linear → n
- n-log-n → n log n
- Quadratic → n²
- Cubic → n³
- Polynomial → n^k
- Exponential → a^n
In the image you can see the growth rates of the functions:
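To make a few of those classes concrete, here is a hedged sketch of my own (not from the original answer) with one method per common class:

```java
import java.util.Arrays;

public class GrowthRatesDemo {
    // O(1): constant time, independent of the array length.
    public static int first(int[] data) {
        return data[0];
    }

    // O(log n): binary search halves the search range at every step.
    public static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;  // not found
    }

    // O(n): a single pass over the array.
    public static long sum(int[] data) {
        long total = 0;
        for (int x : data) total += x;
        return total;
    }

    // O(n log n) on average: Arrays.sort on primitives uses a
    // dual-pivot quicksort.
    public static int[] sorted(int[] data) {
        int[] copy = data.clone();
        Arrays.sort(copy);
        return copy;
    }
}
```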
Comments:
- It is very important to understand the main types of complexity functions, because it is on their basis that we will define the complexity of our algorithms. But that's enough theory; let's get down to business.
- The notation O determines that a function f(n) is "less than or equal to" another function g(n), up to a constant factor, as n grows to infinity.
- An algorithm A with complexity O(n²) will never have a running time greater than c·n² (for some fixed constant c), for an input of size n, even in the worst case.
Rules for defining asymptotic complexity:
Polynomial function: always consider the highest degree.
- 5n^4 + 3n³ + 2n² + 4n + 1 is O(n^4).
Constants and multipliers are eliminated.
- 4n³ is O(n³) → the constant does not matter; as n grows, the n³ term dominates regardless of the multiplier.
Mixed function: always consider the term of greatest complexity.
- 5n² + 3n log n + 2n + 5 is O(n²). In this case n² grows much faster than n log n, so we drop the lower-order terms.
Always prefer the simpler representation.
- 4n² + 2 log n is O(n²), which is simpler than (and equivalent to) O(n² + log n).
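The "keep the dominant term" rule can be seen in code. Below is a sketch of my own (not from the original answer): a method that performs exactly n² + n + 1 basic steps, which we classify as O(n²) because the quadratic part dominates as n grows.

```java
public class DominantTermDemo {
    // Counts the basic steps performed: n^2 (nested loop) + n (single loop)
    // + 1 (constant step). Asymptotically this is O(n^2).
    public static long countSteps(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++)       // quadratic part:
            for (int j = 0; j < n; j++)   // n * n iterations
                steps++;
        for (int i = 0; i < n; i++)       // linear part: n iterations
            steps++;
        steps++;                          // constant part: 1 step
        return steps;                     // exactly n^2 + n + 1
    }
}
```

For n = 10 the count is 111, of which 100 come from the nested loop; by n = 1000 the quadratic part contributes 1,000,000 of the 1,001,001 steps, which is why only n² survives in the Big O classification.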
With the rules understood, we can look at some examples (including yours).
Example 1:
public static int sumNumbers(int n1, int n2) {
int result = n1 + n2;
return result;
}
In example 1 every operation in the method is constant-time, so it has constant complexity, O(1).
Example 2:
// Returns true if there is no duplicate element in the array.
public static boolean unique1(int[] data) {
int n = data.length;
for (int j = 0; j < n - 1; j++)
for (int k = j + 1; k < n; k++)
if (data[j] == data[k])
return false;
return true;
}
For example 2 we have a complexity of O(n²): in the worst case the nested loops compare every pair of elements, about n·(n − 1)/2 comparisons.
Note: A hint: whenever you have a for loop in your method, it runs n times, so you already have O(n) complexity. If you have a for loop nested inside another, you have O(n²), because you are running n · n iterations. However, always watch for the details; this is not guaranteed, and each method is its own case.
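As an illustration that the nested-loop pattern is not the only option, here is a common alternative for the duplicate check (a sketch of my own, not part of the original answer): sorting first brings duplicates next to each other, so one linear scan suffices, for a total of O(n log n) instead of O(n²).

```java
import java.util.Arrays;

public class UniqueSorted {
    // Returns true if there is no duplicate element in the array.
    // Cost: O(n log n) for the sort + O(n) for the scan = O(n log n).
    public static boolean unique2(int[] data) {
        int[] temp = data.clone();     // copy, so the caller's array is untouched
        Arrays.sort(temp);             // O(n log n)
        for (int j = 0; j < temp.length - 1; j++)  // O(n) scan of neighbors
            if (temp[j] == temp[j + 1])
                return false;          // duplicates are now adjacent
        return true;
    }
}
```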
For your exercise we have O(n): a single for loop running n times, with each iteration performing constant-time operations.
Note: If you are in doubt about the Math.pow operation, know that it has complexity O(1).
To build this answer I relied on Goodrich and Tamassia's book, Data Structures & Algorithms in Java. I recommend reading it.