Question about the complexity of algorithms

1

I have a question about the complexity of algorithms. As you can see in the image below, the expressions are equivalent, that is, they have the same degree and the same dominant term (n²), so why are the results different?

    
asked by anonymous 17.03.2017 / 16:43

2 answers

1

In fact, it is not completely wrong. Everything I am going to write here is taken from the book by Cormen et al. (2002) and the accompanying video lectures.

To begin with, Big-O notation gives only an "asymptotic upper bound". It is important to know this, because Theta (Θ) notation bounds a function asymptotically both above and below (that is, a "tight" asymptotic bound, or "fair" in some translations).

When you write f(n) = O(g(n)), you are simply saying that some constant multiple of g(n) is an asymptotic upper bound on f(n). The detail is that big-O notation says nothing about how tight that upper bound is.

That is, saying that a function f(n) is O(n²) means that for every input size n beyond some point, no matter how large you choose it, the running time on that input is bounded above by a constant multiple of n².
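As a sketch of that definition, the check below samples it numerically in Python. The helper name `is_upper_bound` and the witnesses `c` and `n0` are illustrative choices, not something from the answer, and a finite sample is of course not a proof:

```python
# Numeric sanity check of the big-O definition: f(n) <= c * g(n) for all n >= n0.
# The constants c and n0 are witnesses we pick by hand; they are not unique.

def is_upper_bound(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c * g(n) for every n in [n0, n_max] (a finite sample,
    enough to see the definition in action, but not a proof)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# A running time of exactly n^2 is O(n^2): take c = 1, n0 = 1.
print(is_upper_bound(lambda n: n**2, lambda n: n**2, c=1, n0=1))  # True
```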

In theory, therefore, a function f(n) = n² − 1 also has O(n³) as a (loose) upper bound. After all, since n³ is monotonically increasing for n > 0, there exist positive constants c and n₀ such that

0 ≤ n² − 1 ≤ c·n³ for all n ≥ n₀.

Try it yourself!
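Taking up that suggestion, here is a minimal numeric check in Python. The witnesses c = 1 and n₀ = 1 are one illustrative choice among many:

```python
# Check that f(n) = n^2 - 1 stays below c * n^3 from n0 onward.
# c = 1 and n0 = 1 are one valid pair of witnesses, not the only one.
c, n0 = 1, 1
ok = all(n**2 - 1 <= c * n**3 for n in range(n0, 10_001))
print(ok)  # True
```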

However, big-O notation is not normally used this way. Although O(g(n)) is (and indeed denotes) a "set of functions" that make the notation true — basically, in this case, any function bounded above by g times some constant factor c — in practice one always states the closest asymptotic bound.

Maybe that was confusing, but starting at 2:51 of this video: link, Erik Demaine explains precisely the mathematical "abuses" of big-O notation (the fact that the equals sign here is asymmetric is one of them!) and demonstrates how these bounds work.

In fact, he uses the following example: 2n² = O(n³), and explains that this actually means 2n² ∈ O(n³), that is, 2n² belongs to the "set" of functions asymptotically bounded by n³. Do you see?

That is, you CAN assert that g(n) = 3n³ + 2n² + n is O(n⁴), but this statement is much "weaker" or less precise (or rather, "asymptotically looser") than saying that g(n) is O(n³), you see? To prove, by the definition, that the g(n) above is O(n³), it is enough to show that 0 ≤ 3n³ + 2n² + n ≤ c·n³ for some constant c and all n ≥ n₀ (for example, c = 6 and n₀ = 1 work, since 2n² ≤ 2n³ and n ≤ n³ when n ≥ 1).
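To see the tighter bound concretely, here is a quick Python check that g(n) = 3n³ + 2n² + n is indeed O(n³); the witnesses c = 6 and n₀ = 1 are one illustrative choice:

```python
# g(n) = 3n^3 + 2n^2 + n is O(n^3): with c = 6 and n0 = 1,
# 3n^3 + 2n^2 + n <= 6n^3 for all n >= 1, because 2n^2 <= 2n^3 and n <= n^3.
g = lambda n: 3 * n**3 + 2 * n**2 + n
c, n0 = 6, 1
print(all(g(n) <= c * n**3 for n in range(n0, 10_001)))  # True
```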

I hope this helps you understand, at least a little.

Any questions, go in the comments!

    
02.06.2017 / 20:19
0

There's absolutely nothing wrong with it.

If f ∈ O(g) and g ∈ O(h) , then f ∈ O(h) .

That is, a function f(n) belongs to O(g(n)) if, for some constant c > 0 and all sufficiently large n, it holds that c·g(n) >= f(n).

Of course, for every n >= 1 it is true that n^3 >= n^2, so n^2 ∈ O(n^3).
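The transitivity claim above can also be spot-checked numerically. This is a Python sketch with illustrative witness constants; combining the two bounds multiplies the constants and takes the larger threshold:

```python
# If f in O(g) with witnesses (c1, n1) and g in O(h) with witnesses (c2, n2),
# then f in O(h) with witnesses (c1 * c2, max(n1, n2)).
f = lambda n: n**2   # f in O(g) for g(n) = n^2  (c1 = 1, n1 = 1)
h = lambda n: n**3   # g in O(h) for h(n) = n^3  (c2 = 1, n2 = 1)
c, n0 = 1 * 1, max(1, 1)
print(all(f(n) <= c * h(n) for n in range(n0, 10_001)))  # True
```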

    
05.11.2018 / 13:25