How to measure the time complexity of sorting algorithms?

2

Given a sorting algorithm, e.g. bubble sort, how do I measure its runtime?

For example, suppose the input array or list has 1000 elements. How long, in milliseconds, should I expect to "wait", given that the average-case complexity of this algorithm is O(n²)?

How do I relate a predicted time x, given the formula O(n²), to a measured time y obtained by actually running the algorithm, for the input size above?

asked by anonymous 30.09.2014 / 04:57

1 answer

4

You are trying to draw a correlation between theory and practice, and as the saying goes: in theory, theory and practice are the same; in practice, they are not.

Theory: Using O() notation you study the behavior of the algorithm, i.e. how the cost grows with the size of the input. Note that O() notation does not give you a function with well-defined parameters (you cannot derive the constants, the "K"s); it only tells you whether the growth is polynomial, exponential, and so on. Also note that this cost is measured not in time but in "operations", and even what counts as an operation depends on how you define it for your problem. For sorting algorithms, the operations counted are usually the swaps of items.

Remember that these algorithms commonly have a best case and a worst case (e.g. the list is already sorted vs. the list is sorted in descending order), and even that can affect your tests.
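To make this concrete, here is a small sketch (not from the answer itself) that instruments bubble sort to count swaps, the operation mentioned above, and compares the best case (already sorted, zero swaps) with the worst case (reverse-sorted, n·(n−1)/2 swaps):

```python
def count_swaps(data):
    """Bubble sort a copy of `data`, returning the number of swaps performed."""
    items = list(data)
    swaps = 0
    n = len(items)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swaps += 1
    return swaps

n = 1000
print(count_swaps(range(n)))         # best case (sorted): 0 swaps
print(count_swaps(range(n, 0, -1)))  # worst case (reversed): 1000*999/2 = 499500 swaps
```

Note that the best case does zero swaps yet still performs roughly n²/2 comparisons, which is why the choice of what to count as an "operation" matters.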

Practice: How long a program takes to run depends on many factors: hardware factors (CPU, memory, etc.), system factors (which operating system, etc.), technology factors (which language/compiler is used, etc.), and miscellaneous factors such as the load from other software running at the same time, like antivirus or firewall programs.

To check this, you run your program and measure how long it takes to finish processing a specific data set. If you plot a graph of these results for data sets of different sizes, in principle the curve will more or less match the one predicted by theory (and you might even risk estimating the constants, the "K"s). Of course, a number of factors will keep this from being that simple, such as the program using a lot of memory and the OS starting to page. In addition, your program performs all sorts of operations, including OS calls and other details, that are not accounted for in the model.
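A rough sketch of that "laboratory" measurement: time bubble sort on random inputs of growing size and estimate the hidden constant K by assuming t ≈ K·n². The absolute numbers are machine-specific; only the method is the point.

```python
import random
import time

def bubble_sort(items):
    """Sort `items` in place with bubble sort."""
    n = len(items)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]

for n in (250, 500, 1000):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    bubble_sort(data)
    elapsed = time.perf_counter() - start
    # If the O(n^2) model holds, K = t / n^2 should stay roughly constant.
    print(f"n={n:5d}  t={elapsed * 1000:8.2f} ms  K=t/n^2={elapsed / n**2:.3e}")
```

If K comes out roughly stable across sizes, you have an empirical fit for that one machine and environment; doubling n should then roughly quadruple the time.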

Summary: There is no direct correlation between the theoretical cost of an algorithm calculated with big O and a time in ms per operation. You can try to find a correlation for a particular scenario, but only under "laboratory conditions" and for a very specific environment.

28.10.2014 / 14:12