If I have, for example, some sorting algorithms (Merge sort, Quick sort, Bubble sort, ...), in which way(s) can I find out the efficiency of each one?
First, an algorithm by itself is neither "efficient" nor "inefficient"; it only has a number of steps as a function of the size of its inputs. It is the implementation of that algorithm that has an efficiency, whether in memory, in time, in energy consumption, etc. What you can do is evaluate the algorithm under several metrics and then try to correlate those metrics with the corresponding computational cost (e.g. the more steps the algorithm takes, the more instructions the CPU will have to execute).
The main metric is - as already mentioned - the number of steps the algorithm performs in relation to the "size" of its input(s). To understand this, I recommend reading the question "What is the complexity of an algorithm?". This analysis is a first step in determining efficiency, but it is far from a complete analysis (for example, QuickSort - worst case O(n^2) - usually performs better in the average case than several algorithms whose worst case is O(n*log(n))).
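A minimal sketch of this "count the steps" idea, in Python (the function name and the choice of bubble sort are only illustrative, not part of this answer): instrument a simple sort to count comparisons and watch how the count grows as the input size doubles.

```python
import random

def bubble_sort_comparisons(data):
    """Sort a copy of the input with bubble sort and return the number of comparisons."""
    a = list(data)
    comparisons = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

for n in (100, 200, 400, 800):
    sample = [random.random() for _ in range(n)]
    print(n, bubble_sort_comparisons(sample))
    # the count roughly quadruples every time n doubles -> quadratic growth
```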
Another metric is locality of reference - especially when the volume of data is high, the way the algorithm accesses the resources it needs (e.g. both main and secondary memory) has a significant influence on performance. If an algorithm has an access pattern that fits well with the architecture in use (e.g. an algorithm that accesses data close together, on an architecture that makes heavy use of caches), its implementation tends to be more efficient than one with an irregular access pattern, even if the number of steps is the same.
(Another example of the above is algorithms that work best when the input is sorted; often sorting the input first and then running the algorithm ends up performing better than running the algorithm directly - the extra cost of sorting is offset by the lower cost of the algorithm, improving the overall performance.)
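A rough sketch of how locality of reference can show up in practice, assuming Python with NumPy available (nothing here is prescribed by this answer): both loops add up exactly the same numbers; only the memory access pattern differs.

```python
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)   # stored row by row in memory (C order)

start = time.perf_counter()
sum(row.sum() for row in a)       # sequential access, cache-friendly
t_rows = time.perf_counter() - start

start = time.perf_counter()
sum(col.sum() for col in a.T)     # strided access, cache-unfriendly
t_cols = time.perf_counter() - start

print(f"row-wise sums:    {t_rows:.3f} s")
print(f"column-wise sums: {t_cols:.3f} s")
```

The exact numbers depend on the machine, but on a typical CPU the column-wise version is measurably slower even though the number of additions is identical.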
There are a number of other factors to consider, which are tied to the implementation of the algorithm rather than to the algorithm itself - its branching patterns, whether or not it is thread-safe, etc. Some of them can be evaluated a priori, others only a posteriori ("we implemented it, it turned out slow, now let's find out why").
Finally, you really have to implement it and measure the actual efficiency of that implementation in a given environment (e.g. via profiling, as suggested by Shura16). Always remember to use statistics correctly in order to determine under which circumstances the algorithm behaves one way or another (e.g. "if everything else stays the same, but the system has more memory, what happens?"), to find out why it is slow ("ah, it is slow because the OS keeps swapping") and to determine whether the fault lies with the algorithm or with the implementation ("ah, the algorithm uses a stack, but my stack implementation has a problem").
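For illustration, a minimal sketch of "measure and keep statistics" using Python's standard timeit and statistics modules; the call being measured (the built-in sorted) is only an example workload.

```python
import random
import statistics
import timeit

data = [random.random() for _ in range(10_000)]

# Repeat the measurement several times: a single run can be misleading
# because of caches, the scheduler, swapping, etc.
times = timeit.repeat(lambda: sorted(data), repeat=10, number=5)

print(f"best:  {min(times):.4f} s")
print(f"mean:  {statistics.mean(times):.4f} s")
print(f"stdev: {statistics.stdev(times):.4f} s")
```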
There are two main ways of quantifying the efficiency of an algorithm: the empirical method and the analytical method.
These methods may vary depending on the aspect of efficiency you want to measure. You can evaluate the amount of memory the algorithms use or how intensively they use the processor, but from the way you asked the question, you probably want to evaluate the algorithm's running time (the most commonly measured aspect).
Empirical method
Measure the code's execution time in practice. There are several ways to take this measurement; some of them depend on the programming language you are using.
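A minimal sketch of the empirical method in Python, assuming we compare a hand-written bubble sort against the built-in sorted (both chosen here only for illustration):

```python
import random
import time

def bubble_sort(data):
    a = list(data)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = [random.random() for _ in range(5_000)]

for name, func in [("bubble sort", bubble_sort), ("built-in sort", sorted)]:
    start = time.perf_counter()
    func(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.4f} s")
```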
Analytical Method
Represent the algorithm's execution time as an order of growth. This method is independent of the machine in use, the operating system and the number of programs running, because only the code itself is analyzed. See Big-O notation.
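As a quick worked example (using bubble sort from the question only as an illustration): in the worst case its inner loop performs

(n-1) + (n-2) + ... + 1 = n(n-1)/2

comparisons, and since n(n-1)/2 grows proportionally to n^2 for large n, bubble sort is O(n^2). Merge sort, by contrast, splits the input in half about log2(n) times and does O(n) merging work per level, so it is O(n*log(n)).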
I would then recommend using profiling tools. Each language has its own tools, and there are also generic ones.
Some IDEs have a built-in profiler; in every case they produce a report on the program's performance.
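As one concrete example of such a tool, a sketch using Python's built-in cProfile and pstats modules (the merge sort being profiled is just an example workload, not something from this answer):

```python
import cProfile
import pstats
import random

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

data = [random.random() for _ in range(20_000)]

profiler = cProfile.Profile()
profiler.enable()
merge_sort(data)
profiler.disable()

# Report which functions took the most cumulative time and how often
# they were called.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```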
Speedup calculation:

SU(p) = Ts / Tp(p)

where:
SU = speedup
Ts = serial (sequential) execution time
Tp(p) = parallel execution time with p processes
p = number of processes

Efficiency calculation:

E(p) = SU(p) / p

If SU > 1, the parallel version reduced the execution time (it was faster than the sequential one). If SU < 1, the parallel version increased the execution time (it was slower than the sequential one).
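A tiny sketch of the formulas above in Python; the serial and parallel times used here are made-up example values, not real measurements:

```python
def speedup(serial_time, parallel_time):
    # SU(p) = Ts / Tp(p)
    return serial_time / parallel_time

def efficiency(serial_time, parallel_time, processes):
    # E(p) = SU(p) / p
    return speedup(serial_time, parallel_time) / processes

Ts = 8.0   # serial execution time in seconds (example value)
Tp = 2.5   # parallel execution time with p processes (example value)
p = 4      # number of processes

print(f"SU(p) = {speedup(Ts, Tp):.2f}")        # > 1: parallel version is faster
print(f"E(p)  = {efficiency(Ts, Tp, p):.2f}")  # fraction of ideal linear speedup
```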