I was looking for problems known as "embarrassingly parallel": problems in which there are no dependencies between tasks, so the work can be split up and executed in parallel.
Could you suggest some algorithms of this kind?
One algorithm of this kind is the rendering of three-dimensional scenes, in which each pixel is computed completely without depending on the rendering of any other pixel.
That is why modern GPUs keep expanding their number of shader processors, which are responsible for calculating the final color of each pixel, each one independently of the others.
An honorable mention in this area is Perlin noise, which is used to generate pseudo-random noise.
Another example is physics simulation, in which each particle has its state updated based on the previous state of the simulation. That is, within each step the particles do not interact with each other's new state, but only with the state of the particles from the previous step.
Because of this, many games use the GPU to run physics simulations with a much larger number of particles (provided, of course, that the GPU supports it).
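The double-buffer idea described above can be sketched in a few lines of Python (a minimal illustration with made-up particle data; each update reads only the previous snapshot, so every particle could be processed by a separate worker or GPU thread):

```python
def step(particles, dt=0.5):
    # Build the next state entirely from the previous one.
    # No particle reads another particle's *new* state, so the
    # updates are independent and their order does not matter.
    return [(x + vx * dt, vx) for (x, vx) in particles]

prev = [(0.0, 1.0), (4.0, -2.0)]  # (position, velocity) pairs
nxt = step(prev)
print(nxt)  # [(0.5, 1.0), (3.0, -2.0)]
```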
Problems in the "divide and conquer" category are not very parallel when the recursion tree is small, but as the amount of data grows, the solution becomes more and more parallel.
Examples:
Summing all the numbers in a giant list: you can subdivide the problem by splitting the giant list into smaller lists and delegating each partial sum to a different processor.
Finding all the moves in a game of chess (you will have to set a maximum search depth, since it is impossible to calculate all of them: the number of alternatives is absurdly large).
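The first example above can be sketched with Python's multiprocessing module (a minimal sketch; the function names and the choice of four workers are mine):

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    # Partial solution: each worker sums only its own slice.
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    # Split the giant list into roughly equal slices, sum each slice
    # in a separate process, then combine the partial results.
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with Pool(workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 1001))))  # 500500
```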
Some classic algorithmic problems that are relatively simple and parallelize well are:
Consider the following image with a square and a circle whose diameter is equal to the side of the square.
Now imagine that the circle is a target, and you randomly throw darts into the square. Now consider the formula below:

π ≈ 4 × (darts inside the circle / total darts thrown)

This means that if you take the number of darts that hit inside the circle, divide it by the total number of darts thrown, and multiply by four, you get an approximate value of Pi.
The theory says that the approximation improves as the number of darts grows. Therefore, a parallel program can randomly "throw darts" at the square, with each worker counting its own hits inside and outside the circle. At the end, simply add up the results and apply the formula.
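A minimal sketch of this Monte Carlo estimate (the names and the use of a unit quarter circle are my own; the hit counting is shown sequentially, but each `count_hits` call is independent and could run on its own process):

```python
import random

def count_hits(n_darts, seed=None):
    # Throw n_darts at the unit square [0,1) x [0,1) and count those
    # landing inside the quarter circle of radius 1. The hit ratio is
    # the same pi/4 as for a full circle inscribed in a square.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_darts):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def estimate_pi(n_darts, workers=4):
    # Each worker's count is independent; here they run one after the
    # other for brevity, but a real version would run them in parallel.
    per_worker = n_darts // workers
    total_hits = sum(count_hits(per_worker, seed=i) for i in range(workers))
    return 4.0 * total_hits / (per_worker * workers)

print(estimate_pi(400_000))  # roughly 3.14
```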
You want the approximate value of the integral of a function f(x) in the range a to b.
A very simple (and imprecise) way of solving this is to calculate the integral of a linear function that passes through the points f(a) and f(b).
See the example:
In blue you have the real function, and in red the linear function used to calculate the approximation.
Well, a single linear function will give a poor approximation. However, if we increase the number of points of the approximating function, the result will be more precise, right? The more points we calculate, the better.
So we can use a parallel program: divide the interval [a, b] by the number of threads or processes, and have each one calculate the partial integral of its own section.
At the end, just add the results.
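The scheme above can be sketched as follows (a minimal illustration; the function names, the trapezoid rule, and the worker count are my own choices, with the sub-intervals computed in separate processes):

```python
from concurrent.futures import ProcessPoolExecutor
import math

def trapezoid(f, a, b, steps=1000):
    # Approximate the integral of f on [a, b] with the trapezoid rule:
    # many short linear segments instead of a single one.
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

def parallel_integral(f, a, b, workers=4, steps_per_worker=1000):
    # Split [a, b] into equal sub-intervals. Each partial integral is
    # independent, so the pieces can be computed concurrently.
    width = (b - a) / workers
    lows = [a + i * width for i in range(workers)]
    highs = [a + (i + 1) * width for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(trapezoid, [f] * workers, lows, highs,
                         [steps_per_worker] * workers)
    # At the end, just add the partial results.
    return sum(parts)

if __name__ == "__main__":
    # Integral of sin(x) on [0, pi] is exactly 2.
    print(parallel_integral(math.sin, 0.0, math.pi))  # close to 2.0
```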