I have a program developed in C++ in Visual Studio that processes a huge amount of data. The program can work with either float or double data, and the choice is made with a single typedef:
typedef float real;   // or: typedef double real;
That is, the previous statement uses either float or double. However, I ran into a problem that I cannot explain. With the project's default floating-point model, Precise (/fp:precise), the program takes roughly twice as long to process the data with floats as with doubles. With the Fast floating-point model (/fp:fast), floats and doubles take roughly the same time, which does not surprise me. What I cannot understand is why, when the floating-point model is Precise, using floats is so much slower than using doubles (about twice the time). Thanks!
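To make the setup concrete, here is a minimal sketch of the kind of code I mean. The process function and the loop body are only illustrative, not my actual processing code; the point is that the only things that change between runs are the typedef and the /fp switch.

#include <vector>
#include <cstddef>

// The only line I change: pick float or double for the whole program.
typedef float real;            // or: typedef double real;

// Illustrative processing loop (not my real code): accumulates a simple
// expression over a large buffer so the arithmetic type dominates the time.
real process(const std::vector<real>& data)
{
    real acc = 0;
    for (std::size_t i = 0; i < data.size(); ++i)
        acc += data[i] * data[i] + real(0.5);
    return acc;
}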