What is the fastest, most memory-efficient type?

4

I'm writing a spaceship game in C, so I need to store a large number of projectiles in a vector (array).

Each projectile has at least a position and a velocity used in calculations, and I'm trying to figure out the best way to store them in memory for later use.

So, a few questions:

What is the fastest C type for calculations on modern processors? And which type causes the least trouble with struct alignment and wastes the least memory?

This includes the type variants (not just int vs. float, but also int8_t, uint_fast32_t, double, long double, etc.).

asked by anonymous 30.01.2014 / 01:06

4 answers

8

If you are storing this data in a contiguous memory region (e.g. an array), then using the smallest type that still "fits" your data will give the best performance. On modern processors the bottleneck is not the instructions but the cache: a cache miss in L1 "wastes" 10-40 cycles, in L2 more than 600. If you can reduce the number of misses by making your data smaller, that gain usually outweighs the cost of any extra instructions needed to work with the narrower type.
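
A minimal sketch of the idea (the 16-bit fields are a hypothetical choice, not something from the question): if the play field fits in 16 bits per axis, each projectile takes 8 bytes, so a 64-byte cache line holds 8 of them instead of the 4 you would get with 32-bit fields.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical compact layout: 8 bytes per projectile. */
    typedef struct {
        int16_t x, y;    /* position */
        int16_t vx, vy;  /* velocity */
    } Projectile;

    void update(Projectile *p, size_t n) {
        /* Sequential pass over a contiguous array: the access
           pattern the cache and the prefetcher like best. */
        for (size_t i = 0; i < n; i++) {
            p[i].x += p[i].vx;
            p[i].y += p[i].vy;
        }
    }

    int main(void) {
        Projectile ps[1000] = { { 0, 0, 1, 2 } };
        update(ps, 1000);
        printf("sizeof(Projectile) = %zu bytes\n", sizeof(Projectile));
        return 0;
    }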

30.01.2014 / 01:52
8

First and foremost, be careful not to do premature optimization: it is very easy to write a complex, "optimized" program that is actually slower than a simpler and more intuitive one. When in doubt, just use int, and worry about performance only when you identify a bottleneck in your tests.

But for a more complete answer: the most important things to keep in mind when choosing an integer type are the conversion rules when you go from one type to another, and what happens in case of overflow (values too large or too small).

  • For local variables, parameters, and return values, use int. This will compile down to the "default" integer type and arithmetic operations of your machine.

    This is true even if you are doing arithmetic on characters. Not only can you represent EOF without overflow, but you avoid generating a bunch of extra instructions to truncate the intermediate results back to 8 bits. For example, note how isalpha and the other functions of the standard library take int parameters instead of char.

  • unsigned numbers are useful if you want well-defined overflow behavior (wrap-around) or if you are working with bit masks. Other than that, avoid them, even if you know the value you care about is always positive. Wrap-around at -1 is a source of many headaches; for example, for(unsigned i=N; i>=0; i--) becomes an infinite loop (see the sketch after this list).

  • If you are storing a bunch of values in an array, the width of the integers becomes more important (an array of char takes much less space than an array of int) and the performance of the operations matters less (you will load these values into a local variable/register before doing any arithmetic on them anyway).

  • Avoid fixed-size types such as int8_t etc. unless you are doing something that requires an integer of exactly that size. Not only do these types force extra casts (see the earlier int vs. char example), but those casts can be more expensive if that size is not native to your architecture.

  • If your integer values can be very large, to the point where 32 vs. 64 bits makes a difference, you have to be more careful. There is more than one alternative (long long, a bignum library, a vector of smaller integers, etc.), and the best solution will depend on your problem, compiler, etc.
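
A minimal sketch of two points from the list above: int for character arithmetic, and the unsigned wrap-around pitfall (the loop below needs an artificial guard, otherwise it really would never terminate; most compilers also warn that i >= 0 is always true):

    #include <ctype.h>
    #include <stdio.h>

    int main(void) {
        /* int for character arithmetic: isalpha takes an int, and
           int locals avoid truncating intermediates back to 8 bits. */
        int c = 'a';
        printf("isalpha('a') = %d\n", isalpha(c) != 0);

        /* i >= 0 is always true for an unsigned i: decrementing 0
           wraps around to UINT_MAX instead of going negative. */
        int guard = 0;
        for (unsigned i = 3; i >= 0; i--) {
            printf("i = %u\n", i);
            if (++guard > 6) break;  /* would otherwise loop forever */
        }
        return 0;
    }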

30.01.2014 / 02:35
3

In this case, it is not a question of which is faster, but of which type makes more sense.

In general, working with integers will always be more efficient than working with characters, but from a programming point of view, unless you are doing something that requires extreme performance, it is best to use whatever is most convenient.

In C, when it makes sense, it is useful for example to use a struct for more complex data rather than just plain arrays. A struct can, for instance, store information such as the last element accessed in an array (or linked list), and although that may not be as efficient as using raw data, it can be extremely efficient when the amount of data grows, because you would already have cached the data, instead of having to sweep the whole list looking for the item you wanted.
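
A minimal sketch of that idea, assuming a singly linked list (all the names here are illustrative, not from the answer): by caching the last node accessed, a lookup can resume from the cached position instead of always re-scanning from the head.

    #include <stddef.h>

    typedef struct Node {
        int          value;
        struct Node *next;
    } Node;

    typedef struct {
        Node  *head;
        Node  *last_node;   /* cached: last node returned */
        size_t last_index;  /* cached: its index */
    } List;

    Node *list_get(List *l, size_t i) {
        Node  *n = l->head;
        size_t k = 0;
        /* Resume from the cache when it points at or before i. */
        if (l->last_node && l->last_index <= i) {
            n = l->last_node;
            k = l->last_index;
        }
        while (n && k < i) { n = n->next; k++; }
        l->last_node  = n;
        l->last_index = i;
        return n;
    }

Repeated or nearby lookups then cost a few steps instead of a full scan from the head each time.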

OK, but you still want to know which one is faster? Well, when in doubt, it tends to be the type that occupies less memory, or one that can be operated on with bitwise operations. Even then, it is not just the type that matters, but the way the data is going to be used.

30.01.2014 / 01:16
2

For integers, the usual recommendation is the processor's native word size: it will align well in memory everywhere. Since this varies from compiler to compiler, the safest bet is to use ptrdiff_t and size_t, the standard typedefs for signed and unsigned integer types respectively, defined in stddef.h. For floating point, there is no reason not to use double: on any modern processor it is efficient, and using float is not justified.
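
A short sketch of those recommendations in use (the array contents are arbitrary): size_t for sizes and indices, ptrdiff_t for pointer differences, and double for the floating-point work.

    #include <stddef.h>
    #include <stdio.h>

    int main(void) {
        double samples[] = { 1.5, 2.5, 4.0 };
        size_t n = sizeof samples / sizeof samples[0];
        double sum = 0.0;

        for (size_t i = 0; i < n; i++)
            sum += samples[i];

        /* ptrdiff_t is the natural type for pointer differences. */
        ptrdiff_t span = &samples[n - 1] - &samples[0];
        printf("sum = %f, span = %td\n", sum, span);
        return 0;
    }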

30.01.2014 / 01:20