Maniero's answer already gives a great overview; I just want to complement the following part:
> The most common scenario is that, when there is a very large volume of data that needs to be handled intensively, there is some extra gain from being able to keep more elements in the cache by using a `short` rather than an `int`.
In general, the influence of good cache usage on a system's performance is not to be underestimated. Often a lot of attention is paid to local performance (the cost of a cast) while the overall performance is forgotten: on a cache miss, several cycles are wasted, more than the overhead of an extra instruction or two, depending on the case.
None of this contradicts the answer mentioned above: there is only an advantage if the volume of data is large (although we might disagree about what counts as "very" large). Also, it makes quite a difference whether you have, for example, an array of objects:
```java
class MeuObjeto {
    Foo foo;
    Bar bar;
    Baz baz;
    short s;
}

MeuObjeto[] array = new MeuObjeto[10000];
```
Or an array of `short`s:
```java
short[] array = new short[10000];
```
In the first case, the space savings are minimal, even if the objects occupy contiguous memory positions (depending on the case, they may not), and that is assuming memory alignment does not swallow the gain entirely. Using `short` instead of `int` will not have a significant impact on cache misses, so the casting overhead has no positive counterpart.
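To make this concrete, here is a rough back-of-the-envelope sketch of my own (not part of the original answer), assuming a 64-bit HotSpot JVM with compressed oops: a 12-byte object header, 4-byte references and 8-byte object alignment. These numbers are assumptions; a tool such as JOL (`org.openjdk.jol`) can measure the real layout if you want to verify them.

```java
// Rough estimate of the per-object size of MeuObjeto.
// The layout constants below are assumptions for a typical 64-bit HotSpot JVM
// with compressed oops; real layouts can be inspected with JOL (org.openjdk.jol).
public class LayoutEstimate {
    public static void main(String[] args) {
        int header = 12;       // assumed object header size
        int reference = 4;     // assumed size of each reference field (foo, bar, baz)

        int withShort = align(header + 3 * reference + 2);  // 26 -> 32 bytes
        int withInt   = align(header + 3 * reference + 4);  // 28 -> 32 bytes

        System.out.println("per-object size with short field: " + withShort + " bytes");
        System.out.println("per-object size with int field:   " + withInt + " bytes");
        // Both print 32: the 8-byte alignment swallows the 2-byte difference,
        // so inside an object the short typically saves nothing at all.
    }

    static int align(int size) {
        return ((size + 7) / 8) * 8;  // pad to the next multiple of 8
    }
}
```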
In the second case, the story is different: accessing the elements sequentially, you will go twice as long between cache misses compared to an array of `int`. Even for random accesses, the chance of the data you want already being in the cache is twice as high. So even though each individual operation is a little less efficient, the cycles you "save" by avoiding misses can compensate, making the whole operation faster.
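To illustrate this second case, here is a deliberately naive sketch of my own (the class name, array size and timing approach are assumptions, and a serious measurement would use JMH): summing a `short[]` versus an `int[]` sequentially. With 64-byte cache lines, each line holds 32 `short`s but only 16 `int`s, so the `short[]` scan touches roughly half as many lines.

```java
import java.util.concurrent.ThreadLocalRandom;

// Naive sequential-scan comparison between short[] and int[].
// Not a rigorous benchmark (no warm-up, single run); use JMH for real measurements.
public class ScanSketch {
    static final int N = 10_000_000;

    public static void main(String[] args) {
        short[] shorts = new short[N];
        int[] ints = new int[N];
        for (int i = 0; i < N; i++) {
            short v = (short) ThreadLocalRandom.current().nextInt(Short.MAX_VALUE);
            shorts[i] = v;   // 2 bytes per element: 32 elements per 64-byte cache line
            ints[i] = v;     // 4 bytes per element: 16 elements per 64-byte cache line
        }

        long t0 = System.nanoTime();
        long sumShorts = 0;
        for (int i = 0; i < N; i++) sumShorts += shorts[i];  // each read implies a widening conversion
        long t1 = System.nanoTime();

        long sumInts = 0;
        for (int i = 0; i < N; i++) sumInts += ints[i];
        long t2 = System.nanoTime();

        System.out.printf("short[]: %d ms (sum=%d)%n", (t1 - t0) / 1_000_000, sumShorts);
        System.out.printf("int[]:   %d ms (sum=%d)%n", (t2 - t1) / 1_000_000, sumInts);
    }
}
```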
(In any case, the advice to avoid premature optimization / micro-optimization still stands.)