Consider the following class:
    struct Type {
        int a, b, c;
        int& ref;
        Type(int m) : a(m), b(m), c(m), ref(a) { }
        Type(const Type& ot) : a(ot.a), b(ot.b), c(ot.c), ref(a) { }
    } obj;
Here sizeof(Type) is 24 (on a typical 64-bit platform: 12 bytes for the three ints, 4 bytes of padding, and 8 bytes for the reference). However, obj.ref can always, in any situation or context, be replaced with obj.a, so the reference could in principle be resolved at compile time, saving the 8 bytes it occupies in the object (plus the 4 bytes of padding). Ideally, sizeof(Type) could be just 12 (only the three ints).
Can a compiler perform this optimization while strictly following the rules of the standard? Why, or why not? Is there any situation in which this optimization would be incorrect?
If so, demonstrate it with an example that behaves differently with and without the optimization.