Today's top architectures are little-endian. Is there a clear advantage to this choice? Should I worry about it when I'm programming? In what situations?
According to an answer on SE.SE, little-endian is advantageous because values of different widths can be read from the same address without any extra address arithmetic: the low-order bytes of a 32-bit integer sit first in memory, so an 8-bit or 16-bit read of that same address yields the same small value. There are those who dispute this claim.
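To make that concrete, here is a minimal sketch in C (the value and variable names are my own illustration; memcpy is used instead of pointer casts to avoid undefined behavior from type punning):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t value = 0x00000042;  /* small value stored in 32 bits */
    uint8_t  byte;
    uint16_t half;

    /* Read from the SAME starting address at different widths. */
    memcpy(&byte, &value, sizeof byte);
    memcpy(&half, &value, sizeof half);

    /* On a little-endian machine the low-order byte comes first, so
       every read below shows 0x42 with no address adjustment.
       On a big-endian machine the narrower reads would see 0x00. */
    printf("32-bit: 0x%08x\n", (unsigned)value);
    printf("16-bit: 0x%04x\n", (unsigned)half);
    printf(" 8-bit: 0x%02x\n", (unsigned)byte);
    return 0;
}
```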
Although it may seem unintuitive to a human that the most significant byte comes last, it works out well for the CPU.
Most of the time you do not have to worry about this. You do have to worry when you manipulate data at the byte level, for example when serializing values, exchanging binary data over a network, or reading binary file formats, as in the sketch below.
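Here is a small sketch of the usual way to sidestep the problem when exchanging binary data (the function name put_be32 is my own; the point is that shift operations give the same result regardless of the host's byte order):

```c
#include <stdint.h>
#include <stdio.h>

/* Portable big-endian ("network order") encoding: shifts extract the
   same bytes no matter which endianness the host uses. */
static void put_be32(uint8_t out[4], uint32_t v) {
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)(v);
}

int main(void) {
    uint8_t buf[4];
    put_be32(buf, 0x12345678u);
    /* Prints "12 34 56 78" on any host, little- or big-endian. */
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
    return 0;
}
```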