Even though my mother and some programmers think that the computer makes its own decisions, it can only do what humans tell it to do.
Of course a computer can produce wrong results without the human using it making any mistake. But that means a human designed the computer, or at least some component of it, the wrong way. Or else the specification established that the error is possible, and whoever uses that hardware needs to be aware of this and take the appropriate steps so that it does not produce unwanted results. In practice, errors occur mostly in the software itself.
Programmers make many mistakes, often because they do not understand every aspect of what they are doing. This is normal. No one knows everything, even when it is only about software development.
This specific "problem" occurs because of the type difference used in the code. It is not very obvious but if you look in the specification you will see that the sizeof
operator returns a value of type unsigned int
, plus specifically a size_t
and the comparison in if
is being done with signed int
or simply int
. That is, it is comparing a type that has a signal with another that does not. This is why there is an implicit conversion of the signaled type to the unsigned type. In this conversion there is a change in the interpretation of the given.
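If you want to check that claim directly, C11's _Generic selection can report which type sizeof produces. This is just a minimal sketch; the underlying type of size_t is implementation-defined, but it is never a signed int:

#include <stdio.h>
#include <stddef.h>

int main(void) {
    /* _Generic (C11) selects a string based on the static type of
       the controlling expression; sizeof(int) matches size_t, never int. */
    puts(_Generic(sizeof(int),
                  int:     "sizeof yields int",
                  size_t:  "sizeof yields size_t",
                  default: "sizeof yields some other type"));
    return 0;
}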
Knowing that the operator returns an unsigned integer and that an implicit cast takes place, and that a negative value converted to an unsigned type wraps around, so -1 becomes the largest representable value, -2 one less than that, and so on, because the sign bit is no longer treated as a sign but as part of the number, it is easy to understand what is happening. The problem is just a human misinterpretation. The computer did exactly what it was told.
It is easier to understand by printing -1 as unsigned:
#include <stdio.h>

int main(void) {
    printf("size of an int: %zu\n", sizeof(int)); /* %zu is the correct specifier for size_t */
    printf("-1 with a cast: %u\n", (unsigned)-1); /* the bits of -1 reinterpreted as unsigned */
    if (sizeof(int) > -1) { /* -1 is implicitly converted to the unsigned size_t */
        printf("4 is greater than -1");
    } else {
        printf("-1 is greater than 4");
    }
    return 0;
}
So the message should really say "4294967295 is greater than 4" (the exact value depends on the width of the unsigned type on the platform, but it is always a huge number).
See it running on ideone.
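For reference, that large number is exactly UINT_MAX, which is 4294967295 on a platform where unsigned int has 32 bits; by the modular conversion rule, (unsigned)-1 equals UINT_MAX on any conforming implementation. A quick check:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* By the modular conversion rule, (unsigned)-1 is always UINT_MAX. */
    printf("UINT_MAX:     %u\n", UINT_MAX);
    printf("(unsigned)-1: %u\n", (unsigned)-1);
    return 0;
}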
Some people will take advantage of this to criticize implicit conversion. Indeed, it is bad when the language is expected to be used by programmers who do not fully understand every operation of what they are using, or who tend to be inattentive. But I think it helps more than it hinders.
In a way this relates to what I answered in another question. Not knowing the type that an expression produces is the real problem with the code. Making everything explicit would help avoid some problems, but it would make the code longer and redundant, adding unnecessary detail. Ironically, this can be beneficial in languages aimed at programmers who do not care about details.
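As an aside, you do not have to rely on attention alone: GCC and Clang have a -Wsign-compare warning (enabled by -Wextra in C) that flags exactly this kind of mixed comparison. A sketch of what it catches (the exact message wording varies by compiler):

#include <stdio.h>

int main(void) {
    /* Compiling with gcc -Wextra (or -Wsign-compare) warns that the
       signed -1 is implicitly converted for this comparison. */
    if (sizeof(int) > -1) {
        puts("never printed"); /* the condition is always false */
    }
    return 0;
}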
Conclusion
Learn the types of every subexpression you are computing, and make sure you are using the right ones. In this case, if the intent really is to compare the size of the type with a signed integer, then you should ensure that the sizeof(int) result is an int through a cast. Like this:
#include <stdio.h>

int main(void) {
    printf("size of an int: %d\n", (int)sizeof(int)); /* the cast makes %d match the argument */
    if ((int)sizeof(int) > -1) { /* now a plain signed comparison: 4 > -1 */
        printf("4 is greater than -1");
    } else {
        printf("-1 is greater than 4");
    }
    return 0;
}
See it working on ideone and on CodingGround.
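Another option, shown here only as a sketch of a different style rather than the one right answer: instead of casting, stay entirely in the unsigned domain. Since a size can never be negative, a comparison against -1 is usually not what was meant anyway, and keeping everything as size_t avoids the mixed-signedness problem from the start:

#include <stdio.h>

int main(void) {
    size_t size = sizeof(int); /* size_t everywhere, no mixed signedness */
    if (size > 0) {            /* sizes are never negative */
        printf("%zu is greater than 0\n", size);
    }
    return 0;
}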
This question-and-answer pair was inspired by a Reddit discussion that I found interesting to show here.