My question is: how does the compiler know I'm working in binary? Is there a way to write the number so that the compiler knows it really is a binary number?
The compiler does not know; there is nothing in the code to indicate this.
The most important concept to learn here is that this idea of binary, decimal or any other notation is something that serves the human being. For the computer, none of these abstractions exists. To it, basically everything is binary; everything else is a way for us to understand things better.
When we write a number in decimal in the code, we are only using a representation that is intuitive to us. In the source code the number is text, until it is compiled. When you print a number to the screen or anywhere else, you are sending text that represents the number held in memory. The way the number is organized in memory is already binary.
So what you see on the screen is not the number, it is just text. Neither the compiler nor the computer, nothing in the process, knows what this text is about; but show it to a human and they understand it.
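To see that the text and the number are different things, note that the very same value in memory can produce different texts depending only on the formatting requested (a minimal sketch in C, with values I picked for illustration):

#include <stdio.h>

int main(void) {
    int n = 25;
    printf("%d\n", n); /* "25" - decimal text */
    printf("%x\n", n); /* "19" - hexadecimal text */
    printf("%o\n", n); /* "31" - octal text */
    /* the number in memory never changed; only the text produced did */
    return 0;
}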
So the function does not convert decimal to binary; it just seems to. It takes a number that exists by itself (on most architectures it will have 64 bits to keep its value in memory) and that is not in decimal, as it may look. Some calculations are done to pick out the individual bits (in a very inefficient way): the remainder of each cycle of the algorithm will be the number 0 or the number 1, i.e. a reduction has been made. Each of these numbers will probably be stored in 32 bits, although only 1 bit would suffice.
At the end it stores each of these numbers individually in an array. There is nothing binary about it. There is the illusion that binary notation is going on, but it is only a few characters (yes, they are characters), 0 and 1, always printed one after the other.
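The question's function is not shown here, but a minimal sketch of what such a function typically looks like (repeated division by 2, storing each remainder in an array of int) would be:

#include <stdio.h>

int main(void) {
    int num = 25;   /* the number exists by itself, already as bits in memory */
    int digits[32]; /* each 0 or 1 kept in a whole int, though 1 bit would suffice */
    int count = 0;

    /* repeated division by 2; the remainder of each cycle is 0 or 1 */
    do {
        digits[count++] = num % 2;
        num /= 2;
    } while (num > 0);

    /* print the stored digits in reverse order: this is text, not a "binary number" */
    for (int i = count - 1; i >= 0; i--)
        printf("%d", digits[i]);
    printf("\n"); /* prints 11001 */
    return 0;
}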
I have this function that converts decimal to binary, but then how do I sum the bits, or use & (AND), etc.?
If you want to operate on the number you do not have to do anything; operate on it the way you need to, there is nothing to convert. The bitwise operators work on any number, because all numbers are made of bits.
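For example (a small sketch, with values chosen only for illustration), the operators apply directly to ordinary variables:

#include <stdio.h>

int main(void) {
    int a = 25; /* 11001 in binary, but it is bits in memory either way */
    int b = 12; /* 01100 */

    printf("%d\n", a & b); /* 8  -> 01000 */
    printf("%d\n", a | b); /* 29 -> 11101 */
    printf("%d\n", a ^ b); /* 21 -> 10101 */
    return 0;
}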
To use & do we have to do the calculation with 2 decimal numbers? E.g.:
25 & 25
Or can we do:
11001 & 11001
You operate on numbers; if the calculation in your code happens to be written as text in the decimal form we know, that is irrelevant. But to do it in binary you need to use binary notation in the code's text. What that last line does is apply the AND operator to the number eleven thousand and one with itself, which will obviously yield itself, and doing so is usually pointless. If you want binary notation in the literal written in the code, it has to be:
0b11001
Which you know as 25.
Probably in the end you want to do:
int num = 0b101;
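A quick demonstration (note that the 0b prefix is standard only in C++14 and C23; in older C, compilers such as GCC accept it as an extension):

#include <stdio.h>

int main(void) {
    int num = 0b101;                   /* the same number as 5 */
    printf("%d\n", num);               /* prints 5 */
    printf("%d\n", 0b11001 == 25);     /* prints 1: same number, different notation */
    printf("%d\n", 0b11001 & 0b11001); /* prints 25, the AND of the number with itself */
    return 0;
}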
If you want someone to type zeros and ones and have the program understand them, you have to do the reverse operation: validate that each typed character is one of those two (in some cases characters can be converted to numbers automatically) and keep adding the powers of two to arrive at the desired number, which can be simplified with the shift operator (<<). Again we are talking about the difference between the representation for humans and how it is represented internally by the computer.
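A minimal sketch of that reverse operation, assuming the input arrives as a string of '0' and '1' characters:

#include <stdio.h>

int main(void) {
    const char *text = "11001"; /* what the human typed: just characters */
    int num = 0;

    for (int i = 0; text[i] != '\0'; i++) {
        if (text[i] != '0' && text[i] != '1') /* validate the character */
            break;
        num = (num << 1) | (text[i] - '0');   /* shift left and add the new bit */
    }
    printf("%d\n", num); /* prints 25 */
    return 0;
}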
And that is what I understood and was able to answer.