How does a computer differentiate '\0' (null character) from "unsigned int = 0"?

Suppose you have an array of chars (ending, of course, with the null character) and, in the immediately following position in memory, you store 0 as an unsigned int. How does the computer differentiate between the two?


Solution 1:

It doesn't.

The string terminator is a byte containing all 0 bits.

The unsigned int is two or four bytes (depending on your environment) each containing all 0 bits.

The two items are stored at different addresses. Your compiled code performs operations suitable for strings on the former location, and operations suitable for unsigned binary numbers on the latter. (Unless you have either a bug in your code, or some dangerously clever code!)

But all of these bytes look the same to the CPU. Data in memory (in most currently-common instruction set architectures) doesn't have any type associated with it. That's an abstraction that exists only in the source code and means something only to the compiler.
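Here is a minimal sketch of that point (variable names are illustrative): the terminator byte of a string and the first byte of an unsigned int holding 0 contain exactly the same bit pattern, and only the source-level types tell the compiler how to treat them.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char text[4] = "abc";      /* text[3] is the '\0' terminator        */
    unsigned int zero = 0;     /* every byte of this int is 0 bits too  */

    /* Compare the terminator byte with the first byte of the int:
       both are just 0x00 -- the CPU sees no difference between them.   */
    printf("terminator byte: %02x\n", (unsigned char)text[3]);
    printf("first int byte:  %02x\n", ((unsigned char *)&zero)[0]);
    printf("equal: %d\n", memcmp(&text[3], &zero, 1) == 0);
    return 0;
}
```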

Edit (added): As an example, it is perfectly possible, even common, to perform arithmetic on the bytes that make up a string. If you have a string of 8-bit ASCII characters, you can convert the letters in the string between upper and lower case by adding or subtracting 32 (decimal). Or, if you are translating to another character encoding, you can use the characters' values as indices into a lookup table whose elements hold the equivalent codes in the other encoding.
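A minimal sketch of the case-conversion idea, assuming ASCII (where 'a' - 'A' == 32):

```c
#include <stdio.h>

int main(void) {
    char word[] = "Hello";

    /* Treat each char as a small integer: subtracting 32 from a
       lowercase ASCII letter yields its uppercase counterpart.  */
    for (char *p = word; *p != '\0'; ++p) {
        if (*p >= 'a' && *p <= 'z')
            *p = *p - 32;          /* plain integer arithmetic on the byte */
    }
    printf("%s\n", word);          /* prints HELLO */
    return 0;
}
```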

To the CPU the chars are really just extra-short integers (eight bits each instead of 16, 32, or 64). To us humans their values happen to be associated with readable characters, but the CPU has no idea of that. Nor does it know anything about the C convention that a null byte ends a string (and, as many have noted in other answers and comments, there are programming environments in which that convention isn't used at all).
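A tiny demonstration of "chars are just small integers": in C, a character constant is an integer value, and '\0' compares equal to 0.

```c
#include <stdio.h>

int main(void) {
    /* A char literal is just a small integer value. */
    printf("'A'  as an integer: %d\n", 'A');      /* 65 in ASCII */
    printf("'\\0' as an integer: %d\n", '\0');    /* 0           */
    printf("'\\0' == 0: %d\n", '\0' == 0);        /* 1 (true)    */
    return 0;
}
```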

To be sure, there are some instructions in x86/x64 that tend to be used a lot with strings - the REP prefix, for example - but you can just as well use them on an array of integers, if they achieve the desired result.

Solution 2:

In short, there is no difference (except that an unsigned int is typically 2 or 4 bytes wide and a char is just 1).

The thing is that all modern libraries either use the null-terminator technique or store the length of the string alongside it. In both cases the program knows it has reached the end of a string when it either reads a null character or has read as many characters as the stored length says it should.
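A minimal sketch contrasting the two conventions (the counted_string struct is a made-up illustration, not a standard type):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical length-prefixed string: the length is stored explicitly,
   so no terminator byte is needed to know where the data ends. */
struct counted_string {
    size_t length;
    const char *data;
};

int main(void) {
    const char *c_str = "hello";                 /* ends with '\0'            */
    struct counted_string s = { 5, "hello" };    /* end known from the length */

    /* strlen walks the bytes until it reads a 0 byte ...                     */
    printf("c string length: %zu\n", strlen(c_str));
    /* ... whereas the counted string never scans; it just reads the field.   */
    printf("counted length:  %zu\n", s.length);
    return 0;
}
```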

Problems start when the null terminator is missing or the stored length is wrong, because the program then reads from memory it isn't supposed to touch.
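One hedged sketch of how to guard against a missing terminator is to bound the scan yourself, for example with the standard memchr (POSIX also offers strnlen for the same purpose); the helper name here is illustrative:

```c
#include <stdio.h>
#include <string.h>

/* Return the string length, but never look past `max` bytes, so a
   missing terminator cannot lead the scan into foreign memory. */
static size_t bounded_length(const char *buf, size_t max) {
    const char *end = memchr(buf, '\0', max);
    return end ? (size_t)(end - buf) : max;
}

int main(void) {
    char buf[4] = { 'a', 'b', 'c', 'd' };               /* note: no terminator */
    printf("%zu\n", bounded_length(buf, sizeof buf));    /* prints 4, safely    */
    return 0;
}
```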