Processor, OS: 32-bit, 64-bit

I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms applications in C#.

I am confused about 32-bit and 64-bit. I have heard about 32-bit operating systems and 32-bit processors, and that they determine the maximum memory a program can use and affect the speed of a program. A lot more questions keep coming to mind.

I tried to go through some Computer Organization and Architecture books, but either I am too dumb to understand what is written there or the writers assume the reader has some CS background.

Can someone explain these things to me in plain, simple English, or point me to something that does?

EDIT: I have read things like "In 32-bit mode, they can access up to 4 GB of memory; in 64-bit mode, they can access much, much more." I want to know the WHY behind all such statements.

BOUNTY: The answers below are really good, especially the one by Martin. But I am looking for a thorough explanation, in plain, simple English.


It really all comes down to wires.

In digital circuits, only 0's and 1's (usually low voltage and high voltage) can be transmitted from one element (CPU) to another element (memory chip). If I have only 1 wire, I can only send either a 1 or a 0 over the wire per clock cycle. This means I can only address 2 bytes (assuming byte addressing, and that entire addresses are transmitted in just 1 cycle for speed!).

If I have 2 wires, I can address 4 bytes. Because I can send: (0, 0), (0, 1), (1, 0), or (1, 1) over the two wires. So basically it's 2 to the power of # of wires.

So if I have 32 wires, I can address 4 GB, and if I have 64 wires, I can address a lot more.
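Since the questioner works in C#, here is a tiny, hypothetical console sketch (names are mine) that just computes 2^wires for a few wire counts, under the same byte-addressing assumption as above. BigInteger is used because 2^64 does not fit in a ulong:

    using System;
    using System.Numerics;

    class AddressWires
    {
        static void Main()
        {
            // Each extra address wire doubles the number of distinct
            // addresses that can be put on the bus: 2^wires in total.
            foreach (int wires in new[] { 1, 2, 32, 64 })
            {
                BigInteger addresses = BigInteger.Pow(2, wires);
                Console.WriteLine($"{wires,2} wires -> {addresses:N0} addressable bytes");
            }
        }
    }

For 32 wires it prints 4,294,967,296 addressable bytes, which is the 4 GB figure above.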

There are other tricks that engineers can do to address a larger address space than the wires allow for. E.g. splitting up the address into two parts and sending one half in the first cycle and the second half on the next cycle. But that means that your memory interface will be half as fast.


Like other comments have mentioned, 2^32 (2 to the power of 32) = 4,294,967,296, which is 4 GB, and 2^64 is 18,446,744,073,709,551,616. To dig in further (and you probably read this in Hennessy & Patterson), a processor contains registers that it uses as "scratch space" for storing the results of its computations. A CPU only knows how to do simple arithmetic and how to move data around. Naturally, these registers are the same width in bits as the "#-bits" of the architecture, so a 32-bit CPU's registers are 32 bits wide, and a 64-bit CPU's registers are 64 bits wide.

There are exceptions to this when it comes to floating point (to handle double precision) or SIMD instructions (single-instruction, multiple-data commands). The CPU loads and stores data to and from main memory (the RAM). Since the CPU also uses these registers to compute memory addresses (physical and virtual), the amount of memory it can address is the same as the width of its registers. There are some CPUs that handle address computation with special extended registers, but I would call those "afterthoughts", added after engineers realized they needed them.
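Since the question mentions C#: you can observe this register/pointer width from managed code with a few standard .NET properties. A minimal sketch; what it prints depends on whether the process runs as 32-bit or 64-bit (i.e. on the project's platform target):

    using System;

    class PointerWidth
    {
        static void Main()
        {
            // IntPtr is .NET's native pointer-sized integer: 4 bytes in a
            // 32-bit process, 8 bytes in a 64-bit process, mirroring the
            // address/register width of the mode the CPU is running in.
            Console.WriteLine($"Pointer size: {IntPtr.Size * 8} bits");
            Console.WriteLine($"64-bit process? {Environment.Is64BitProcess}");
            Console.WriteLine($"64-bit OS?      {Environment.Is64BitOperatingSystem}");
        }
    }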

At the moment 64 bits is quite a lot for addressing real physical memory. Most 64-bit CPUs will omit quite a few wires when it comes to wiring up the CPU to the memory, for practical reasons: it doesn't make sense to use up precious motherboard real estate to run wires that will always carry 0s. Not to mention that holding the maximum amount of RAM at today's DIMM density would require about 4 billion DIMM slots :)

Other than the increased amount of addressable memory, 64-bit processors offer faster computation on integers larger than 2^32. Previously, programmers (or compilers, which are also written by programmers ;) would have to simulate a 64-bit register by taking up two 32-bit registers and handling any overflow situations themselves. On 64-bit CPUs this is handled by the CPU itself.
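To make that concrete, here is a hypothetical C# sketch that adds two 64-bit numbers the way a 32-bit-only machine has to (two 32-bit halves plus a carry) and checks the result against the native 64-bit addition:

    using System;

    class WideAdd
    {
        // Simulate 64-bit addition using only 32-bit pieces: add the low
        // halves, then propagate the carry into the high halves. A 64-bit
        // CPU does the same work in a single add instruction.
        static ulong Add64ViaHalves(ulong a, ulong b)
        {
            uint aLo = (uint)a, aHi = (uint)(a >> 32);
            uint bLo = (uint)b, bHi = (uint)(b >> 32);

            ulong lowSum = (ulong)aLo + bLo;    // may overflow into bit 32
            uint carry = (uint)(lowSum >> 32);  // 0 or 1
            uint hiSum = aHi + bHi + carry;     // wraps like a 32-bit register

            return ((ulong)hiSum << 32) | (uint)lowSum;
        }

        static void Main()
        {
            ulong a = 0x00000001_FFFFFFFF;
            ulong b = 0x00000002_00000001;
            Console.WriteLine(Add64ViaHalves(a, b) == a + b); // True
        }
    }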

The drawback is that a 64-bit CPU (with everything else equal) would consume more power than a 32-bit CPU, just due to (roughly) twice the amount of circuitry needed. In reality you will never get an equal comparison, because newer CPUs are manufactured in newer silicon processes with less power leakage, more circuitry crammed into the same die size, and so on. 64-bit code also tends to consume more memory, since pointers (and often instructions) are twice as wide. What was once considered an "ugly" aspect of x86, its variable instruction length, is actually an advantage now compared to architectures that use a fixed instruction size.


Let's try to answer this question by looking at people versus computers; hopefully this will shed some light on things for you:

Things to Keep In Mind

  • As amazing as they are, computers are very, very dumb.

Memory

  • People have memory (with the exception, arguably, of husbands and politicians.) People store information in their memory for later use.
    • With a question (e.g., "What is your phone number?"), a person is able to retrieve information to give an answer (e.g., "867-5309")
  • All modern computers have memory, and store information in their memory for later use.
    • Because computers are dumb, they can only be asked a very specific question to retrieve information: "What is the value at X in your memory?"
      • In the question above, X is known as an address, which can also be called a pointer.

So here we have a fundamental difference between people and computers: To recall information from memory, computers need to be given an address, whereas people do not. (Well in a sense one could say "your phone number" is an address because it gives different information than "your birthday", but that's another conversation.)
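As a toy model of that "value at address X" question, here is a small, hypothetical C# sketch in which memory is nothing but a row of bytes and an address is just an index into that row:

    using System;

    class MemoryByAddress
    {
        static void Main()
        {
            // A deliberately simplified model of memory: a row of bytes.
            // The only question it answers is "what is the value at X?"
            byte[] memory = new byte[16];

            memory[5] = 42;          // store a value at address 5
            int address = 5;         // the address, a.k.a. the pointer
            Console.WriteLine($"Value at address {address}: {memory[address]}");
        }
    }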

Numbers

  • People use the decimal number system. That means for every digit in a decimal number, the digit can be one of 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9. People have ten options per digit.
  • All modern computers use the binary number system. That means for every digit in a binary number, the digit can only be either 1 or 0. Computers have two options per digit.
    • In computer jargon, a single binary digit is called a bit, short for binary digit.

Addresses

  • Every address in a computer is a binary number.
  • Every address in a computer has a maximum number of digits (or bits) that it can have. This is mostly because the computer's hardware is inflexible (also known as fixed) and needs to know ahead of time that an address will only be so long.
  • Terms like "32-bit" and "64-bit" are talking about the longest address for which a computer can store and retrieve information. In English "32-bit" in this sense means "This computer expects instructions about its memory to have addresses no more than 32 binary digits long."
    • As you can imagine, the more bits a computer can handle the longer the address it can look up and therefore the more memory it can manage at one time.

32-bit v. 64-bit Addressing

  • For an inflexible (fixed) number of digits (e.g. 2 decimal digits), the set of numbers you can represent is called the range (e.g. 00 to 99, or 100 unique numbers). Adding an additional decimal digit multiplies the range by 10 (e.g. 3 decimal digits -> 000 to 999, or 1000 unique numbers).
  • This applies to computers, too, but because they are binary machines instead of decimal machines, adding an additional binary digit (bit) only increases the range by a factor of 2.

    Addressing Ranges:
    • 1-bit addressing lets you talk about 2 unique addresses (0 and 1).
    • 2-bit addressing lets you talk about 4 unique addresses (00, 01, 10, and 11).
    • 3-bit addressing lets you talk about 8 unique addresses (000, 001, 010, 011, 100, 101, 110, and 111), as the short sketch after this list prints out.
    • and after a long while... 32-bit addressing lets you talk about 4,294,967,296 unique addresses.
    • and after an even longer while... 64-bit addressing lets you talk about 18,446,744,073,709,551,616 unique addresses. That's a LOT of memory!
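Continuing in C# (the language the question mentions), here is a small, hypothetical sketch that prints every 3-bit address as a fixed-width binary number, matching the 3-bit item above; bumping the bits constant doubles the count each time:

    using System;

    class BinaryAddresses
    {
        static void Main()
        {
            // With a fixed width of 3 bits, every address is a 3-digit
            // binary number, so there are exactly 2^3 = 8 of them.
            const int bits = 3;
            for (int address = 0; address < (1 << bits); address++)
            {
                Console.WriteLine(Convert.ToString(address, 2).PadLeft(bits, '0'));
            }
        }
    }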

Implications

What all this means is that a 64-bit computer can store and retrieve much more information than a 32-bit computer. For most users this really doesn't mean a whole lot, because things like browsing the web, checking email, and playing Solitaire all work comfortably within the confines of 32-bit addressing. Where the 64-bit benefit really shines is in areas where the computer has to churn through a lot of data. Digital signal processing, gigapixel photography, and advanced 3D gaming are all areas where massive amounts of data processing would see a big boost in a 64-bit environment.