How much word-size memory does a memory address point to? For a 32/64-bit system [duplicate]
Without going into the technical details (which I would get wrong anyway), the computer hardware itself is designed and built so that each address refers to a byte, or 2 bytes, or 4 bytes, or whatever. The operating system has no choice in the matter; it must be written to conform to the hardware design.
Most, probably all, computers running today are byte-addressable, and a byte is 8 bits. Past designs have been different.
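To see byte addressability from software, here is a minimal C sketch (nothing machine-specific is assumed beyond a standard C compiler): consecutive array elements of type `unsigned char` sit at consecutive addresses, one byte apart, and `CHAR_BIT` reports how many bits each of those bytes holds.

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned char buf[4] = {0};

    /* Each element of buf occupies its own address, one byte apart. */
    printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);   /* 8 on today's machines */
    printf("address of buf[0]: %p\n", (void *)&buf[0]);
    printf("address of buf[1]: %p\n", (void *)&buf[1]);   /* exactly 1 higher */
    printf("difference: %td byte(s)\n", (char *)&buf[1] - (char *)&buf[0]);
    return 0;
}
```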
The number of bits in an address determines how many addresses there are. The number of bits stored at each address is fixed by the design; there is one data pin and one data line for each of those bits. Multiply the number of addresses by the bits stored per address to get the maximum number of bits that can be stored.
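As a small sketch of that multiplication (the pin counts below are just illustrative parameters, not any particular chip): 16 address bits give 2^16 locations, and 8 data bits per location give 64K bytes in total.

```c
#include <stdio.h>

int main(void) {
    /* Illustrative figures: 16 address lines, 8 data bits per location. */
    unsigned address_bits = 16;
    unsigned bits_per_location = 8;

    unsigned long long locations = 1ULL << address_bits;           /* 2^16 = 65536 */
    unsigned long long total_bits = locations * bits_per_location; /* addresses x width */

    printf("addressable locations: %llu\n", locations);
    printf("maximum storage: %llu bits (%llu bytes)\n",
           total_bits, total_bits / 8);
    return 0;
}
```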
To answer your actual question: Is it the RAM itself? Yes. The CPU? Yes. Something else? Also yes: the motherboard and chipset. All of these have to be designed to work together, and they all have to agree on byte size and addressing.
In the old days, processors had pins, and some of those were used to communicate with memory.
You had A pins, to specify an address, and D pins, to read or write data. A typical 8-bit processor of the early 80s would have pins D0 through D7 and A0 through A15, meaning it could address up to 2^16 8-bit (D0 through D7) bytes of memory, or 64K. 16-bit CPUs would have sixteen D lines (the m68k is an example), and 32-bit CPUs (like the 80386) would have thirty-two.
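To make the A-pin picture concrete, here is a small illustrative C sketch (the address 0xC000 is made up) of which logic level each of the sixteen address lines would carry when the CPU puts a 16-bit address on the bus; the byte stored at that location then comes back on D0 through D7.

```c
#include <stdio.h>

int main(void) {
    unsigned address = 0xC000;               /* some address within a 64K space */

    /* State of each address line A0..A15 for this address. */
    for (int pin = 15; pin >= 0; pin--) {
        int level = (address >> pin) & 1;
        printf("A%-2d = %d\n", pin, level);
    }
    /* Whatever byte lives at 0xC000 is then returned on data lines D0..D7. */
    return 0;
}
```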
It's possible to have a 16-bit architecture with an 8-bit external bus, or a 32-bit architecture with a 16-bit external bus - there are multiple things that determine the "bit"-ness of a CPU, and data-bus width is just one of them. The internal architecture may still be different.
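For a concrete sketch of the narrow-bus case (the 8088 is the classic example of a 16-bit CPU with an 8-bit external bus): a 16-bit load takes two bus cycles, one byte per cycle. The `bus_read8` function and its fake memory contents below are made-up stand-ins for the external bus interface, and little-endian byte order is assumed.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for one 8-bit external bus cycle. */
static uint8_t bus_read8(uint32_t address) {
    static const uint8_t fake_memory[4] = { 0x34, 0x12, 0xCD, 0xAB };
    return fake_memory[address % 4];
}

/* A 16-bit CPU with an 8-bit data bus needs two bus cycles per 16-bit load.
 * Little-endian order is assumed here. */
static uint16_t load16_via_8bit_bus(uint32_t address) {
    uint8_t low  = bus_read8(address);
    uint8_t high = bus_read8(address + 1);
    return (uint16_t)(low | ((uint16_t)high << 8));
}

int main(void) {
    printf("0x%04X\n", load16_via_8bit_bus(0));   /* prints 0x1234 */
    return 0;
}
```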
Looking at the pinout of something recent like a Core i7 (though CPUs do not have "pins" anymore; the board has the pins and the CPU now has "lands"), things have changed. I'm not sure exactly what signals like DDR0_DQ[63] mean - the relationship between CPU and memory is complex today due to caching, NUMA, and multi-core CPUs.
So it is a combination of the CPU architecture and the physical memory interface it uses that determines this.
"what is it in a computer to determine how much memory a memory address holds?"
Good entry-level question.
I would say the simple answer is the tradeoff between cost and complexity.
For example, there have been one-bit computers, which addressed only one bit at a time by shifting bits in and then assembling them together (granted, that was a long time ago); that kept the wiring to a minimum. There were also computers with 4-bit-wide data words that don't go that far back. The computer I am typing on fetches a word that is 64 bits wide at a time. Note that 32-bit machines can handle such values too, just by fetching two 32-bit words and assembling them into one. I also remember the old IBM mainframes, which had about ten different ways they could address words, but most of that was done in firmware for the program's convenience; the machine was actually addressing only one size of physical memory word, to keep the hardware simple.
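To make that "two words assembled into one" remark concrete, here is a minimal C sketch of building a 64-bit value out of two 32-bit halves, the way a 32-bit machine might handle it in software (the particular values and their ordering are just illustrative):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two 32-bit halves, e.g. fetched in two separate 32-bit accesses. */
    uint32_t low_half  = 0x89ABCDEFu;
    uint32_t high_half = 0x01234567u;

    /* Assemble them into one 64-bit value. */
    uint64_t value = ((uint64_t)high_half << 32) | low_half;

    printf("0x%016llX\n", (unsigned long long)value);  /* 0x0123456789ABCDEF */
    return 0;
}
```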
So the answer is that there are lots of ways to do things and lots of ways to design things, and just about every combination in between. Cost and speed are where the decisions are made.