Why is the first BIOS instruction located at 0xFFFFFFF0 ("top" of RAM)?
Solution 1:
0xFFFFFFF0 is where an x86-compatible CPU starts executing instructions when it's powered on. That's a hardwired, unchangeable (without extra hardware) aspect of the CPU, and different CPU families behave differently.
Why is the first BIOS instruction located at the "top" of 4 GB RAM?
It's located at the "top" of the 4 GB address space - and at power-on the BIOS or UEFI ROM is set up to respond to reads of those addresses.
My theory on why this is:
Just about everything in programming works better with contiguous addresses. The CPU designer does not know what a system builder will want to do with the CPU, so it's a bad idea for the CPU to require that addresses smack in the middle of the space be reserved for various purposes. It's better to keep them "out of the way" at the top or bottom of the address space. Of course, keep in mind that this decision was made when the 8086 was new, which did not have an MMU.
In the 8086, interrupt vectors lived at memory location 0 and up. Interrupt vectors need to be at known addresses, and it was desirable for them to be in RAM for flexibility - yet the CPU designer could not know how much RAM a given system would have. So starting from 0 and working up made sense for the vectors (no system in 1978, when the 8086 was introduced, would have 4 GB of RAM, so expecting RAM to exist at 0xFFFFFFF0 was not a good idea), which meant ROM had to sit at the upper boundary of the address space.
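To make the layout concrete, here's a small C sketch (mine, not part of the original answer): in real mode each interrupt vector is four bytes - a 16-bit offset followed by a 16-bit segment - so vector n lives at linear address n * 4, starting at address 0.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the real-mode interrupt vector table starts at
 * linear address 0.  Each entry is a 16-bit offset followed by a
 * 16-bit segment, so vector n sits at address n * 4. */
struct ivt_entry {
    uint16_t offset;
    uint16_t segment;
};

static uint32_t ivt_entry_addr(uint8_t vector) {
    return (uint32_t)vector * sizeof(struct ivt_entry);  /* n * 4 */
}

int main(void) {
    /* e.g. the BIOS video services vector, INT 10h */
    printf("INT 10h vector lives at 0x%05X\n",
           (unsigned)ivt_entry_addr(0x10));   /* prints 0x00040 */
    return 0;
}
```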
Of course, starting with at least the 80286, interrupt vectors could be moved to a starting location other than 0, but modern 64-bit x86 CPUs still boot up in an 8086-compatible real mode, so everything still works the old way for compatibility (as ridiculous as it sounds in 2015 to still need your x86 CPU to be able to run DOS).
So since interrupt vectors start from 0 and work upward, ROM would have to start from the top and work downward.
What would happen if my computer had only 1 GB of RAM?
A 32-bit CPU has 4,294,967,296 addresses, numbered 0 (0x00000000) through 4,294,967,295 (0xFFFFFFFF). ROM can live at some addresses and RAM can live at others. With the CPU's MMU this can even be switched on the fly. RAM does not have to live at all addresses.
With only 1 GB of RAM, some addresses will have nothing responding when they are read or written. Accessing those addresses can return invalid data or even lock up the system.
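As a rough illustration of the idea that the address space and the installed RAM are separate things, here is a toy C "address decoder" - the split points (1 GiB of RAM, BIOS ROM in the top 1 MiB) are made-up example values, not what any real chipset does:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy decoder: who (if anyone) responds to a 32-bit physical address.
 * The region boundaries here are invented for the example. */
enum region { RAM, BIOS_ROM, UNMAPPED };

static enum region decode(uint32_t addr) {
    if (addr < 0x40000000u)  return RAM;       /* first 1 GiB answered by RAM   */
    if (addr >= 0xFFF00000u) return BIOS_ROM;  /* top 1 MiB answered by the ROM */
    return UNMAPPED;                           /* nothing responds here         */
}

int main(void) {
    printf("0x00000000 -> %d\n", decode(0x00000000u));  /* RAM (0)      */
    printf("0xFFFFFFF0 -> %d\n", decode(0xFFFFFFF0u));  /* BIOS_ROM (1) */
    printf("0x80000000 -> %d\n", decode(0x80000000u));  /* UNMAPPED (2) */
    return 0;
}
```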
What about systems with more than 4 GB of RAM (e.g. 8 GB, 16 GB, etc.)?
Keeping it somewhat simple: 64-bit CPUs have many more addresses - which is one of the things that makes them 64-bit, e.g. 0x0000000000000000 through 0xFFFFFFFFFFFFFFFF - so the extra RAM "fits", assuming the CPU is in long mode. Until it switches to long mode, the RAM is there, just not addressable.
Why is the stack initialized with some value (in this case, a value located at 0xFFFFFFF0)?
I can't immediately find anything on what value x86 assigns to the stack pointer at power-on, but it would eventually have to be reassigned by an initialization routine anyway, once that routine finds out how much RAM is in the system. (@Eric Towers in the comments below reports that it is set to zero on power-up.)
Solution 2:
It isn't located at the top of RAM; it is located in ROM whose address is at the top of the memory address space, along with any memory on expansion cards, like Ethernet controllers. It is there so that it won't conflict with RAM, at least until you have 4 GB installed. Systems that have 4 GB or more of RAM can do two things to resolve the conflict. Cheap motherboards simply ignore the parts of RAM that conflict with where the ROM is located. Decent ones remap that RAM to appear to have an address above the 4 GB mark.
I'm not sure what you are asking about the stack. It certainly isn't initialized to be in ROM. When the CPU resets, it is initially in "real mode", where it acts just like the original 8086 and uses 16-bit segmented addressing, allowing it to access only 1 MB of memory. The BIOS code is located at the top of that 1 MB. The BIOS picks somewhere in RAM to set up the stack, then loads and executes the first sector of the first bootable drive. It is up to the OS to switch into 32- or 64-bit mode once it takes over, and to set up its own stacks (one per task/thread).
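As an aside, the "first sector" check is simple enough to sketch in C (my sketch, not from the answer): the sector is 512 bytes, and the 0x55, 0xAA signature in its last two bytes is what marks it as bootable. The buffer below is just a stand-in; no real disk I/O is shown.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A boot sector is considered bootable if its last two bytes are 0x55, 0xAA. */
static int looks_bootable(const uint8_t sector[512]) {
    return sector[510] == 0x55 && sector[511] == 0xAA;
}

int main(void) {
    uint8_t sector[512];
    memset(sector, 0, sizeof sector);   /* pretend we read this from the drive */
    sector[510] = 0x55;                 /* boot signature, low byte  */
    sector[511] = 0xAA;                 /* boot signature, high byte */
    printf("bootable: %d\n", looks_bootable(sector));  /* prints 1 */
    return 0;
}
```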
Solution 3:
First, this has nothing to do with RAM, really. We're talking about address space here - even if you only have 16 MiB of memory, you still have the full 32 bits of address space on a 32-bit CPU.
This already answers your first question, really - at the time this was designed, real world PCs had nowhere near the full 4 GiB of memory; they were more in the range of 1-16 MiB of memory. The address space was, for all intents and purposes, free.
Now, why 0xFFFFFFF0 exactly? The CPU doesn't know how big the BIOS is. Some BIOSes take only a few kilobytes, while others take full megabytes of memory - and I'm not even getting into the various optional ROMs. The CPU must be hardwired to start at some address - there's no one there to configure it. But this is only a mapping of the address space: the address is mapped directly onto the BIOS ROM chip (yes, this means you don't get access to the full 4 GiB of RAM at this point if you do have that much - but that isn't anything special; many devices require their own range in the address space).
On a 32-bit CPU, this address gives you all of 16 bytes to do the very basic initialization - which is enough to set up your segments and, if needed, the address mode (remember, x86 boots in 16-bit real mode - the address space isn't flat), and to jump to the real boot "procedure". At this point you don't use RAM at all - it's all just mapped ROM. In fact, RAM isn't even ready to be used at this point - that's one of the jobs of the BIOS POST!
Now, you might be thinking - how does 16-bit real mode access the address 0xFFFFFFF0? Sure, there are segments, so you have a 20-bit address space, but that still isn't good enough. Well, there's a trick to it: the upper 12 bits of the address are held set until you execute your first far jump, giving you access to the high end of the address space (while keeping anything lower than 0xFFF00000 out of reach - until that far jump happens).
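To make the segment arithmetic concrete, here's a small C sketch (mine, not part of the original answer) of the two calculations involved: the normal real-mode segment * 16 + offset rule, and the special case right after reset, where the hidden base of CS is 0xFFFF0000 - which is what lands execution at 0xFFFFFFF0 even though CS:IP reads as F000:FFF0.

```c
#include <stdint.h>
#include <stdio.h>

/* Normal real-mode address translation: physical = segment * 16 + offset. */
static uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    uint16_t reset_ip    = 0xFFF0;       /* IP right after reset             */
    uint32_t hidden_base = 0xFFFF0000u;  /* CS base right after reset, kept  */
                                         /* until the first far jump          */

    /* What F000:FFF0 would mean under the normal rule (just under 1 MB): */
    printf("normal F000:FFF0 -> 0x%05X\n",
           (unsigned)real_mode_addr(0xF000, 0xFFF0));        /* 0xFFFF0    */

    /* What it actually means right after reset: */
    printf("after reset      -> 0x%08X\n",
           (unsigned)(hidden_base + reset_ip));               /* 0xFFFFFFF0 */
    return 0;
}
```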
All of this is mostly hidden from programmers (not to mention users) on modern operating systems. You usually don't have access to anything this low level - some things are simply out of reach (you can't switch CPU modes willy-nilly), and some are handled exclusively by the OS kernel.
So a nicer view comes from old-school coding on MS-DOS. Another typical example of device memory being mapped directly into the address space is direct access to video memory. For example, if you wanted to write text to the display fast, you wrote directly to address B800:0000 (plus an offset - in 80x25 text mode, that meant (y * 80 + x) * 2 if my memory serves me right: two bytes per character, line by line). If you wanted to draw pixel by pixel, you used a graphics mode and the start address A000:0000 (typically 320x200 at 8 bits per pixel). Doing anything high-performance usually meant diving into device manuals to figure out how to access the hardware directly.
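If you want to see the arithmetic, here's a tiny C sketch (not from the original answer) of the 80x25 text-mode layout: two bytes per cell - character, then attribute - with rows stored one after another, starting at linear address 0xB8000 (segment B800). It only computes the offset; actually poking that address requires a real-mode program or an emulator, not a modern user-space process.

```c
#include <stdio.h>

/* Offset of character cell (x, y) in 80x25 text mode:
 * two bytes per cell, 80 cells per row, rows stored consecutively. */
static unsigned text_cell_offset(unsigned x, unsigned y) {
    return (y * 80 + x) * 2;
}

int main(void) {
    unsigned off = text_cell_offset(10, 5);   /* column 10, row 5 */
    printf("offset %u, linear address 0x%05X\n", off, 0xB8000 + off);
    return 0;
}
```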
This survives to this day - it's just hidden. On Windows, you can see the memory addresses mapped to devices in the Device Manager - just open the properties of something like your network card and go to the Resources tab: all the Memory Range items are mappings from device memory into your main address space. And on 32-bit systems, you'll see that most of those devices are mapped above the 2 GiB (later 3 GiB) mark - again, to minimize conflicts with user-usable memory, though this is not really an issue with virtual memory (applications don't get anywhere near the real, hardware address space - they get their own virtualized chunk of memory, which might be mapped to RAM, ROM, devices or the page file, for example).
As for the stack, it should help to understand that by default the stack grows downward from the top. So if the stack pointer starts at 0xFFFFFFF0 and you do a 32-bit push, the new stack pointer will be at 0xFFFFFFEC
- in other words, you're not trying to write to the BIOS init address :) Which of course means that the BIOS init routines can use the stack safely before remapping it somewhere more useful. In old-school programming, before paging became the de facto default, the stack usually started at the end of RAM, and a "stack overflow" happened when you started overwriting your own application memory. Memory protection changed a lot of this, but in general x86 maintains backwards compatibility as much as possible - note how even the most modern x86-64 CPU can still boot MS-DOS 5, or how Windows can still run many DOS applications that have no idea about paging.
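A toy C model of that push (my sketch, just to show the direction the stack pointer moves; the starting value is the example from above, not a real power-on value):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t esp = 0xFFFFFFF0u;   /* hypothetical initial stack pointer       */
    esp -= 4;                     /* a push of a 32-bit value moves ESP down, */
                                  /* then the value is stored at the new ESP  */
    printf("ESP after push: 0x%08X\n", esp);   /* prints 0xFFFFFFEC */
    return 0;
}
```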