Interpretation of number of heads returned by fdisk
Solution 1:
As has been pointed out in a comment, the cylinders/heads/sectors numbers reported for physical drive geometry have no basis in reality these days. You can safely ignore those numbers.
In order to understand what's going on, we have to go back all the way to the original IBM model 5150 PC of 1981.
The 5150's native configuration was some combination of cassette and floppy disk storage. It didn't even originally support a hard disk (one could be retrofitted, but that required both a separate controller card and a more powerful power supply than the one it came with from the factory; plus, they were a freaking expensive luxury). With floppy disks, it makes some sense to address the media in terms of head, cylinder and sector; these quantities translate quite conveniently to a given location on the physical media, and are relatively easy to work with in software as well as in controller firmware and drive hardware. When you are almost literally paying per byte of any sort of storage, that simplicity is a really nice thing to have.
Because the 5150 had a very limited amount of memory, both RAM and storage (the base model had 16 KiB of RAM; the text of this answer would take up around half of its RAM, let alone any software to work with the text), it was important to waste as little as possible. So the engineers involved came up with a set of limits that probably seemed huge at the time: the cylinder was encoded using 10 bits, the head using 8 bits, and the sector number using 6 bits, all of which could be packed neatly into three bytes with some bitshifting magic. This allowed addressing 1024 cylinders, 256 heads and 63 sectors per track. (Cylinder and head numbering starts at 0, but sector numbering starts at 1, so the 6-bit sector field only yields 63 usable sectors.)

With 512 byte sectors, this allows addressing a grand total of almost 8 GiB, an enormous amount of data in 1981. Even a more practical ten-head drive gave you an upper bound of just over 300 MiB, at a time when the "high end" storage medium at the 5150's introduction was 160 kilobyte floppy disks (using eight sectors per track and 40 tracks, or cylinders). If we extrapolate this by replacing those floppy disks with single-layer DVDs, which store a little over 4 GiB, then rather than those 8 GiB drives we'd be looking at single drives able to store on the order of 100 TB. See any of those on the market horizon any time soon? I didn't think so.
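To make that bit-packing concrete, here is a small Python sketch of how a CHS address fits into three bytes, following the register layout the BIOS INT 13h disk services used (CH holds the low eight cylinder bits, CL combines the top two cylinder bits with the 6-bit sector number, DH holds the head). The function names are mine, for illustration only:

```python
def pack_chs(cylinder, head, sector):
    """Pack a CHS address into the three bytes of the BIOS INT 13h
    interface: CH, CL, DH (illustrative helper, not a standard API)."""
    assert 0 <= cylinder < 1024 and 0 <= head < 256 and 1 <= sector < 64
    ch = cylinder & 0xFF                 # low 8 cylinder bits
    cl = ((cylinder >> 8) << 6) | sector # top 2 cylinder bits + sector
    dh = head                            # full 8-bit head number
    return ch, cl, dh

def unpack_chs(ch, cl, dh):
    """Reverse the packing above."""
    cylinder = ((cl >> 6) << 8) | ch
    sector = cl & 0x3F
    return cylinder, dh, sector
```

Multiplying out the three fields with 512-byte sectors (1024 × 256 × 63 × 512) gives the "almost 8 GiB" figure mentioned above.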
A complicating factor was that the original IDE standard used a different CHS encoding. It used 28 bits in total, encoding the cylinder as 16 bits, the head as 4 bits and the sector as 8 bits. Since everybody loves compatibility, taking the smaller of each pair of limits leaves us with 10 cylinder bits, 4 head bits and 6 sector bits. Since sector 0 is not used, this allows addressing 1,032,192 sectors, which at 512 bytes each works out to 504 MiB. There's the first limitation hard drives encountered: the 504 MiB barrier. By using the full IDE CHS scheme with 512 byte sectors and not caring about IBM CHS compatibility, you could address 127.5 GiB.
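The arithmetic behind those two limits is easy to check. `chs_capacity` below is just a throwaway helper of mine, not any standard API:

```python
def chs_capacity(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    """Bytes addressable by a CHS geometry (sector numbers start at 1)."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

bios = (1024, 256, 63)   # 10 cylinder bits, 8 head bits, 6 sector bits
ide  = (65536, 16, 255)  # 16 cylinder bits, 4 head bits, 8 sector bits

# The intersection: each field limited by the stricter of the two interfaces.
common = tuple(min(b, i) for b, i in zip(bios, ide))  # (1024, 16, 63)

print(chs_capacity(*common))            # 528482304 bytes = exactly 504 MiB
print(chs_capacity(*ide) / 2**30)       # 127.5 -> the full-IDE CHS limit in GiB
```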
Also, 256-head drives never really caught on in the market. (Even today, most hard disk drives use fewer than half a dozen platters, with one head for each side of each platter.) So someone thought of the trick of borrowing unused bits and repurposing them. Hence, a drive could present itself as having four times as many heads as it really does, but in reality use those two extra bits to address cylinders instead. The firmware can do the translation easily enough, since it isn't constrained in the same way as the original 5150 CHS addressing format, let alone the lowest common denominator of IDE/ATA and IBM CHS addressing. However, in order to remain compatible, the drive must still present a CHS geometry. And so LBA-assisted translated geometry was born. This allowed the full theoretical 8 GiB addressing range of the CHS scheme to be used, but gave us monstrosities like drives reporting hundreds of heads, just so that there was some way to address all of the drive's sectors through the CHS geometry.
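A rough Python sketch of that cylinder-for-head bit trade. Real BIOSes varied in the details (many jumped straight to fixed head counts like 32, 64, 128 or 255), so treat this strictly as an illustration of the principle, not as any particular vendor's algorithm:

```python
def translate_geometry(cylinders, heads, sectors, max_cylinders=1024):
    """Repeatedly halve the cylinder count and double the head count
    until the cylinder number fits in the BIOS's 10-bit field.
    Total capacity (C * H * S) is preserved at every step.
    (Real firmware also capped heads at 255 rather than 256.)"""
    while cylinders > max_cylinders and heads < 128:
        cylinders //= 2   # give one cylinder bit...
        heads *= 2        # ...to the head field instead
    return cylinders, heads, sectors

# A drive with 4096 physical cylinders presents itself with four times
# as many heads so the reported cylinder count fits in 10 bits:
print(translate_geometry(4096, 16, 63))   # (1024, 64, 63)
```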
CHS survived in some form until the late 1990s, when common hard disk drive sizes started creeping up toward its theoretical limits. At that point, it became obvious that something completely different was needed. Drives were also becoming more advanced, and around that time they gained the ability to remap bad sectors transparently to the operating system (this had previously been a responsibility of the operating system's file system code, and is the reason why, for example, MS-DOS 6.x "ScanDisk" had the ability to do a physical read/write sweep over the entire drive). Particularly once you get into transparent sector remapping territory, the entire concept of CHS addressing becomes meaningless, because there is nothing saying that the CHS address you ask for has anything at all to do with the one that ends up being used.
The idea was born to simply address the drive as a bundle of sectors. This is LBA, or Logical Block Addressing. SCSI had had this for some time, initially using 21-bit LBA but later moving to 32-bit LBA, but the PC had been thoroughly stuck in the CHS camp even though 22-bit LBA had been an option even in the original IDE standard dating back to the mid-1980s. The first standardized LBA mode relevant for our purposes was LBA-28, which allowed addressing 128 GiB worth of 512 byte sectors; in ATA-6 it was superseded by LBA-48. All modern drives (roughly anything made after 1996) use LBA, and modern operating systems know that drives can have either 512 byte or 4096 byte sectors.

When the operating system's disk driver wants some amount of data, it issues a read request for sector number N, or the interval of sector numbers M through N. The physical drive is then free to translate this to whatever internal geometry it uses, by whatever means it chooses. This includes mapping to physical platters or chips, and locations within those, as well as transparent sector remapping. The OS doesn't ever need to know or concern itself with those details. That's why you can just plug in an SSD and it simply works, despite the whole concept of cylinders, heads and sectors making absolutely no sense whatsoever with solid-state storage.

We are currently using 48-bit LBA addressing, which allows addressing 2^48 (about 2.8*10^14) sectors. With 512 byte sectors, that allows addressing 128 PiB; the move to 4,096 byte sectors theoretically raises this further to 1 EiB, although that would probably require a change to the device communications protocol. Even so, 128 PiB is so far beyond what is practically doable with current technology that this is unlikely to be a problem in the near future, at least. (You'd need about 20,000 of the newfangled 6 TB drives all striped into a single array to come even close.)
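The relationship between a (logical) CHS triple and an LBA sector number is purely arithmetic. A sketch of the standard conversion, assuming a fixed logical geometry and remembering that sector numbers start at 1:

```python
def chs_to_lba(c, h, s, heads, sectors_per_track):
    """LBA = (C * total_heads + H) * sectors_per_track + (S - 1)."""
    return (c * heads + h) * sectors_per_track + (s - 1)

def lba_to_chs(lba, heads, sectors_per_track):
    """Inverse mapping; returns (cylinder, head, sector)."""
    c, rem = divmod(lba, heads * sectors_per_track)
    h, s = divmod(rem, sectors_per_track)
    return c, h, s + 1
```

With the classic translated geometry of 1024 cylinders, 16 heads and 63 sectors, LBA 0 corresponds to C/H/S 0/0/1, and the last addressable sector, LBA 1,032,191, corresponds to 1023/15/63.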
Modern drives also expose S.M.A.R.T. self-monitoring data, so that if the OS or an application does care, it can find out something about what's going on with regard to, for example, sector remapping. But there's no need for the OS to look at that information, because (in theory, anyway) it's all handled internally by the drive.