Are all bad blocks the same on a HDD?

What could be happening is that the drive firmware is "remapping" the bad sector behind the scenes. Modern hard drives keep spare sectors around for this purpose. There is a limited number of them, and on some drives the SMART data can tell you how close you are to running out.
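As a sketch of how you might check this yourself: the `smartctl -A` output from smartmontools lists the drive's SMART attributes, including `Reallocated_Sector_Ct` (attribute 5), whose raw value is the number of remapped sectors. The sample text below is illustrative, not real drive output; the exact columns vary by drive and smartctl version.

```python
# Sketch: pull the reallocated-sector count out of `smartctl -A`-style
# output. SAMPLE is a hand-written illustration, not real drive data.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always   -           12
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always   -           12
"""

def reallocated_sectors(smart_output: str) -> int:
    """Return the raw Reallocated_Sector_Ct value, or -1 if not found."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[-1])   # raw value is the last column
    return -1

print(reallocated_sectors(SAMPLE))   # 12
```

On a real system you would feed it the output of `smartctl -A /dev/sdX` instead of the sample string; a value that keeps growing over time is the warning sign.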

However, no one can really know what is going on without disassembling the drive and examining its firmware.

There are utilities like badblocks for Linux that write to every sector and read it back.

Most physical storage media have a significant error rate that is hidden using things like forward error correction codes. You might have thousands of errors on your hard drive right now, but since the drive writes redundant data for each sector, it never reports an error and you never know about it.


When a typical mechanical HDD encounters what it believes to be a bad sector, it does one of several different things. Steve Gibson, the author of SpinRite, often talks about how his program deals with this.

Basically, a typical HDD can determine whether it is able to read the data by doing error checking. If it encounters an error, it is able to correct errors up to a certain number of bits.
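To make "correct errors up to a certain number of bits" concrete, here is a toy single-bit corrector using a Hamming(7,4) code. Real drives use far stronger codes (Reed-Solomon, LDPC) over whole sectors, but the principle is the same: the parity bits pinpoint which bit flipped, so the drive can hand back correct data from an imperfect read.

```python
# Toy ECC illustration: Hamming(7,4) corrects any single flipped bit
# in a 7-bit codeword carrying 4 data bits. Drives use much stronger
# codes, but the idea -- redundancy locates and fixes the error -- is the same.

def encode(nibble):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4           # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4           # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4           # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                          # simulate a single-bit read error
assert decode(word) == data           # the error is silently corrected
```

Flip two bits, though, and this code can no longer recover the data; that is the point at which a drive reports an unreadable sector.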

What SpinRite does is basically ask the HDD to read the data, over and over. If the HDD eventually manages to read it, SpinRite moves the data to a different sector; this allows the HDD to mark the previous sector as bad, and that is how it is able to recover data for you.

On another internet forum the prevailing opinion seems to be that there are two types of bad blocks: "soft" bad blocks (which are indeed checksum errors, and can be safely corrected with no impact on further drive longevity) and "hard" bad blocks that are micro-fissures in the drive platters, which will make the sector always unreadable; no amount of overwriting will help that. In the latter case, the drive can also be expected to rapidly deteriorate in quality.

So there are either physical, unrecoverable defects of the platters themselves, or known recoverable defects: put differently, a platter can and will be only 99.999% perfect.

I've got an idea about a utility similar to memtest that would do the same to a HDD (writing different patterns, possibly several times, and then reading them back to check), but I also wonder if there is a point. What if the guys on the other forum are right after all?
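The core of the idea above can be sketched in a few lines: write known bit patterns to every block, read them back, and report any block that does not match (this is essentially what `badblocks -w` does). The demo below runs against an ordinary scratch file; pointing the same logic at a raw device such as /dev/sdX would destroy its contents, so treat it strictly as an illustration.

```python
# Sketch of a memtest-style surface check: write patterns to every block,
# read them back, collect mismatching block numbers. Demonstrated on a
# scratch file -- running this against a real device would erase it.
import os

PATTERNS = [0xAA, 0x55, 0xFF, 0x00]   # alternating and solid bit patterns
BLOCK = 4096

def surface_test(path, size):
    bad = set()
    for value in PATTERNS:
        pattern = bytes([value]) * BLOCK
        with open(path, "wb") as f:            # write pass
            for _ in range(size // BLOCK):
                f.write(pattern)
        with open(path, "rb") as f:            # verify pass
            for blk in range(size // BLOCK):
                if f.read(BLOCK) != pattern:
                    bad.add(blk)
    return sorted(bad)

# Demo on a 1 MiB scratch file standing in for a disk.
with open("scratch.img", "wb") as f:
    f.truncate(1 << 20)
print(surface_test("scratch.img", 1 << 20))    # [] -> no mismatches
os.remove("scratch.img")
```

Note the caveat the answers below raise: the drive's own firmware sits between this test and the platter, transparently correcting and remapping errors, so a clean result does not mean the surface is flawless.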

Besides the fact that this program already exists (SpinRite), it sounds like you don't know enough about the inner workings of a HDD to write this utility.

What made SpinRite so good in its early days was its ability to reverse engineer the drive's error checking.

If that is true, then it would be easy to distinguish between a truly bad drive, and a good one that has simply had the misfortune of an incorrect checksum. Unfortunately I'm old enough to have stopped believing in fairy tales.

It's less about checksums and more about error checking and error correction; please do more research.


I use SpinRite to check hard drives for defects. So far I'm happy with it. It disables any error correction in the HDD so that the real bad blocks will appear.


This answer includes speculation and opinion, and is more fun than fact.

http://www.seagate.com/staticfiles/support/seatools/user%20guides/SeaToolsDOSguide.EN.pdf (written around 2010):

By design, modern disk drives maintain spare sectors for reallocation purposes. Usually, sectors become difficult to read long before they become impossible to read. In this situation the actual data bytes in the sector are preserved and transferred to the new spare during a sector reallocation. Similarly, when a disk drive writes data and encounters a problem, the drive firmware retires the problem sector and activates a replacement before giving successful write status.

By this logic, any "real" new bad sector marked out by the hardware may very well have had a real problem at one time or another, and should not be re-tested and re-used, because it is a problem waiting for a place to happen, just like the originally marked-out bad sectors.
The space consumed by the few "bad" sectors is usually so small that it is not worth considering re-using them.

Should a re-test utility test the sector under many conditions? Yes, it should; it would take only minor extra time to confirm that multiple, different writes to it are readable, and it would not cut into the total capacity in the area. And why not test a whole bunch of sectors in that same location, to see whether there was actual damage in the area or just a fluke at the time?

- http://www.seagate.com/staticfiles/support/samsung/docs/M2%20Portable%20Series%20User%20Manual%20EN%20Rev00%20110428.pdf

If you make an impact on the external drive, it may cause bad sectors on the disk. Bad sectors can cause various read/write errors.

Impact bad sectors always seem to show up in groupings in a similar area, with added bad sectors growing from that, so a person should be able to get an idea of actual head-impact damage on the surfaces from the grouping of them. Any actual physical damage to an area that releases particles from it means those particles have to be slung off and filtered out.
From all the other information, any head-impact damage might also result in the head itself becoming worse. If there are noticeable accumulations of new bad sectors in groups, I don't think I would want to rely on that drive, but my drives usually don't have a lot of (shown) new remapped sectors at all.


Power fluctuation, static induction, interference, atomic-scale wobble, temperature, the data density: I would wonder if the hardware itself had an error with "some sectors" under "less than ideal" circumstances, but at any rate those would still be the "worst of the bunch". When the manufacturer's own tests log out the original bad sectors, do they go back and say "well, the AC clicked on in the building, or we had a flux in background radiation, so let's re-test that"? :-) Or do they figure that under any conditions the rest of them worked and this one did not?

If there were a way to determine whether the sectors are damaged, growing damage from surface damage or surface imperfections, it would be through the grouping of the sectors marked bad. Arggg.

I think only a hard drive manufacturer could correctly answer this. There are other data sheets, at Seagate, that go well beyond my skill level; WD seems to stick more with the simpler ones.

The most I have to go on is what I have seen and experienced, and the data they provide. There have been times when stuff was deemed bad and I re-used it, and it never presented a problem; I knew at the time it was a software/hardware problem of my own. If it was damage I caused, I would hope the drive logged out all 15 tracks there, never to pass over the whole area again :-). When looking inside the drives, it is beyond me; they look perfect. Following this picture, we see the molecular mounds the head tries to fly over. And magnetic force microscopy?

If hard drive manufacturers knew all the answers, that would not explain all the angry users who receive re-tested, remanufactured drives as replacements and still have problems with them. Some of those users were indeed the problem, but not all of them.

Follow this picture to see an idea of "weak heads".
Zeros or not, I doubt it matters; a write is very complete.