For bad and/or marginal blocks on hard disks, can anything be more effective than a single pass of zeros?

I take the following to mean that YES, a 7-pass (or 3-pass) erase may find more bad blocks than a single pass, simply because some blocks may be, well, "iffy" and need more than one pass to be caught and excluded. Please correct me if I am misinterpreting the following:

[T]he Zero Out Data option... will trigger the drive's built-in Spare Bad Blocks routine.... [I]f you're going to be committing important data to the drive, you may wish to run... a drive stress test... to exercise the drive, by writing and reading data from as many locations as possible for as much time as you can spare.... [A]ny weak spot will show itself now instead of sometime down the road.

From:

Revive a Hard Drive for Use With Your Mac

Scanning for Bad Blocks

This next step will check every location of the drive and determine that each section can have data written to it, and the correct data read back. In the process of performing this step, the utilities we use will also mark any section that is unable to be written to or read from as a bad block. This prevents the drive from using these areas later....

When Disk Utility uses the Zero Out Data option, it will trigger the drive's built-in Spare Bad Blocks routine as part of the erasure process....

[I]f you're going to be committing important data to the drive, you may wish to run one more test. This is a drive stress test, sometimes referred to as a burn-in. The purpose is to exercise the drive, by writing and reading data from as many locations as possible for as much time as you can spare. The idea is that any weak spot will show itself now instead of sometime down the road.

There are a few ways to perform a stress test, but in all cases, we want the entire volume to be written to and read back. Once again, we will use two different methods.

Stress Test With Disk Utility

When Disk Utility uses the DOE-compliant 3-pass secure erase, it will write two passes of random data and then a single pass of a known data pattern. [Or 7-pass will write over data 7 times.] ... Once the erasure is complete, if Disk Utility shows no errors, you're ready to use the drive knowing it's in great shape.


Thanks – that's the first write, which is reportedly not enough. In an edit to your answer, can you explain why, in the alternative advice, there's emphasis on the first re-write?

I'm not sure I'm following your question.

When a drive detects an unwritable (bad) sector, it's supposed to mark that block as bad, then re-map it to a spare sector. It makes no difference what data is being written to the drive. This will happen until the supply of spare sectors is used up. After that, with most drives, the bad sector is simply left in place and the drive should be considered bad.
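To make that concrete, here's a toy model of the re-mapping logic (a sketch only: the class, sector counts, and spare-pool size are all made up for illustration, not taken from any real firmware):

    # Toy model of a controller's bad-sector remapping. Everything here
    # (names, sizes, behavior) is illustrative, not real firmware.
    class SimDisk:
        def __init__(self, sectors=1000, spares=10, bad=None):
            self.data = {}                    # lba -> payload
            self.remap = {}                   # bad lba -> spare lba
            self.spares = list(range(sectors, sectors + spares))
            self.bad = set(bad or [])         # physically unwritable sectors

        def write(self, lba, payload):
            target = self.remap.get(lba, lba)
            if target in self.bad:            # the write fails at the platter
                if not self.spares:           # spare pool exhausted: drive is bad
                    raise IOError(f"sector {lba} is bad and no spares remain")
                target = self.spares.pop(0)   # re-map to a spare sector
                self.remap[lba] = target
            self.data[target] = payload       # the payload itself never matters

    disk = SimDisk(bad={5, 42})
    disk.write(5, b"\x00" * 512)              # zeros trigger the re-map...
    disk.write(42, b"Hello, World!")          # ...but so would any other data
    print(disk.remap)                         # {5: 1000, 42: 1001}

Notice the payload never enters into the re-map decision, which is exactly why the choice of zeros is incidental.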

From the post at your Apple link, I would tend to agree more with Martin Joseph than with some of the others. If the number of bad sectors is increasing with any regularity, it likely means bearings are failing in either the actuator or the drive motor. That allows too much play in the mechanism and results in more and more head crashes as the problem worsens, which causes more and more bad sectors. Either the drive will fail outright from the mechanical problem (bad bearings), the heads will eventually become damaged, or the platters will become so damaged that no spare sectors remain, leaving the bad sectors in place on the platters to cause more and more problems.

If the bad sector problem is due to a one-time head crash (an impact, or a small particle entering the drive chamber), it's quite possible that the drive can be corrected by "zeroing" and may last for years.

The only case I can think of where a multi-pass overwrite would be needed is to "clear" weak sectors. The time it takes to read a weak sector should raise a flag to the controller, but if that time never exceeds the drive's threshold, the sector won't be re-mapped. With weak sectors, the read time may vary enough from attempt to attempt that at some point the controller finally flags the sector as failed and re-maps it.
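One way to hunt for weak sectors from user space is to time raw reads block by block. This is only a rough sketch: /dev/disk3 is a placeholder, raw device access generally requires root, the 0.1 s threshold is arbitrary, and OS caching can mask the very latencies you're trying to measure.

    import os, time

    DEV = "/dev/disk3"        # placeholder device path; substitute your own
    BLOCK = 4096              # probe granularity; physical sectors are 512 or 4096 bytes

    fd = os.open(DEV, os.O_RDONLY)
    try:
        offset, slow = 0, []
        while True:
            start = time.monotonic()
            chunk = os.pread(fd, BLOCK, offset)   # positional read, no seek needed
            elapsed = time.monotonic() - start
            if not chunk:                         # end of device
                break
            if elapsed > 0.1:                     # arbitrary "weak sector" threshold
                slow.append((offset, elapsed))
            offset += len(chunk)
    finally:
        os.close(fd)

    for off, secs in slow:
        print(f"slow read at byte offset {off}: {secs:.3f}s")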

The process of "zeroing" the drive is really something of a misnomer. You could fill the blocks with "Hello, World!" data and it should still remap the bad sectors if spares are available. The reason people suggest using the "zeroing" technique is because it's built into Disk Utility, not as a drive repair procedure but as a security procedure. It just incidentally happens to force the controller to remap bad sectors when it encounters them.

I would suggest the term "re-write" simply refers to the fact that the sector once held data and now, in the zeroing or overwriting process, it's being re-written.


Theoretically, one pass should suffice. The way it works is like this (a rough user-level sketch follows the list):

  1. The controller will detect the bad sector during an attempted write.
  2. The write fails.
  3. The controller removes that sector from the list of available sectors and points it to a spare sector, if any are left on the drive.
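At the user level, a single zeroing pass amounts to nothing more than writing every block once and letting the controller handle steps 1–3. A sketch, assuming a placeholder device path (this destroys all data on the target and needs root):

    import os

    DEV = "/dev/disk3"            # placeholder: writing here destroys all data on it
    BLOCK = 1024 * 1024           # 1 MiB writes keep the pass reasonably fast
    ZEROS = b"\x00" * BLOCK

    fd = os.open(DEV, os.O_WRONLY)
    try:
        offset = 0
        while True:
            try:
                written = os.pwrite(fd, ZEROS, offset)
            except OSError as e:
                # A healthy drive with spares left re-maps silently (step 3);
                # an error surfacing here usually means the end of the device
                # or an exhausted spare pool.
                print(f"write stopped at byte offset {offset}: {e}")
                break
            if written < BLOCK:   # short write: reached the end of the device
                break
            offset += written
    finally:
        os.close(fd)

If the drive re-maps successfully, the pass completes without the host ever seeing an error; the re-mapping happens entirely below the controller's interface.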

The problem you might run into occurs when a sector is weak rather than outright bad. A weak sector is one that can still be read and written, but the signal the controller detects is weak: it may take many attempts to read such a sector, though not necessarily to write to it. The controller may never mark the sector as bad, even though it can end up bottlenecking your system if it sits in a frequently read file.
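To see that variance directly, you can time repeated reads of a single suspect block (again a sketch: the path and offset are placeholders, and unless you go through an uncached raw device node the OS page cache will hide the latency):

    import os, statistics, time

    DEV = "/dev/disk3"            # placeholder device path
    OFFSET = 123 * 4096           # hypothetical suspect block
    BLOCK = 4096

    times = []
    for _ in range(20):
        fd = os.open(DEV, os.O_RDONLY)    # reopen each time to limit caching effects
        start = time.monotonic()
        os.pread(fd, BLOCK, OFFSET)
        times.append(time.monotonic() - start)
        os.close(fd)

    print(f"min {min(times)*1e3:.2f} ms, max {max(times)*1e3:.2f} ms, "
          f"stdev {statistics.stdev(times)*1e3:.2f} ms")

A healthy sector should read back in a tight, consistent band; a weak one tends to show occasional outliers as the controller retries.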

The number of passes shouldn't make any difference, and the fact that zeros are being written is irrelevant; it could be any data. Zeros are chosen because the feature exists as a security option: it writes a binary 0 into every block it erases so the previous contents can't be recovered.
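For completeness, the 3-pass scheme described in the quoted article (two passes of random data, then a known pattern) reduces to running the same full-surface write three times with different payloads. A sketch, with the same caveats as above (placeholder path, destroys everything on the target):

    import os

    TARGET = "/dev/disk3"             # placeholder: all data on it will be destroyed
    BLOCK = 1024 * 1024

    def overwrite_pass(path, make_chunk):
        # One full pass over the device; the controller re-maps as it goes.
        fd = os.open(path, os.O_WRONLY)
        try:
            offset = 0
            while True:
                try:
                    written = os.pwrite(fd, make_chunk(BLOCK), offset)
                except OSError:
                    break                 # ran off the end of the device (varies by OS)
                if written < BLOCK:       # short write: end of device
                    break
                offset += written
        finally:
            os.close(fd)

    # Two passes of random data, then one pass of a known pattern (zeros),
    # mirroring the DOE-style 3-pass erase described above.
    overwrite_pass(TARGET, os.urandom)
    overwrite_pass(TARGET, os.urandom)
    overwrite_pass(TARGET, lambda n: b"\x00" * n)

Whether the extra passes catch anything a single pass missed comes down entirely to the weak-sector timing behavior described earlier, not to the data being written.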