Why is this SSD drive failing with bad sectors, and was it predictable?
Note: this question was previously closed as off-topic. You can read the discussion. My reasons for asking it here are:
- This drive is in an offline content cache server for schools in rural Zambia.
- The servers are created from disk images, and all the content is replaceable.
- It has to be cheap because Zambian schools are budget-limited and there will be a lot of them.
- It also has to be reliable because it might be 8 hours each way on bad roads to replace.
- I'm not allowed to ask here what drives are not "ultra-cheap crap".
- So we're doing our own research and experimentation on drives that meet these criteria.
- My inability to repair the bad sectors by overwriting them (auto reallocate) defied my assumptions and I wanted to know why.
- I thought maybe an ATA SECURITY ERASE might fix the bad sectors (see the hdparm sketch after this list), but I wanted others' opinions before I trash the drive.
- I thought I might have missed something in the SMART data that could have predicted the failure.
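For reference, the secure-erase attempt I have in mind would go roughly like this (a minimal hdparm sketch; the temporary password "p" is a placeholder and the drive must not report "frozen" in its security section for the erase to be accepted):

# check the drive's security feature state (look for "not frozen")
hdparm -I /dev/sda | grep -A8 'Security:'

# set a temporary user password, then issue the ATA SECURITY ERASE
hdparm --user-master u --security-set-pass p /dev/sda
hdparm --user-master u --security-erase p /dev/sda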
This is a Kingston 240GB SSD disk that was working fine on site for about 3 months, and has suddenly developed bad sectors:
smartctl 5.41 2011-06-09 r3365 [i686-linux-3.2.20-net6501-121115-1cw] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Device Model: KINGSTON SVP200S3240G
Serial Number: 50026B7228010E5C
LU WWN Device Id: 5 0026b7 228010e5c
Firmware Version: 502ABBF0
User Capacity: 240,057,409,536 bytes [240 GB]
Sector Size: 512 bytes logical/physical
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 8
ATA Standard is: ACS-2 revision 3
Local Time is: Tue Mar 5 17:10:24 2013 CAT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x02) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 0) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 48) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x0021) SCT Status supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 084 084 050 Pre-fail Always - 10965286670575
5 Reallocated_Sector_Ct 0x0033 100 100 003 Pre-fail Always - 16
9 Power_On_Hours 0x0032 000 000 000 Old_age Always - 46823733462185
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 127
171 Unknown_Attribute 0x0032 000 000 000 Old_age Always - 0
172 Unknown_Attribute 0x0032 000 000 000 Old_age Always - 0
174 Unknown_Attribute 0x0030 000 000 000 Old_age Offline - 131
177 Wear_Leveling_Count 0x0000 000 000 000 Old_age Offline - 1
181 Program_Fail_Cnt_Total 0x0032 000 000 000 Old_age Always - 0
182 Erase_Fail_Count_Total 0x0032 000 000 000 Old_age Always - 0
187 Reported_Uncorrect 0x0032 000 000 000 Old_age Always - 49900
194 Temperature_Celsius 0x0022 033 078 000 Old_age Always - 33 (Min/Max 21/78)
195 Hardware_ECC_Recovered 0x001c 120 120 000 Old_age Offline - 235163887
196 Reallocated_Event_Count 0x0033 100 100 003 Pre-fail Always - 16
201 Soft_Read_Error_Rate 0x001c 120 120 000 Old_age Offline - 235163887
204 Soft_ECC_Correction 0x001c 120 120 000 Old_age Offline - 235163887
230 Head_Amplitude 0x0013 100 100 000 Pre-fail Always - 100
231 Temperature_Celsius 0x0013 100 100 010 Pre-fail Always - 0
233 Media_Wearout_Indicator 0x0000 000 000 000 Old_age Offline - 363
234 Unknown_Attribute 0x0032 000 000 000 Old_age Always - 208
241 Total_LBAs_Written 0x0032 000 000 000 Old_age Always - 208
242 Total_LBAs_Read 0x0032 000 000 000 Old_age Always - 1001
SMART Error Log not supported
SMART Self-test Log not supported
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Now I get bad blocks in certain places on the disk:
root@iPad2:~# badblocks /dev/sda -v
Checking blocks 0 to 234431063
Checking for bad blocks (read-only test): 8394752 done, 1:15 elapsed
8394756 done, 1:21 elapsed
8394757 done, 1:23 elapsed
8394758 done, 1:24 elapsed
8394759 done, 1:27 elapsed
...
190882871 done, 29:49 elapsed
190882888 done, 29:53 elapsed
190882889 done, 29:54 elapsed
190882890 done, 29:56 elapsed
190882891 done, 29:58 elapsed
done
Pass completed, 80 bad blocks found.
They appear to be repeatable, and auto reallocation fails, so they can't be fixed by writing to them:
root@iPad2:~# badblocks /dev/sda -wvf 8394756 8394756
/dev/sda is apparently in use by the system; badblocks forced anyway.
Checking for bad blocks in read-write mode
From block 8394756 to 8394756
Testing with pattern 0xaa: 8394756
done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 1 bad blocks found.
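Another way to poke the same spot, bypassing badblocks' 1024-byte blocks, would be to write the underlying 512-byte sectors directly; a rough sketch (destructive to whatever lives in those sectors, and hdparm deliberately guards the write behind an extra flag):

# badblocks counts 1024-byte blocks by default, so block 8394756
# covers 512-byte sectors 16789512 and 16789513
dd if=/dev/zero of=/dev/sda bs=512 count=2 seek=16789512 oflag=direct

# hdparm can also read and (destructively) rewrite a single sector
hdparm --read-sector 16789512 /dev/sda
hdparm --write-sector 16789512 --yes-i-know-what-i-am-doing /dev/sda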
And I get errors like this in the system logs:
ata1.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
ata1.00: irq_stat 0x40000000
ata1.00: failed command: READ FPDMA QUEUED
ata1.00: cmd 60/08:00:08:30:00/00:00:01:00:00/40 tag 0 ncq 4096 in
res 51/40:08:08:30:00/00:00:01:00:00/40 Emask 0x409 (media error) <F>
ata1.00: status: { DRDY ERR }
ata1.00: error: { UNC }
ata1.00: configured for UDMA/133
sd 0:0:0:0: [sda] Unhandled sense code
sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
Descriptor sense data with sense descriptors (in hex):
72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
01 00 30 08
sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
sd 0:0:0:0: [sda] CDB: Read(10): 28 00 01 00 30 08 00 00 08 00
end_request: I/O error, dev sda, sector 16789512
Buffer I/O error on device sda, logical block 2098689
ata1: EH complete
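The numbers in this log are consistent with the badblocks output above: badblocks counts 1024-byte blocks by default, and the buffer-layer message apparently counts 4096-byte blocks, so all three figures point at the same place on the disk:

echo $(( 8394756 * 2 ))    # 16789512 -> the failing 512-byte sector in end_request
echo $(( 16789512 / 8 ))   # 2098689  -> the "logical block" in the buffer I/O error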
Now I don't understand why auto reallocation is failing on this disk. The smartctl output all looks fine to me. Only 16 sectors have been reallocated, which is not many at all. I can't see any legitimate reason why this drive refuses to reallocate sectors. Is this model of SSD just broken or badly designed?
Notes:
- attribute 174 is "Unexpected Power Loss" according to Kingston's docs (see the smartctl sketch after this list).
- 131 unexpected power losses is quite bad.
- attribute 187 (Reported_Uncorrect) is 49900 out of a possible maximum of 65535
- the highest temperature ever recorded, 78°C, is quite high
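Since the drive is not in the smartctl database, the attributes shown as Unknown_Attribute can at least be labelled by hand when displaying them. A sketch, using the name Kingston documents for attribute 174 (this relies on smartctl 5.40+ accepting an optional attribute name in -v, and it only changes the label, not the value):

smartctl -A /dev/sda -v 174,raw48,Unexpect_Power_Loss_Ct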
The most interesting SMART counters are hidden by Kingston on this drive, but we can infer how much of the spare sector pool remains from attribute 196, Reallocated_Event_Count, which has the following formula for its normalised value:
100 - (100 * RBC / MRC)
RBC = Retired Block Count (Grown)
MRC = Maximum Reallocation Count
Since the normalised value is 100, this implies that RBC << MRC, so we are nowhere near exhausting the sectors available for reallocation.
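As a sanity check, here is that arithmetic with the one number we do know (RBC = 16 from attribute 5) and a purely hypothetical MRC, since Kingston does not publish the real one. For the normalised value to still read 100 with 16 retired blocks, 100 * RBC / MRC must round down to zero, i.e. MRC must be greater than 1600 (assuming simple truncation):

RBC=16      # Retired Block Count, raw value of attribute 5
MRC=2000    # Maximum Reallocation Count - hypothetical, the real value is hidden
echo $(( 100 - (100 * RBC / MRC) ))   # prints 100: the spare pool is barely touched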
Solution 1:
Cheap SSDs seem to have serious quality issues. You will find plenty of users reporting problems with your particular drive model. However, I suspect the vendors also sell different hardware (e.g. other NAND chips/controllers) under the same label, so individual drives may behave differently.
The SMART values give no indication that the drive will fail soon. In my experience it is the same: drive errors appear suddenly, and then the disk fails.
Why are you using SSDs in the first place? I see their advantages: no mechanical parts, dust-proof, less heat. But I also see significant disadvantages, for example the limited number of writes each memory cell can take, which even with wear-levelling may be reached quickly on a busy volume, e.g. one using a journaling filesystem.
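If journal and metadata writes are the worry, they can at least be reduced with mount options; an illustrative fstab line for an ext4 content volume (the device and mount point are placeholders):

# /etc/fstab - cut down write traffic on the content cache volume
/dev/sda1  /srv/cache  ext4  noatime,commit=60  0  2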
The electronics are also affected by high humidity and high temperatures, just as with conventional hard drives.
Why not use cheaper conventional hard drives instead, and (if RAID is not required) ship each server with spare drives that stay disconnected until they are needed as replacements? They could sit ready inside the server case, or in a hot-swap cage so that a disk can be moved between servers, and then be prepared by a script in the field or remotely (if possible).
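Such a preparation script could be as simple as dumping the existing server image onto the spare drive; a minimal sketch, assuming the image sits at /srv/images/cache.img and the spare appears as /dev/sdb (both are placeholders):

IMAGE=/srv/images/cache.img   # placeholder path to the disk image
SPARE=/dev/sdb                # placeholder device node of the spare drive
dd if="$IMAGE" of="$SPARE" bs=4M conv=fsync
sync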
As long as conventional hard drives are not powered up, the transport to their destination can be rough...
If there are multiple school servers / permanent clients and a reliable, redundant network, a distributed filesystem (e.g. GlusterFS) could also help to build a fail-safe cache server.
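For example, a two-node replicated GlusterFS volume would keep the cache reachable if one server's disk dies; a rough sketch, with made-up host names and brick paths:

# on school-srv1, with glusterd running on both servers
gluster peer probe school-srv2
gluster volume create cachevol replica 2 school-srv1:/export/brick1 school-srv2:/export/brick1
gluster volume start cachevol

# clients then mount the replicated volume
mount -t glusterfs school-srv1:/cachevol /srv/cache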