3Ware 9650SE is rebuilding a RAID6 array with two degraded disks?
I have a RAID6 array on a 3Ware 9650SE which is degraded:
tw_cli /c0/u0 show
Unit   UnitType  Status      %RCmpl  %V/I/M  Port  Stripe  Size(GB)
------------------------------------------------------------------------
u0     RAID-6    REBUILDING  60%(A)  -       -     256K    5587.9
u0-0   DISK      DEGRADED    -       -       p0    -       1862.63
u0-1   DISK      OK          -       -       p1    -       1862.63
u0-2   DISK      OK          -       -       p2    -       1862.63
u0-3   DISK      OK          -       -       p3    -       1862.63
u0-4   DISK      DEGRADED    -       -       p4    -       1862.63
u0/v0  Volume    -           -       -       -     -       50
u0/v1  Volume    -           -       -       -     -       5537.9
Two disks are degraded, which is the maximum number of failures a RAID6 can tolerate, so how can the unit be rebuilding?
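(Aside, for anyone who wants to pull this apart programmatically: below is a minimal sketch. It assumes tw_cli is in PATH and that the column layout matches the listing above; other firmware revisions print extra columns, so the field indices may need adjusting.)

import subprocess

def unit_summary(controller="/c0", unit="u0"):
    """Parse `tw_cli /c0/u0 show` into (unit_status, {port: disk_status})."""
    out = subprocess.run(
        ["tw_cli", f"{controller}/{unit}", "show"],
        capture_output=True, text=True, check=True,
    ).stdout
    unit_status, disks = None, {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0] == unit and fields[1].startswith("RAID"):
            unit_status = fields[2]            # e.g. "REBUILDING"
        elif len(fields) >= 6 and fields[1] == "DISK":
            disks[fields[5]] = fields[2]       # e.g. {"p0": "DEGRADED"}
    return unit_status, disks

if __name__ == "__main__":
    status, disks = unit_summary()
    print("unit:", status)
    for port, state in sorted(disks.items()):
        print(port, state)

This is what makes the output above so confusing: the unit-level status and the per-disk status are separate columns that can disagree with each other.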
Edit:
I noticed that the activity lights of the degraded disks were still blinking and the error LED wasn't lit, so they hadn't actually been marked as failed.
The rebuild is done, and this is the output now:
Unit   UnitType  Status        %RCmpl  %V/I/M  Port  Stripe  Size(GB)
------------------------------------------------------------------------
u0     RAID-6    INITIALIZING  -       35%(A)  -     256K    5587.9
u0-0   DISK      OK            -       -       p0    -       1862.63
u0-1   DISK      OK            -       -       p1    -       1862.63
u0-2   DISK      OK            -       -       p2    -       1862.63
u0-3   DISK      OK            -       -       p3    -       1862.63
u0-4   DISK      OK            -       -       p4    -       1862.63
u0/v0  Volume    -             -       -       -     -       50
u0/v1  Volume    -             -       -       -     -       5537.9
So apparently, a 'DEGRADED' disk can still be part of a rebuild. Am I right in assuming this should have been a state like 'DEGRADED_BUT_PART_OF_REBUILD' or 'PREVIOUSLY_DEGRADED'?
I think it's just bad terminology. Strictly speaking, an individual disk cannot be "degraded" (only the unit as a whole can), so the state should really have been displayed as something like "OK, REBUILDING", and that is how "DEGRADED" should be interpreted on 3Ware controllers when applied to a single disk.
That was a normal partial rebuild operation in a RAID6. Note that the 9650SE has a particularly nasty bug that will mark one disk in a RAID6 as requiring a rebuild when the system isn't shut down properly.
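Because of that, it's worth catching spurious rebuilds early instead of stumbling over them weeks later. A rough cron-style check, assuming tw_cli is installed at /usr/sbin/tw_cli and that /c0/u0 is the unit to watch (both are assumptions to adjust for your system), could look like this:

import subprocess
import sys

TW_CLI = "/usr/sbin/tw_cli"         # assumed install path; adjust as needed
UNIT = "/c0/u0"                     # controller/unit to watch
QUIET_STATES = {"OK", "VERIFYING"}  # states that don't need attention

def unit_status():
    """Return the Status column of the unit summary line."""
    out = subprocess.run(
        [TW_CLI, UNIT, "show"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        # Unit summary line looks like: "u0  RAID-6  REBUILDING  60%(A)  ..."
        if len(fields) >= 3 and fields[1].startswith("RAID"):
            return fields[2]
    return "UNKNOWN"

if __name__ == "__main__":
    status = unit_status()
    if status not in QUIET_STATES:
        # cron mails any output to the administrator
        print(f"{UNIT} is {status}; check tw_cli for details", file=sys.stderr)
        sys.exit(1)

Run it from cron every few minutes; it stays silent while the unit is OK or verifying and produces output (and a non-zero exit code) when the unit starts rebuilding, initializing, or degrades.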