How to correct 512-byte sector MBR on a 4096-byte sector disk?

Solution 1:

Sector-size issues are becoming quite complex. Until late 2009, the vast majority of hard disks used 512-byte sectors, and that was that. In late 2009, disk manufacturers began introducing so-called Advanced Format (AF) disks, which use 4096-byte sectors. These first AF disks (and, AFAIK, all AF disks today) present an interface to the computer that shows each 4096-byte physical sector as split into eight 512-byte logical sectors. This translation enables older tools that were built with 512-byte assumptions, including many BIOSes, to continue to work. I don't know whether your disk uses AF or not, but in either case it almost certainly uses a 512-byte logical sector size, meaning that the interface to the OS should use 512-byte sectors.
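If you want to check what your disk reports, the kernel exposes both sizes through sysfs; a quick sketch (sdb is a placeholder for your device, and the fallback values after || are only illustrative defaults for when the device doesn't exist):

```shell
# Logical sector size is what the OS addresses; physical is what the disk
# actually uses internally. These sysfs paths are standard on Linux.
dev=sdb
lbs=$(cat "/sys/block/$dev/queue/logical_block_size" 2>/dev/null || echo 512)
pbs=$(cat "/sys/block/$dev/queue/physical_block_size" 2>/dev/null || echo 4096)
echo "logical=$lbs physical=$pbs"
# On an AF disk this prints 8: eight 512-byte logical sectors per physical one.
echo "logical sectors per physical sector: $((pbs / lbs))"
```

(`blockdev --getss` and `blockdev --getpbsz` on the device node report the same two values.)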

Complicating matters are certain USB disk enclosures. Some of these enclosures do the reverse of what AF does: they take eight disk sectors and bundle them into one new 4096-byte sector. I'm not sure what the reasoning is behind this move, but one practical advantage is that disks larger than 2 TiB can be used with the old MBR partitioning system. One major disadvantage is that a disk partitioned in one of these enclosures cannot be used directly, or in an enclosure that doesn't do this type of translation. Likewise, a disk prepared without this translation can't be used when it's transferred into such an enclosure. Note that this problem goes well beyond the MBR itself; your disk might identify the first partition as beginning on (512-byte) sector 2048, but if your OS were to seek to (4096-byte) sector 2048, it would not find the start of that partition! You've run into this problem. As such, your initial thought that it's the USB enclosure's fault is closer to the mark than your more recent thought that your motherboard messed it up. I've never heard of a motherboard translating sector size in this way. (Some hardware RAID devices do so, though.)
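The factor-of-eight mismatch is easy to see with a little arithmetic; sector 2048 here is just the common first-partition start from the example above:

```shell
# A partition recorded as starting at 512-byte LBA 2048 begins at this byte:
echo $((2048 * 512))          # 1048576 (1 MiB)
# An enclosure using 4096-byte sectors looks for LBA 2048 here instead:
echo $((2048 * 4096))         # 8388608 -- eight times too far into the disk
# The correct 4096-byte LBA for that 1 MiB offset would be:
echo $((2048 * 512 / 4096))   # 256
```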

I don't know of a way to force Linux to adjust its idea of the sector size, but if you have enough disk space, doing a low-level disk copy to another disk may help. For instance:

dd if=/dev/sdb of=~/image.img

This will copy your disk from /dev/sdb (the USB disk; adjust as necessary) to the file ~/image.img. You can then use the following script to mount the image's partitions:

#!/bin/bash
# mount_image: mount one partition of a raw disk image via a loop device.
# $1 = image file, $2 = partition number, $3 = mount point
gdisk -l "$1" > /tmp/mount_image.tmp
# Pull the start sector of the requested partition out of gdisk's table
StartSector=$(awk -v part="$2" '$1 == part {print $2; exit}' /tmp/mount_image.tmp)
StartByte=$((StartSector * 512))
echo "Mounting partition $2, which begins at sector $StartSector"
mount -o loop,offset="$StartByte" "$1" "$3"
rm /tmp/mount_image.tmp

Save the script as, say, mount_image and use it like this:

./mount_image ~/image.img 2 /mnt

This will mount partition 2 of image.img to /mnt. Note that the script relies on GPT fdisk (gdisk), which most distributions include in a package called gptfdisk or gdisk.

In the long run, a better solution is to find a way to connect the disk that won't do the sector-size translation. A direct connection to a new motherboard should do the trick; or you can probably find an external enclosure that doesn't do the translation. In fact, some enclosures do the translation on USB ports but not on eSATA ports, so if your enclosure has an eSATA port, you could try using that. I realize that these solutions are all likely to cost money, which you say you don't have, but maybe you can trade your translating enclosure for one that doesn't do the translation.

Another option that occurs to me is to try using a virtual machine such as VirtualBox. Such a tool might assume a 512-byte sector size when accessing the disk device, effectively undoing the translation; or you might be able to copy the disk's contents raw (as in dd if=/dev/sdc of=/dev/sdb) within the virtual machine, which might store the contents compressed, enabling the image to fit in less disk space than the original consumes.
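For the raw-disk route, VirtualBox attaches a physical disk to a VM through a small wrapper VMDK created with VBoxManage; a sketch (the device name and filename are assumptions):

VBoxManage internalcommands createrawvmdk -filename usbdisk.vmdk -rawdisk /dev/sdc

You can then attach usbdisk.vmdk to a VM and test whether the guest sees the partitions correctly.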

Solution 2:

This script generalizes Rod Smith's proposal for setups where the disk holds RAID or encrypted volumes. No warranty; feel free to improve it! (Updated with the latest findings about mdadm.)

#!/bin/sh
#
# This script solves the following problem:
#
# 1. You create a GPT partition table on a large disk while it is attached
#    directly via SATA, where the device presents itself with a block size
#    of 512 bytes:
#    sd 3:0:0:0: [sda] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB)
#
# 2. You then attach it through a SATA-to-USB adapter like ID 067b:2773
#    Prolific Technology, Inc., which presents the device with a block
#    size of 4096 bytes:
#    sd 19:0:0:0: [sdc] 732566646 4096-byte logical blocks: (3.00 TB/2.72 TiB)
#
# 3. The kernel is unable to read the partition table correctly through
#    the USB adapter.
#
#
# With the current tools (kernel and gdisk) in Debian wheezy it is
# possible to use losetup to remap the partitions to loop devices, so
# you can use them as usual with any filesystem, RAID, or crypto.
#
# I still do not know whether this issue originates in the adapter or in
# the disk, or whether there are other workarounds.
#
# Known version of the software:
# $ apt-show-versions linux-image-3.2.0-4-amd64
# linux-image-3.2.0-4-amd64/wheezy uptodate 3.2.54-2
# $ apt-show-versions gdisk
# gdisk/wheezy uptodate 0.8.5-1


attach_device() {

    device="$1";

    MYTMPDIR=`mktemp -d`
    trap "rm -rf $MYTMPDIR" EXIT

    # gdisk on the device would use the 4096-byte sector size,
    # but we need to force it to 512; this is a known workaround
    # from http://superuser.com/a/679800:
    # we copy the GPT data structures from the disk into a file
    dd if="/dev/$device" bs=16384 count=1 of="$MYTMPDIR/gpt" 2> /dev/null

    # we extract the offset and the size of each partition
    #
    # FIXME: the "+ 1" seems strange, but it is needed to get the same
    #        size value from:
    #
    #        blockdev --getsize64
    #
    #        without the "+ 1" some funny things happen, for example
    #        you will not be able to start a recognized md device:
    #
    #        md: loop1 does not have a valid v1.2 superblock, not importing!
    #        md: md_import_device returned -22
    #
    #        even if
    #
    #        mdadm --examine /dev/loop1
    #
    #        does not complain

    gdisk -l \
     "$MYTMPDIR/gpt" 2> /dev/null | \
     awk '/^ *[0-9]/ {printf "%.0f %.0f\n", $2 * 512, ($3 - $2 + 1) * 512}' > $MYTMPDIR/offset-size

    # we create a loop device for each partition, with the given offset and size
    while read line;
    do
        offset=$(printf '%s' "$line" | cut -d ' ' -f 1);
        size=$(printf '%s' "$line" | cut -d ' ' -f 2);
        losetup --verbose --offset "$offset" --sizelimit "$size" `losetup -f` "/dev/$device";
    done < $MYTMPDIR/offset-size;
}

detach_device() {

    device="$1";

    for loopdevice in `losetup -a | grep "$device" | cut -d : -f 1`;
    do
        losetup --verbose --detach "$loopdevice";
    done;
}

usage() {
cat <<EOF
Usage:
- $0 -h to print this help
- $0 sda to attach the gpt partitions of sda
- $0 -d sda to detach the gpt partitions of sda
EOF
}


detach=0;

while getopts hd action
do
    case "$action" in
        d) detach=1;;
        h) usage;;
    esac
done
shift $(($OPTIND-1))

if [ $# -ne 1 ];
then
    usage;
fi

if [ "x$detach" = "x0" ]; then
    attach_device $1;
else
    detach_device $1;
fi
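With the loop devices attached, they behave like ordinary partition block devices; a hypothetical session (the script name, device, and array members are assumptions):

./remap-gpt.sh sdc                                # attach loop devices
mdadm --assemble /dev/md0 /dev/loop0 /dev/loop1   # or mount / cryptsetup
./remap-gpt.sh -d sdc                             # detach when finished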

Solution 3:

I had this problem when I removed a 4TB disk from a WD My Book external enclosure. The problem is:

  1. the MBR partition table is off by a factor of 8 and
  2. the MBR partition table cannot handle >2TB when the sector size is 512.
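Both points come from MBR's 32-bit sector addressing; the limits work out like this:

```shell
# MBR stores partition start and length as 32-bit sector counts
# (2^32 = 4294967296). With 512-byte sectors the largest addressable
# capacity is 2 TiB:
echo $((4294967296 * 512))    # 2199023255552 bytes = 2 TiB
# With 4096-byte sectors the same 32-bit field reaches 16 TiB, which is
# why the enclosure's firmware presents 4096-byte sectors on a 4 TB disk:
echo $((4294967296 * 4096))   # 17592186044416 bytes = 16 TiB
```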

Solution: rewrite the partition table as a GPT, converting the values to use 512-byte sectors.

In my case the partition started at a 1 MB offset and ended about 856 kB before the end of the disk. This is good, because it leaves room for the MBR + GPT (17408 bytes) before the partition and for the backup GPT (16896 bytes) at the end of the disk.
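The byte figures above match GPT's standard on-disk layout exactly:

```shell
# Front of disk: protective MBR (1 sector) + GPT header (1 sector)
# + 32 sectors of partition entries = 34 sectors of 512 bytes:
echo $((34 * 512))   # 17408
# End of disk: 32 sectors of backup entries + 1 backup header = 33 sectors:
echo $((33 * 512))   # 16896
```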

I made images of both regions just in case (using dd).

I noted the output from fdisk -l /dev/sde.

I used gdisk to delete the first partition. If you want, you can do as I did and change the alignment value to 8 sectors (4096 bytes) to use as much space as possible. Then I created a new partition starting at sector 2048 and ending at the end of the disk. I'll grow the file system later.
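The gdisk session looks roughly like this (keystrokes from gdisk's standard menu; the sector values follow the text above, so check them against your own fdisk -l output before writing anything):

gdisk /dev/sde
Command (? for help): d        (delete the old, mis-scaled partition)
Command (? for help): n        (create the replacement; first sector 2048,
                                last sector = default, the end of the disk)
Command (? for help): w        (write the new GPT and exit)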

Thankfully, the change in sector size doesn't affect the file system, LVM, or LUKS.

Solution 4:

Another, fairly straightforward way to do this is to use parted's rescue feature. This requires creating a new disk label, though, so it involves some risk. parted acts directly on the disk, so make backups as necessary before you start:

parted /dev/sdb

parted will tell you something along these lines when it tries to read a disk whose sector size differs from the one the partition table was created with:

Error: /dev/sdb: unrecognised disk label                                  

Use mklabel to create a new MBR (which parted calls msdos) or GPT label, according to what you previously used

(parted) mklabel
New disk label type? msdos

Then run rescue to find your old partition

(parted) rescue
Start? 0
End? 4001GB
Information: A ext4 primary partition was found at 1049kB -> 2000GB.  Do you
want to add it to the partition table?
Yes/No/Cancel? y

Repeat the rescue process if you have more partitions. You are now done.