How can I test the full capacity of an SD card in Linux?

I purchased a 64 GB SD card from eBay. It works fine when I burn an Arch Linux ARM image to it and use it to boot up my Raspberry Pi.

However, when I try to create a single ext4 partition that uses the card's full capacity, errors occur. mkfs.ext4 always finishes happily, but the partition cannot be mounted: mount throws an error, and dmesg shows kernel messages including "Cannot find journal". This has proved to be the case on at least two platforms: Arch Linux ARM and Ubuntu 13.04.

On the other hand, I can create and mount a FAT32 partition without error (a full capacity check has not been done).

I have heard that some scammers can alter an SD card's controller so that it reports a false capacity to the OS (i.e. a card that is really only 2 GB reports itself as 64 GB) in order to sell the card at a higher price.

I know that tools like badblocks exist for me to check the SD card for bad blocks. Can badblocks detect problems like this? If not, what other solutions exist for me to test the card?

I'd ideally like to know whether I was cheated. If the result shows I merely received a faulty item, I can simply return it to the seller, rather than report to eBay that somebody tried to cheat me.
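
For reference, badblocks' destructive read-write mode is invoked as sketched below. badblocks also accepts a regular file as its target, so this dry run exercises a small scratch file; on the real card you would target the device node instead (which erases everything on it):

```shell
# Destructive pattern test on a 4 MiB scratch file (safe to run anywhere).
# On the real card you would instead run, destroying all data:
#   sudo badblocks -w -s -v /dev/sde
dd if=/dev/zero of=scratch.img bs=1M count=4 status=none
if command -v badblocks >/dev/null 2>&1; then
    badblocks -w -s scratch.img && echo "no bad blocks found"
else
    echo "badblocks (e2fsprogs) not installed"
fi
```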

UPDATE

Operations and messages:

~$ sudo mkfs.ext4 /dev/sde1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
4096000 inodes, 16383996 blocks
819199 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
500 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

~$ dmesg | tail
...
[4199.749118]...
~$ sudo mount /dev/sde1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/sde1,
   missing codepage or helper program, or other error
   In some cases useful info is found in syslog - try
   dmesg | tail  or so

~$ dmesg | tail
...
[ 4199.749118]...
[ 4460.857603] JBD2: no valid journal superblock found
[ 4460.857618] EXT4-fs (sde1): error loading journal

UPDATE

I have run badblocks /dev/sde but it reports no errors. That leaves the following possible causes:

  • The SD card is good, but mke2fs, mount, or the kernel has a bug that causes the problem.

  • I was cheated in a way that badblocks cannot detect. This is plausible because badblocks performs an in-place write-read test. A cheater, however, can map accesses to out-of-range areas back onto some in-range block; an in-place write-read check cannot detect that.

If no existing application can do the proper test, I think I can try to write a simple C program to test it.
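
In fact, such a test does not even need a custom C program; a small shell sketch suffices. The idea is to write each block's own offset into the block as a tag, so that wrapped-around writes clobber earlier tags and the mismatch shows up on read-back. The target, block count, and step here are placeholder values for a dry run against a local file; on the real card you would set DEV=/dev/sde (destroying its contents) and cover the full nominal capacity:

```shell
#!/bin/sh
# Write a unique tag (the block's own offset) into every STEP-th MiB,
# then read all tags back.  On a capacity-faked card, out-of-range
# writes wrap onto earlier blocks and overwrite their tags, so the
# read-back comparison fails.
# DEV=testdisk.img is a placeholder; use DEV=/dev/sde for the real card.
DEV=${DEV:-testdisk.img}
BLOCKS=${BLOCKS:-16}
STEP=4

i=0
while [ "$i" -lt "$BLOCKS" ]; do
    off=$((i * STEP))
    printf 'tag %08d' "$off" |
        dd of="$DEV" bs=1M seek="$off" conv=notrunc status=none
    i=$((i + 1))
done

bad=0
i=0
while [ "$i" -lt "$BLOCKS" ]; do
    off=$((i * STEP))
    got=$(dd if="$DEV" bs=1M skip="$off" count=1 status=none | head -c 12)
    [ "$got" = "$(printf 'tag %08d' "$off")" ] || { echo "mismatch at ${off} MiB"; bad=1; }
    i=$((i + 1))
done
[ "$bad" -eq 0 ] && echo "all $BLOCKS tags verified"
```

On an honest medium every tag reads back intact; on a fake card the blocks beyond the real capacity report mismatches.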


If anyone sees this later: someone wrote an open-source tool called F3 to test the capacity of SD cards and other such media. It can be found on the project homepage and on GitHub.
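
F3's basic workflow is two commands run against the card's mount point: f3write fills the card with 1 GiB files of known content, and f3read reads them back and reports overwritten or corrupted sectors. A guarded sketch (the /media/sd mount point is an assumption; adjust to your system):

```shell
# /media/sd is an assumed mount point for the card under test.
MNT=${MNT:-/media/sd}
if command -v f3write >/dev/null 2>&1 && [ -d "$MNT" ]; then
    f3write "$MNT"   # fill the card with files of known content
    f3read "$MNT"    # read back and report corrupted/overwritten sectors
else
    echo "f3 not installed or $MNT not mounted"
fi
```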


The cheating has now been confirmed by the following steps:

  • Generate a random data file.  (4194304 = 4 × 1024 × 1024 = 4 MiB, total size = 40 × 4 MiB = 160 MiB)

    Command:

    dd if=/dev/urandom of=test.orig bs=4194304 count=40
    40+0 records in
    40+0 records out
    167772160 bytes (168 MB) copied, 11.0518 s, 15.2 MB/s
    
  • Copy the data to the SD card.  (2038400 × 4096 B = 8153600 KiB = 7962.5 MiB)

    Command:

    sudo dd if=test.orig of=/dev/sde seek=2038399 bs=4096
    40960+0 records in
    40960+0 records out
    167772160 bytes (168 MB) copied, 41.6087 s, 4.0 MB/s
    
  • Read the data back from the SD card.

    Command:

    sudo dd if=/dev/sde of=test.result skip=2038399 bs=4096 count=40960
    40960+0 records in
    40960+0 records out
    167772160 bytes (168 MB) copied, 14.5498 s, 11.5 MB/s
    
  • Show the result

    Command:

    hexdump test.result | less
    ...
    0000ff0 b006 fe69 0823 a635 084a f30a c2db 3f19
    0001000 0000 0000 0000 0000 0000 0000 0000 0000
    *
    1a81000 a8a5 9f9d 6722 7f45 fbde 514c fecd 5145
    
    ...
    

What happened? We observed a gap of zeros, which indicates that the random data were never actually written to the card. But why do the data come back after offset 1a81000? Obviously the card has some kind of internal cache.

We can also try to investigate the behaviour of the cache.

hexdump test.orig | grep ' 0000 0000 '

provides no result, which means that the generated rubbish does not have such a pattern. However,

hexdump test.result | grep ' 0000 0000 '
0001000 0000 0000 0000 0000 0000 0000 0000 0000
213b000 0000 0000 0000 0000 0000 0000 0000 0000
407b000 0000 0000 0000 0000 0000 0000 0000 0000
601b000 0000 0000 0000 0000 0000 0000 0000 0000

yields 4 matches.

So this is why the card passes the badblocks check. Further tests show that the actual capacity is 7962.5 MiB, or slightly less than 8 GiB.

I conclude that this is very unlikely to be a random hardware failure; it is much more likely a kind of cheating (i.e., fraud). I would like to know what action I can take to help other victims.

Update 11/05/2019

  • People have asked how I figured out that the correct seek parameter is 2038399. I did many more experiments than are shown above. Basically, you have to guess at first: guess a reasonable amount of data to write, and guess roughly where the corruption is. Then you can use the bisection method to narrow it down.

  • In a comment below it was assumed that the second step above (copying the data to the SD card) writes only 1 sector. I did not make that mistake in my experiment. The seek value simply means that, in the "show the result" step, offset 1000 happens to fall in the second sector of the data. With seek=2038399 sectors, the corruption is at the 2038400th sector.
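
The bisection can be sketched as a probe that writes a tag at a given 4 KiB sector, checks whether it reads back intact, and then binary-searches for the first failing sector. This assumes the fault is monotone (every sector below the real capacity works, every sector above wraps around), which held for this card. DEV=probe.img and the sector range are placeholder values for a dry run; on the real card you would use the device node and its full nominal sector count:

```shell
#!/bin/sh
# Binary search for the first 4 KiB sector where written data no longer
# reads back intact.  On an honest medium every probe succeeds and the
# search converges to the top of the range.
DEV=${DEV:-probe.img}     # placeholder; use /dev/sde on a real card (destructive!)
LO=0                      # known-good sector
HI=${HI:-16384}           # first suspect sector (16384 * 4 KiB = 64 MiB here)

probe() {                 # write a tag at sector $1, then read it back
    tag="sector $1"
    printf '%s' "$tag" | dd of="$DEV" bs=4096 seek="$1" conv=notrunc status=none
    [ "$(dd if="$DEV" bs=4096 skip="$1" count=1 status=none | head -c ${#tag})" = "$tag" ]
}

while [ $((LO + 1)) -lt "$HI" ]; do
    MID=$(( (LO + HI) / 2 ))
    if probe "$MID"; then LO=$MID; else HI=$MID; fi
done
echo "last good 4 KiB sector: $LO (~$(( (LO + 1) * 4 / 1024 )) MiB)"
```

Each iteration halves the search range, so even a 64 GB card needs only a few dozen probes.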


First of all, read the F3 answer by @Radtoo. That is the correct way.

I somehow missed it and tried my own way:

  1. Create a 1 GB test file: dd if=/dev/urandom bs=1024k count=1024 of=testfile1gb

  2. Write copies of that file to the SD card (64 is the card's size in GB): for i in $(seq 1 64); do dd if=testfile1gb bs=1024k of=/media/sdb1/test.$i; done

  3. Check the md5 sums of the files (all but the last, incomplete one should match): md5sum testfile1gb /media/sdb1/test.*
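
These three steps can be rolled into one script. One detail matters: the md5 check must happen only after all copies are written, because on a fake card later copies wrap around and silently overwrite earlier ones. The sketch below uses a small test file and an ordinary directory (./fakecard) as placeholders so it is safe to try; for a real test, point MNT at the card's mount point and set COPIES to the card's nominal capacity in file-sized units:

```shell
#!/bin/sh
# MNT=./fakecard and the 4 MiB test file are placeholders for a dry run;
# use MNT=/media/sdb1 and a 1 GiB file for a real card.
MNT=${MNT:-./fakecard}
COPIES=${COPIES:-4}
mkdir -p "$MNT"

dd if=/dev/urandom of=testfile bs=1M count=4 status=none
ref=$(md5sum testfile | cut -d' ' -f1)

# Write every copy first ...
i=1
while [ "$i" -le "$COPIES" ]; do
    cp testfile "$MNT/test.$i"
    i=$((i + 1))
done
sync

# ... and only then verify, so wrap-around overwrites are caught.
bad=0
i=1
while [ "$i" -le "$COPIES" ]; do
    got=$(md5sum "$MNT/test.$i" | cut -d' ' -f1)
    [ "$got" = "$ref" ] || { echo "test.$i corrupted"; bad=1; }
    i=$((i + 1))
done
[ "$bad" -eq 0 ] && echo "all $COPIES copies match"
```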