AWS EC2 migration to new instance type with SSD Drives

Solution 1:

So, for a full answer: your SSD drives are ephemeral (instance store) disks, and according to the AWS documentation the only way to use these ephemeral disks is to launch a new instance. (Attaching instance store volumes to an instance after it has already been launched is not supported.)

This is from the AWS docs:

Instances that use Amazon EBS for the root device do not, by default, have instance store available at boot time. Also, you can't attach instance store volumes after you've launched an instance. Therefore, if you want your Amazon EBS-backed instance to use instance store volumes, you must specify them using a block device mapping when you create your AMI or launch your instance. Examples of block device mapping entries are: /dev/sdb=ephemeral0 and /dev/sdc=ephemeral1. For more information about block device mapping, see Block Device Mapping
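As a rough sketch of how that mapping could be supplied at launch time with the AWS CLI (the AMI ID, instance type and key name below are placeholders, not values from the question):

# Launch a new instance and map both ephemeral SSD volumes at creation time.
# Replace the AMI ID, instance type and key name with your own values.
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m3.xlarge \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"},{"DeviceName":"/dev/sdc","VirtualName":"ephemeral1"}]'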

As @LinuxDevOps mentioned, you have to create an image (snapshot) of your existing instance and then launch a new one with the SSD volumes attached. After you log in to the new instance, you can follow the steps @ceejayoz mentioned.
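For the snapshot/AMI step, a minimal CLI sketch (the instance ID and image name are placeholders; the console achieves the same thing) would be:

# Create an AMI of the running instance; its EBS volumes are snapshotted automatically
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "pre-ssd-migration"

You would then launch from the resulting AMI on the SSD-backed instance type, passing the block device mappings shown above.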

List your devices:

fdisk -l

Create a file system on each device, for example ext4:

mkfs.ext4 /dev/xvdb
mkfs.ext4 /dev/xvdc

Mount the devices:

mkdir -p /mnt/xvdb; mkdir -p /mnt/xvdc
mount /dev/xvdb /mnt/xvdb
mount /dev/xvdc /mnt/xvdc
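
Optionally, if you want the mounts to come back automatically after a reboot, you could add entries like the following to /etc/fstab (a sketch only; keep in mind that the data on instance store volumes is still lost whenever the instance is stopped):

# /etc/fstab entries for the instance store mounts; nofail keeps boot from hanging if a device is missing
/dev/xvdb   /mnt/xvdb   ext4   defaults,nofail   0   2
/dev/xvdc   /mnt/xvdc   ext4   defaults,nofail   0   2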

For reference, see the list of instance store device names available for each instance type.

There are also other similar answers on Server Fault and Stack Overflow, for example: Where's my ephemeral storage for EC2 Instance