Will my EC2 Spot Instance Volumes Die with the Instance?
Consider my high-CPU spot instance with six drives. Drive c: is the boot drive. Drive d: is an additional volume that I created and manually attached to the instance. Volumes e:, f:, g:, and h: are ephemeral local drives.
Let's say the spot instance gets killed because the spot price exceeds my max bid. I'd like to know what happens to the data on the drives.
The data on drives e: through h: most certainly will evaporate when the instance dies. But what about the data on c: and d:? There's nothing critical there that couldn't be recreated, but I'd like to keep the data if possible. I can see the c: and d: volumes in the Volumes tab of my console. Will they simply vanish when the instance dies?
Some forum posts indicate that there's some kind of "don't-delete-this-volume-on-shutdown" flag that can be set but I don't see it in my console. How do I set this flag? I'd prefer a solution that uses the console exclusively instead of the command line (if possible).
When an instance is terminated:
- all data on instance store (ephemeral) volumes will be lost.
- all attached EBS volumes that are set to "Delete on Terminate" will be deleted.
- all attached EBS volumes that are NOT set to "Delete on Terminate" will be detached and left unattached to any instance. You can then attach them to another instance and get at your data.
By default, when an instance is launched, the "root" volume is set to "Delete on Terminate". This means that, by default, the root volume will be deleted when your spot instance terminates, unless you explicitly change the "Delete on Terminate" flag for your root volume.
By default, when you attach a secondary volume to an instance, the "Delete on Terminate" flag is NOT set. This means that, by default, that secondary volume will not be deleted when your spot instance terminates, unless you explicitly change the "Delete on Terminate" flag.
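If you're comfortable stepping outside the console for a moment, here's a minimal sketch of overriding that root-volume default at launch time using boto3. The AMI ID, device name, and instance type are placeholders, and the same block device mapping can be supplied when requesting a spot instance.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance whose root volume survives termination.
# The AMI ID, instance type, and device name are placeholders;
# the root device name depends on the AMI (often /dev/sda1 or /dev/xvda).
ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="c1.xlarge",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sda1",
            "Ebs": {"DeleteOnTermination": False},  # keep the root volume after terminate
        }
    ],
)
```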
As far as Management Console options to change this flag go, your only choices are:
1. During launch of your instance, you can attach additional volumes and specify the "Delete on Terminate" flag for each. The default is ON.
2. When requesting your spot instance, you have the same options as in #1.
Otherwise, you must use command-line tools or the API to modify this flag on an existing instance. The API call to use is ModifyInstanceAttribute; the corresponding command-line tool is ec2-modify-instance-attribute.
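As a rough illustration of that API, here is what flipping the flag on a running instance might look like with boto3's modify_instance_attribute (the boto3 wrapper around ModifyInstanceAttribute); the instance ID and device name below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Turn off "Delete on Terminate" for the root volume of a running instance.
# Instance ID and device name are placeholders; check the instance's actual
# root device name first (often /dev/sda1 or /dev/xvda).
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sda1",
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)
```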
To see this flag for your volume, select your instance and find the "Block Devices" field in the details. You should see links like "sda1", etc. Click one and a small window appears displaying various information, including the status of the "Delete on Terminate" flag.
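If you'd rather read the flag programmatically than click through the console, a sketch along these lines (placeholder instance ID) shows the same information:

```python
import boto3

ec2 = boto3.client("ec2")

# Print the DeleteOnTermination flag for every EBS volume attached
# to the instance. The instance ID is a placeholder.
resp = ec2.describe_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Attribute="blockDeviceMapping",
)
for mapping in resp["BlockDeviceMappings"]:
    ebs = mapping.get("Ebs", {})
    print(mapping["DeviceName"], ebs.get("DeleteOnTermination"))
```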
From the console, when you request an instance you'll go through a number of steps.
One of them is "Storage Device Configuration", which is part of the "Instance Details" step.
If you click "Edit", you'll be able to configure the disks. The option you're looking for is "Delete on Termination".
It depends on whether your drives are EBS-backed or ephemeral. If they are EBS volumes, the data will remain when your instance is stopped. If they are ephemeral, the data is gone.
This also depends on whether your instance is set to stop or to terminate on shutdown. Terminating destroys everything except EBS volumes whose "Delete on Terminate" flag is off.
Ideally, you have created your own AMI and are launching it as a spot instance. If that's the case, everything baked into the AMI will be there when you launch a new instance.
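As a hedged sketch of that workflow with boto3, with the instance ID, AMI name, bid price, and instance type all placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Bake an AMI from an already-configured instance (placeholder ID and name).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="my-spot-worker-image",
)

# In practice, wait for the image to become available before using it.
# Then request a spot instance from that AMI; SpotPrice is optional
# nowadays and can be omitted to pay up to the on-demand price.
ec2.request_spot_instances(
    SpotPrice="0.10",
    LaunchSpecification={
        "ImageId": image["ImageId"],
        "InstanceType": "c1.xlarge",
    },
)
```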