Unable to login to ec2 instance after running “sudo chmod 2770 /”

The solution is to start a new instance and never do that again. It would be too complicated to properly recover all the permissions that you reset to 2770.

If you have any valuable files on the broken instance you can stop it, attach its root volume to the new instance, mount it, and copy the files from there, as sketched below.
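A minimal sketch of that copy, run on the rescue instance; the device name /dev/xvdf1 and the destination path are assumptions, so check the EC2 console and lsblk for the real ones:

    sudo mkdir /mnt/broken
    sudo mount /dev/xvdf1 /mnt/broken     # partition suffix varies by AMI
    sudo mkdir -p /srv/recovered
    sudo rsync -a /mnt/broken/home/ /srv/recovered/   # pull your files out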


Update: as @GeraldSchneider points out, you may be lucky if you didn't recursively change the permissions everywhere. You'll still have to start a new instance and use it to fix the root directory's permissions back to 0755. Follow, for example, the instructions here: Changed AWS EC2 firewall rule and locked out of ssh (instead of fixing the firewall, run sudo chmod 0755 /mnt, or wherever you mounted the other disk).
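A minimal sketch of that fix, assuming the broken root volume is attached to the rescue instance and already mounted at /mnt:

    sudo chmod 0755 /mnt    # this is the broken instance's / directory
    sudo umount /mnt
    # then detach the volume in the EC2 console and reattach it to the
    # original instance as /dev/sda1 before starting it again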

Hope that helps :)


What you did was set mode 2770 on the root directory (and on every file in the filesystem, if you ran it with -R). A file with that mode looks like this:

-rwxrws--- 1 username  agroup  2 Feb 19 23:07 thefilename

That's the setgid bit in the group column (not the sticky bit); on a directory it means files created inside it inherit the group agroup. The real trouble is the final 0: users outside agroup lost all access, which is why you can no longer log in.
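You can see both effects of mode 2770 with a quick experiment in your home directory (agroup stands in for any group you belong to):

    mkdir demo
    chgrp agroup demo
    chmod 2770 demo
    ls -ld demo             # drwxrws--- : the 's' is the setgid bit
    touch demo/newfile
    ls -l demo/newfile      # newfile's group is agroup, inherited from demo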


I've never borked an AWS image quite that badly, but I've seen a few problems that kill them.

FIRST: Revert to your last snapshot taken before you hosed the file modes.

You don't do periodic snapshots?

SECOND: Look at your backups. Is it going to be more or less work to rebuild the box than to restore your data from backups?

What? You don't have backups either?

Then the last-ditch standard recovery method would be something like:

  1. Create a new instance from a current AMI, ideally the same distro as your broken machine. It can be something small like a t3.nano.
  2. Detach the volumes from your broken machine and attach them to the new instance as sdf, sdg, sdh... and so on (see the CLI sketch after this list).
  3. Log into your new instance as root and, for each of your broken instance's disks, run:

    # check each filesystem while the volume is still unmounted
    fsck /dev/xvdX
    # create a mount point and mount the broken volume on it
    mkdir /sdX
    mount /dev/xvdX /sdX
    # inspect the damaged permissions
    cd /sdX
    ls -l
    
  4. At this point you need to decide whether it's worth using chmod over and over to fix the permissions, or whether you should copy the data to your new instance and set it up again.

  5. So manually change into each directory and chmod each file to what it should be. Keep two windows open and compare the live host's files with the mounted broken disk's (see the mode-replay sketch after this list for a way to speed this up). Make sure you're changing the RIGHT files - check often!!!

  6. When you've done the lot, shut down the temp machine, detach the disks in the EC2 web GUI and then reattach them to the old machine at the same device names they came from. NOTE: the root drive is attached as sda1, not sda, but all other volumes are attached as sdb through sdz.
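For steps 2 and 6, here is a rough AWS CLI equivalent of the console clicks; every ID and device name below is a placeholder:

    # stop the broken instance so its volumes can be detached
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    # attach to the rescue instance (it must be in the same availability zone)
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0fedcba9876543210 --device /dev/sdf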
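For step 5, rather than fixing modes one at a time, you can replay them from a healthy machine running the same distro. A minimal sketch, assuming the broken disk is mounted at /sdX; paths containing newlines would confuse this simple loop:

    # on the healthy reference machine: record every path's octal mode
    find / -xdev -exec stat -c '%a %n' {} + > /tmp/modes.txt

    # copy modes.txt to the rescue instance, then replay it onto the
    # broken disk
    while read -r mode path; do
        chmod "$mode" "/sdX${path}"
    done < /tmp/modes.txt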


Either way, you should set up automated snapshots or backups, or both!

To prevent yourself from doing this exact same thing again, alias chmod:

    alias chmod='chmod --preserve-root'

Note that --preserve-root only refuses recursive (-R) changes to / itself; a plain chmod 2770 / still goes through, and it won't protect any other directory.
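To make the alias stick across shells, put it in your startup file. A minimal sketch for bash; the chown line is an extra suggestion, not something the original setup requires:

    # in ~/.bashrc:
    alias chmod='chmod --preserve-root'
    alias chown='chown --preserve-root'   # chown has the same footgun

    # bash only alias-expands the word after 'sudo' when the sudo alias
    # itself ends with a trailing space:
    alias sudo='sudo '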

Also, don't put sudo in front of commands just out of habit.