SSH to EC2 instance timed out
Solution 1:
I finally figured out the issue and regained access to my EC2 instance with all the data intact.
The root cause: to allow HTTP traffic on a new port I used ufw, which enabled the firewall, but no rule allowing SSH was added, so I lost access. I could have avoided all of this by adding the right rule to the instance's AWS security group instead.
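For reference, opening SSH in a security group can be done with the AWS CLI. This is only a sketch; the group ID and CIDR below are placeholders, not values from my setup:

```shell
# Allow inbound SSH (TCP port 22) in the instance's security group.
# sg-0123456789abcdef0 and 203.0.113.0/24 are placeholder values;
# substitute your own group ID and a CIDR you actually connect from.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.0/24
```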
The solution was to create a new EC2 instance, attach the old instance's volume to it, and mount it there.
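Moving the volume can be done from the AWS console, or sketched with the AWS CLI roughly as follows (the instance and volume IDs are placeholders; the locked-out instance must be stopped before its root volume can be detached):

```shell
# Placeholder IDs; substitute your own. Stop the locked-out instance first.
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaaa
aws ec2 wait instance-stopped --instance-ids i-0aaaaaaaaaaaaaaaaa

# Detach the old root volume, then attach it to the new instance as a
# secondary device (on Nitro instances it appears as e.g. /dev/nvme1n1).
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbbb
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbbb \
    --instance-id i-0ccccccccccccccccc --device /dev/sdf
```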
On the new instance, list the available disks:
ubuntu@ip-172-31-27-78:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 97.8M 1 loop /snap/core/10185
loop1 7:1 0 28.1M 1 loop /snap/amazon-ssm-agent/2012
nvme0n1 259:0 0 120G 0 disk
└─nvme0n1p1 259:1 0 120G 0 part /
nvme1n1 259:2 0 120G 0 disk
└─nvme1n1p1 259:3 0 120G 0 part
Then mount the attached partition to any directory:
$ sudo mkdir /data
$ sudo mount /dev/nvme1n1p1 /data/
Now you can access the volume's files. To allow SSH access, edit the files user.rules
and user6.rules
in the directory /data/etc/ufw
and add these lines:
#user.rules
-A ufw-user-input -p tcp --dport 22 -j ACCEPT
-A ufw-user-input -p udp --dport 22 -j ACCEPT
#user6.rules
-A ufw6-user-input -p tcp --dport 22 -j ACCEPT
-A ufw6-user-input -p udp --dport 22 -j ACCEPT
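The edits above can also be scripted. This is a minimal sketch, assuming the volume is mounted at /data as shown; UFW_DIR and the helper function add_ssh_rules are my own names, not part of ufw:

```shell
# Append SSH accept rules to ufw's saved rules files, before the COMMIT line.
# Assumption: the old volume is mounted at /data, so its ufw config lives
# in /data/etc/ufw. Override UFW_DIR if you mounted it elsewhere.
UFW_DIR="${UFW_DIR:-/data/etc/ufw}"

add_ssh_rules() {
  # $1 = rules file, $2 = chain prefix (ufw-user or ufw6-user)
  [ -f "$1" ] || { echo "skipping missing file: $1"; return; }
  for proto in tcp udp; do
    rule="-A ${2}-input -p $proto --dport 22 -j ACCEPT"
    # Add the rule only once, and keep it before COMMIT so ufw
    # still parses the file.
    grep -qF -- "$rule" "$1" || sed -i "/^COMMIT/i $rule" "$1"
  done
}

add_ssh_rules "$UFW_DIR/user.rules"  ufw-user
add_ssh_rules "$UFW_DIR/user6.rules" ufw6-user
```

Afterwards, unmount the volume with sudo umount /data, detach it, and reattach it to the original instance as its root device before starting that instance again.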
Kudos to this post, which helped me a lot; I have collected all the steps here.