EC2 instances can't reach each other

I created two EC2 instances in the same AZ, under the same account. They use different security groups. I'd like instance A to accept connections on a certain port only from instance B.

I don't believe these instances are in a VPC, but I don't know how to confirm that. I wasn't able to change the security group, which makes me think they are not in a VPC.

In the security group for instance A, I added a rule for the port with instance B's public IP (/32) as the source. I then tried to connect from instance B using instance A's public IP, but the connection attempt failed immediately.

I tried the same steps with the private IP of each instance. What am I missing?
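
For reference, the rule I added corresponds to something like the following with the AWS CLI (the group ID, port, and address below are placeholders):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 203.0.113.25/32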

Here's an article that answers a similar question, but a VPC is involved: Can't connect to EC2 instance in VPC (Amazon AWS).

Both instances have the same VPC ID and Subnet ID.
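
For reference, this can also be checked from the CLI with something like the following (the instance IDs are placeholders):

aws ec2 describe-instances \
    --instance-ids i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb \
    --query 'Reservations[].Instances[].[InstanceId,VpcId,SubnetId,PrivateIpAddress]' \
    --output table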

I also tried setting the source to instance B's security group, which didn't work either.
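
For the record, a group-to-group rule like that corresponds to something like this in the AWS CLI (both group IDs are placeholders, with instance A's group as the target and instance B's group as the source, and 3306 standing in for the port in question):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --ip-permissions 'IpProtocol=tcp,FromPort=3306,ToPort=3306,UserIdGroupPairs=[{GroupId=sg-0bbbbbbbbbbbbbbbb}]'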

I'm testing this with MySQL. The mysql client running on instance B failed immediately with this error:

ERROR 2003 (HY000): Can't connect to MySQL server on '54.xx.xx.xx' (113)
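
As an aside, the port can be tested without the MySQL client at all; a sketch, assuming nc is installed and MySQL is listening on its default port 3306:

nc -vz -w 5 54.xx.xx.xx 3306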

To rule out a problem with the mysqld setup, I tried the same steps with an ICMP Echo Reply rule, which didn't work either.

Edit: Thanks to the initial answers, I was able to confirm that these two instances are running in a VPC (by going to the VPC console). So my question is very similar to the linked one, but in that case the problem was that the instances were not default instances, so the proper route and subnet had not been created. Here's how my VPC is set up: the VPC is the default VPC and has a route table associated with it. The route table is implicitly associated with the subnet in the VPC, and it has a single route whose target is "local".
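
For reference, the route table can also be inspected from the CLI with something like this (the VPC ID is a placeholder):

aws ec2 describe-route-tables \
    --filters Name=vpc-id,Values=vpc-xxxxxxxx \
    --query 'RouteTables[].Routes'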

These were all created by default and, as I understand the docs, should allow two instances to connect to each other. What am I (still) missing?


Solution 1:

I resolved this with help from AWS tech support. Here's the info for future newbies like me:

The issue was that iptables was running on instance B and not allowing any traffic. I learned that there are two levels of firewall for EC2 instances: security groups (managed in the AWS console) and iptables (managed on the host itself). There are still reasons to use iptables; for example, from https://wincent.com/wiki/Using_iptables_on_EC2_instances:

Most of the time you don't need to worry about using a host-level firewall such as iptables when running Amazon EC2, because Amazon allows you to run instances inside a "security group", which is effectively a firewall policy that you use to specify which connections from the outside world should be allowed to reach the instance. However, this is a "whitelist" approach, and it is not straightforward to use it for "blacklisting" purposes on a running instance.
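
A quick way to see what the host firewall is actually doing is to list the rules on the instance itself. (A catch-all REJECT rule of the kind many stock images ship with is answered with "host prohibited", which the client sees as "No route to host", errno 113, consistent with the error above.)

sudo iptables -L INPUT -n -v --line-numbers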

In my case I don't need a host-level firewall, so I turned iptables off:

sudo service iptables stop
sudo chkconfig iptables off
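
If you'd rather keep iptables running, a minimal sketch of the alternative (assuming a RHEL-style image with the iptables init script; the address is a placeholder for instance B's private IP) is to open just the MySQL port ahead of the image's catch-all REJECT rule:

# insert at the top of INPUT so it is evaluated before any REJECT rule
sudo iptables -I INPUT -p tcp -s 10.0.0.5/32 --dport 3306 -j ACCEPT
sudo service iptables save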

Here are some results I got related to the comments posted on this question:

  • connecting with the private IP worked
  • connecting with the private DNS name worked
  • connecting with the public IP worked
  • connecting with the public Elastic IP (EIP) worked
  • connecting with the public DNS name worked, but as Chad Smith said in his answer, DNS returns the private IP for this name

The reason this had worked for me on a different instance is that the image I used there didn't run iptables; every image is different. The image I used in this case ran iptables rules that disallowed all connections except SSH.

Solution 2:

A little bit off topic, but this is the only search result for this issue.

We had a similar problem: our existing instances were rebooted and suddenly couldn't communicate. It turned out there were too many rules in the security group; removing some of them allowed communication to resume. It had still worked before the reboot because the rules had been added gradually over time by automated calls to the API.
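
If you run into this, you can dump a group's inbound rules to see how many have accumulated, and revoke the ones you no longer need (the group ID, port, and CIDR below are placeholders):

aws ec2 describe-security-groups \
    --group-ids sg-0123456789abcdef0 \
    --query 'SecurityGroups[0].IpPermissions'

aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 203.0.113.25/32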

Hope this helps someone in the future.