Why is the time taken to fetch results of a query, run 10,000 times against RDS, so uneven? (experiment)
I noticed you're using a db.micro instance. As with EC2, the micro instances are designed to be budget-friendly, but at the cost of performance. You'll see much worse performance under load on these instances than on the standard instance classes, because CPU time is given to a micro instance "last" relative to the other instances sharing the same hardware.
To prove the point, run your tests again against a db.medium instance and you'll find the timings much more consistent.
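If it helps, here's a minimal sketch of the kind of timing loop I'd use to compare the two instance classes. It assumes a PostgreSQL RDS instance and psycopg2; the endpoint, credentials, and query are placeholders to swap for the ones from your experiment.

```python
# Latency-distribution sketch (assumes PostgreSQL + psycopg2;
# the endpoint, credentials, and query below are placeholders).
import time
import statistics
import psycopg2

conn = psycopg2.connect(
    host="your-instance.example.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="yourdb",
    user="youruser",
    password="yourpassword",
)
conn.autocommit = True

latencies = []
with conn.cursor() as cur:
    for _ in range(10_000):
        start = time.perf_counter()
        cur.execute("SELECT 1;")  # replace with the query from your experiment
        cur.fetchall()
        latencies.append(time.perf_counter() - start)

conn.close()

print(f"min    {min(latencies) * 1000:.2f} ms")
print(f"median {statistics.median(latencies) * 1000:.2f} ms")
print(f"p99    {statistics.quantiles(latencies, n=100)[98] * 1000:.2f} ms")
print(f"max    {max(latencies) * 1000:.2f} ms")
print(f"stdev  {statistics.stdev(latencies) * 1000:.2f} ms")
```

If the CPU-scheduling explanation holds, you should see the tail (p99/max) on the db.micro instance land far above its median, while the db.medium instance stays much closer to it.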