Docker - scaling nginx and php-fpm separately
Solution 1:
One solution is to add additional php-fpm instances to your docker-compose file and then use an nginx upstream as mentioned in the other answers to load-balance between them. This is done in this example docker-compose repo: https://github.com/iamyojimbo/docker-nginx-php-fpm/blob/master/nginx/nginx.conf#L137
upstream php {
    # If no load-balancing method is specified here, nginx defaults to round-robin.
    #least_conn;
    server dockernginxphpfpm_php1_1:9000;
    server dockernginxphpfpm_php2_1:9000;
    server dockernginxphpfpm_php3_1:9000;
}
This isn't really ideal because it requires changing both the nginx config and docker-compose.yml whenever you want to scale up or down.
Note that port 9000 is internal to each container, not your actual host, so it doesn't matter that multiple php-fpm containers all listen on port 9000.
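For reference, a minimal docker-compose.yml matching the upstream above might look something like this (a sketch only; the image names and volume path are assumptions, and the container names nginx sees depend on your compose project name being dockernginxphpfpm):
version: '2'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      # assumes the upstream block above lives in this nginx.conf
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
  php1:
    image: php:fpm
  php2:
    image: php:fpm
  php3:
    image: php:fpm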
Docker acquired Tutum this fall. They have a solution that combines an HAProxy container with their API to automatically adjust the load-balancer config to match the running containers it is load-balancing, which is a nice solution; nginx then points to the hostname assigned to the load-balancer. Perhaps Docker will integrate this type of solution further into their tools following the Tutum acquisition. There is an article about it here: https://web.archive.org/web/20160628133445/https://support.tutum.co/support/solutions/articles/5000050235-load-balancing-a-web-service
Tutum is currently a paid service. Rancher is an open-source project that provides a similar load-balancing feature. It also has a "rancher-compose.yml" which can define the load-balancing and scaling of the services set up in the docker-compose.yml. http://rancher.com/the-magical-moment-when-container-load-balancing-meets-service-discovery/ http://docs.rancher.com/rancher/concepts/#load-balancer
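As a rough illustration only (the php service name is an assumption and has to match a service defined in your docker-compose.yml), the scaling part of a rancher-compose.yml can be as simple as:
php:
  scale: 3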
UPDATE 2017/03/06: I've used a project called interlock that works with Docker to automatically update the nginx config and restart it. Also see @iwaseatenbyagrue's answer which has additional approaches.
Solution 2:
You can use an upstream to define multiple backends, as described here:
https://stackoverflow.com/questions/5467921/how-to-use-fastcgi-next-upstream-in-nginx
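For example, a PHP location block can pass requests to a named upstream and fail over to the next backend when one errors out or times out (a sketch only; the upstream name, backend hostnames, and document root are assumptions):
upstream php {
    server php1:9000;
    server php2:9000;
}

server {
    listen 80;
    root /var/www/html;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # skip to the next upstream server on connection errors or timeouts
        fastcgi_next_upstream error timeout;
        fastcgi_pass php;
    }
}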
You'd also want the config to be updated whenever backends die or come into service, with something like:
https://github.com/kelseyhightower/confd
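As a rough sketch of how that could look with confd (the key path /services/php, the file paths, and the reload command are assumptions, not part of the original answer), a template resource plus template would regenerate the upstream block and reload nginx whenever the set of backends changes:
# /etc/confd/conf.d/php-upstream.toml
[template]
src        = "php-upstream.tmpl"
dest       = "/etc/nginx/conf.d/php-upstream.conf"
keys       = ["/services/php"]
reload_cmd = "nginx -s reload"

# /etc/confd/templates/php-upstream.tmpl
upstream php {
{{range getvs "/services/php/*"}}    server {{.}};
{{end}}}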
Solution 3:
Although this post is from 2015 and I feel like I'm necroing it (sorry, community), I think it's valuable to add at this point in time:
Nowadays (and since Kubernetes was mentioned), when you're working with Docker you can use Kubernetes or Docker Swarm very easily to solve this problem. Both orchestrators will take in your Docker nodes (one node = one server with Docker on it), let you deploy services to them, and handle port mapping and load-balancing for you using overlay networks.
As I am more versed in Docker Swarm, this is how you would do it to approach this problem (assuming you have a single Docker node):
Initialize the swarm:
docker swarm init
cd into your project root
cd some/project/root
create a swarm stack from your docker-compose.yml (instead of using docker-compose):
docker stack deploy -c docker-compose.yml myApp
This will create a Docker Swarm service stack called "myApp" and will manage the ports for you. This means: you only have to add one "9000:9000" entry under ports: to your php-fpm service in your docker-compose file, and then you can scale the php-fpm service up, say to 3 instances, while the swarm auto-magically load-balances the requests between the three instances without any further work needed.
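As a rough sketch (the image names, the php service name, and the replica count are assumptions), the relevant part of a Swarm-ready compose file and the command to scale an already-deployed stack could look like:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
  php:
    image: php:fpm
    ports:
      - "9000:9000"
    deploy:
      replicas: 3   # swarm's routing mesh load-balances across the replicas

# scale an already-running service after deployment:
docker service scale myApp_php=3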