Docker - scaling nginx and php-fpm separately

I've been playing around with docker and docker-compose and have a question.

Currently my docker-compose.yml looks like this:

app:
    image: myname/php-app
    volumes:
        - /var/www
    environment:
        SYMFONY_ENVIRONMENT: dev

web:
    image: myname/nginx
    ports:
        - 80
    links:
        - app
    volumes_from:
        - app

App contains php-fpm on port 9000 and my application code. Web is nginx with a few bits of config.

This functions how I would expect it to; however, in order to connect nginx to php-fpm I have this line:

fastcgi_pass    app:9000;

How can I effectively scale this? If I wanted, for example, to have one nginx container running but three app containers running, then surely I'm going to have three php-fpm instances all trying to listen on port 9000.

How can I have each php-fpm instance on a different port but still know where they are in my nginx config at any given time?

Am I taking the wrong approach?

Thanks!

One solution is to add additional php-fpm instances to your docker-compose file and then use an nginx upstream as mentioned in the other answers to load-balance between them. This is done in this example docker-compose repo: https://github.com/iamyojimbo/docker-nginx-php-fpm/blob/master/nginx/nginx.conf#L137

upstream php {
    # With no balancing directive here, nginx defaults to round-robin.
    #least_conn;
    server dockernginxphpfpm_php1_1:9000;
    server dockernginxphpfpm_php2_1:9000;
    server dockernginxphpfpm_php3_1:9000;
}

This isn't really ideal because it requires changing both the nginx config and docker-compose.yml whenever you want to scale up or down.
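For completeness, the matching compose file simply repeats the php-fpm service under different names (the service names here are assumptions chosen to line up with the upstream block above):

```yaml
php1:
    image: myname/php-app
    volumes:
        - /var/www
php2:
    image: myname/php-app
    volumes:
        - /var/www
php3:
    image: myname/php-app
    volumes:
        - /var/www
```

With a compose project directory named dockernginxphpfpm, these become the containers dockernginxphpfpm_php1_1 and so on that the upstream block refers to.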

Note that the 9000 port is internal to the container and not your actual host, so it doesn't matter that you have multiple php-fpm containers on port 9000.

Docker acquired Tutum this fall. They have a solution that combines an HAProxy container with their API to automatically adjust the load-balancer config to match the running containers it is balancing. That is a nice solution. Then nginx points to the hostname assigned to the load-balancer. Perhaps Docker will further integrate this type of solution into their tools following the Tutum acquisition. There is an article about it here: https://web.archive.org/web/20160628133445/https://support.tutum.co/support/solutions/articles/5000050235-load-balancing-a-web-service

Tutum is currently a paid service. Rancher is an open-source project that provides a similar load-balancing feature. It also has a "rancher-compose.yml" which can define the load-balancing and scaling of the services set up in the docker-compose.yml. http://rancher.com/the-magical-moment-when-container-load-balancing-meets-service-discovery/ http://docs.rancher.com/rancher/concepts/#load-balancer

UPDATE 2017/03/06: I've used a project called interlock that works with Docker to automatically update the nginx config and restart it. Also see @iwaseatenbyagrue's answer which has additional approaches.

You can use an upstream to define multiple backends, as described here:

https://stackoverflow.com/questions/5467921/how-to-use-fastcgi-next-upstream-in-nginx

You'd also want the config updated whenever backends die or new ones come into service, with something like:

https://github.com/kelseyhightower/confd
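As a hedged sketch of what that looks like with confd (the key path /services/php and the file names are assumptions, not something from the question): a template resource tells confd which keys to watch and how to reload nginx, and a Go template renders the upstream block from whatever backends are currently registered in the backing store.

```toml
# /etc/confd/conf.d/php-upstream.toml (hypothetical template resource)
[template]
src        = "php-upstream.tmpl"
dest       = "/etc/nginx/conf.d/php-upstream.conf"
keys       = ["/services/php"]
reload_cmd = "nginx -s reload"
```

```
# /etc/confd/templates/php-upstream.tmpl (hypothetical template)
upstream php {
{{range getvs "/services/php/*"}}    server {{.}};
{{end}}}
```

When a backend is added or removed under /services/php, confd rewrites the file and reloads nginx for you.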

Although this post is from 2015 and I feel like I'm necroing it (sorry, community), I feel it's valuable to add at this point in time:

Nowadays (and since Kubernetes was mentioned), when you're working with Docker you can use Kubernetes or Docker Swarm very easily to solve this problem. Both orchestrators take in your Docker nodes (one node = one server with Docker on it); you deploy services to them, and they handle the port challenges for you using overlay networks.

As I am more versed in Docker Swarm, this is how you would approach the problem (assuming you have a single Docker node):

Initialize the swarm:

docker swarm init

cd into your project root

cd some/project/root

create a swarm stack from your docker-compose.yml (instead of using docker-compose):

docker stack deploy -c docker-compose.yml myApp

This will create a Docker Swarm service stack called "myApp" and will manage the ports for you. This means: you only have to add one "9000:9000" entry under ports: in your php-fpm service in your docker-compose file, and then you can scale the php-fpm service up to, say, three instances; the swarm will automatically load-balance the requests between the three instances without any further work needed.
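For reference, docker stack deploy needs the version-3 compose format; a minimal sketch (the image and service names come from the question, the deploy block is an assumption):

```yaml
version: "3"
services:
    app:
        image: myname/php-app
        deploy:
            replicas: 3    # swarm balances requests across these via the service VIP
    web:
        image: myname/nginx
        ports:
            - "80:80"
```

Because the service name resolves inside the overlay network, the existing fastcgi_pass app:9000; line keeps working, and you can rescale at runtime with docker service scale myApp_app=5.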

In the case where your nginx and php-fpm containers are on the same host, you can configure a small dnsmasq instance on the host to be used by the nginx container, and run a script that automatically updates the DNS records when a container's IP address changes.

I've written a small script to do this (pasted below), which automatically updates DNS records named after the containers and points them at the containers' IP addresses:

#!/bin/bash

# 10 seconds interval time by default
INTERVAL=${INTERVAL:-10}

# dnsmasq config directory
DNSMASQ_CONFIG=${DNSMASQ_CONFIG:-.}

# commands used in this script
DOCKER=${DOCKER:-docker}
SLEEP=${SLEEP:-sleep}
TAIL=${TAIL:-tail}

declare -A service_map

while true
do
    changed=false
    while read -r line
    do
        name=${line##* }
        ip=$(${DOCKER} inspect --format '{{.NetworkSettings.IPAddress}}' "$name")
        if [ -z "${service_map[$name]}" ] || [ "${service_map[$name]}" != "$ip" ] # IP address changed
        then
            service_map[$name]=$ip
            # write to file
            echo "$name has a new IP address: $ip" >&2
            echo "host-record=$name,$ip" > "${DNSMASQ_CONFIG}/docker-$name"
            changed=true
        fi
    done < <(${DOCKER} ps | ${TAIL} -n +2)

    # an IP address change occurred, restart dnsmasq
    if [ "$changed" = true ]
    then
        systemctl restart dnsmasq
    fi

    ${SLEEP} "$INTERVAL"
done
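Each file the script writes holds a single dnsmasq host-record directive mapping a container name to its IP; sketched here with hypothetical values:

```shell
# Build the one-line dnsmasq record for a container name and IP address.
render_record() {
    printf 'host-record=%s,%s\n' "$1" "$2"
}

render_record myapp_php_1 172.17.0.5    # host-record=myapp_php_1,172.17.0.5
```

dnsmasq picks these files up from its config directory on restart, so the name myapp_php_1 then resolves to 172.17.0.5 for any container using the host's resolver.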

Then, start your nginx container with --dns host-ip-address, where host-ip-address is the IP address of the host on the docker0 interface.

Your Nginx configuration should resolve names dynamically:

server {
  resolver host-ip-address;
  listen 80;
  server_name @server_name@;
  root /var/www/@root@;
  index index.html index.htm index.php;

  location ~ ^(.+?\.php)(/.*)?$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$1;
    set $backend "@fastcgi_server@";
    fastcgi_pass $backend;
  }
}


If your nginx and php-fpm are on different hosts, you can try @smaj's answer.

Another approach might be to look into something like consul-template.

And of course, at some point, Kubernetes may need to be mentioned.

However, you could consider a slightly more 'bits of string and duct tape' approach by looking at what consuming docker events could do for you (run docker events --since 0 for a quick sample).

It would be reasonably trivial to have a script watching these events (bearing in mind there are several client packages available, including for Python, Go, etc.), amending a config file, and reloading nginx (in effect the consul-template approach, but without the need for Consul).
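A sketch of the config-regeneration half of that idea (the function name, port, and paths are assumptions): given the current set of backend IPs, rewrite the upstream block, then reload nginx.

```shell
# Render an nginx upstream block from a list of backend container IPs.
render_upstream() {
    echo "upstream php {"
    for ip in "$@"; do
        echo "    server ${ip}:9000;"
    done
    echo "}"
}

# In the event-watching loop you would write this out and reload, roughly:
#   render_upstream $current_ips > /etc/nginx/conf.d/php-upstream.conf && nginx -s reload
render_upstream 172.18.0.2 172.18.0.3
```

The event watcher's only real job is to collect the live container IPs and call something like this whenever a start or die event arrives.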

To go back to your original premise, though: so long as your php-fpm containers are started with their own network (i.e. not sharing that of another container, such as the nginx one), then you can have as many containers listening on port 9000 as you want - as they have per-container IPs, there is no issue with ports 'clashing'.

How you scale this will likely depend on your ultimate goal or use-case, but one thing you might consider is placing HAProxy between nginx and your php-fpm nodes. This would let you nominate a range (possibly on a dedicated Docker network) for your php-fpm servers (e.g. 172.18.0.0/24) and configure HAProxy to try any IP within that range as a backend. Since HAProxy has health checks, it can quickly identify which addresses are live and make use of them.
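A hedged HAProxy sketch of that idea (the backend name and addresses are assumptions drawn from the example range above): enumerate candidate addresses from the range with health checks enabled, and HAProxy will only route traffic to the ones that are actually up.

```
backend php_pool
    mode tcp
    # Candidate addresses from the nominated range; 'check' marks dead ones down.
    server php1 172.18.0.2:9000 check
    server php2 172.18.0.3:9000 check
    server php3 172.18.0.4:9000 check
```

nginx then points its fastcgi_pass at HAProxy's frontend instead of at an individual php-fpm container.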

See https://stackoverflow.com/questions/1358198/nginx-removing-upstream-servers-from-pool for a discussion of how nginx and HAProxy handle upstreams.

Unless you were using a dedicated docker network for this, you might need to do some manual IP management for your php-fpm nodes.