Automatically start a docker container's linked dependencies

I run gitlab in a docker container, and it separates its dependencies (MySQL, Redis, mail server) quite nicely into separate docker containers. Running them is not a problem: I start them in reverse order, the dependencies first, then gitlab itself.

From time to time I have to restart the docker host. Currently I ssh into the docker host and restart the containers by hand. Is there a better way to do this? Ideally I would tell some service to start the gitlab container, and it would take care of starting the dependencies first. I know I could create individual init scripts for each docker container, but that's not what I'm looking for.

You might also want to look into the Fig project, which has since been replaced by the official Docker Compose. It should be fairly easy to configure and set up.

Your use case of running gitlab is basically the same as the Fig Wordpress example; alternatively, you could use the gitlab-compose script.

And if you're working on a Mac, you might want to have a look at the Docker Toolbox, which includes Compose along with various other tools for getting up and running quickly.
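To give a concrete idea, a minimal docker-compose.yml for this kind of setup might look like the following. This is only a sketch: the image names, ports, and environment variables are assumptions you would adapt to your own installation.

```yaml
# Hypothetical compose file for gitlab plus its dependencies.
# Because of the links, Compose starts mysql and redis before gitlab.
gitlab:
  image: gitlab/gitlab-ce
  links:
    - mysql
    - redis
  ports:
    - "8080:80"

mysql:
  image: mysql:5.6
  environment:
    MYSQL_DATABASE: gitlabhq_production

redis:
  image: redis
```

With this in place, `docker-compose up -d` brings up the whole stack in dependency order, which replaces the manual restart-everything-over-ssh routine.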

I think you can look at Decking.

You can also manage dependencies the way CoreOS does it, by writing a systemd unit file for your main gitlab container like:

[Unit]
...
Requires=docker.service
Requires=redis.service
Requires=mysql.service
After=docker.service redis.service mysql.service
...
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill gitlab
ExecStartPre=-/usr/bin/docker rm gitlab
ExecStart=/usr/bin/docker run --name gitlab gitlab
ExecStop=/usr/bin/docker stop gitlab

Where mysql.service is the unit file for the MySQL container, redis.service the one for Redis, etc. Note that Requires= only pulls the dependencies in; the After= line is what actually makes systemd start them first.
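A matching dependency unit might look like this (a sketch; the container and image names are assumptions, and you would adjust them to however you created the MySQL container):

[Unit]
Description=MySQL container
Requires=docker.service
After=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill mysql
ExecStartPre=-/usr/bin/docker rm mysql
ExecStart=/usr/bin/docker run --name mysql mysql
ExecStop=/usr/bin/docker stop mysql

[Install]
WantedBy=multi-user.target

After `systemctl enable` on these units, restarting the docker host brings everything back in the right order automatically.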

In case anyone finds this useful, I wrote a fish shell script (it should be easily portable to bash) that uses docker inspect to start all dependencies of my containers. Here is the code, using jq to parse the JSON:

#!/usr/local/bin/fish

# Start all containers

# Returns all the dependencies of the input plus the input itself, e.g. [dep1, dep2, input]
function docker_links_lookup
    set result (docker inspect $argv[1] | jq ".[0].HostConfig.Links" | pcregrep -o1 "\"/(.*):.*\"")
    for x in $result 
        docker_links_lookup $x
        echo $x
    end
end

# Returns all docker containers in the current directory, including their dependencies
function all_docker_containers 
    for dir in */
        if test -f "$dir/Dockerfile"
            set container_name (echo $dir | sed "s/\///") #remove trailing /
            docker_links_lookup $container_name
            echo "$container_name"
        end
    end
end

# Take all docker containers and dependencies, filter out duplicates without changing the order (the awk command), then start the containers in that order
all_docker_containers | awk '!seen[$0]++' | xargs docker start

Note that this code assumes the current directory has one subdirectory per docker container, named after the container. It also doesn't deal with circular dependencies (I don't know if any of the other tools do), but then it was written in under half an hour. If you only have a single container, you can use the docker_links_lookup function directly:

docker_links_lookup {{container_name}} | awk '!seen[$0]++' | xargs docker start
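For reference, the `awk '!seen[$0]++'` filter drops duplicate lines while keeping each line's first occurrence in place, which is what keeps every dependency ahead of the containers that need it:

```shell
# seen[$0]++ evaluates to 0 (false) the first time a line appears,
# so !seen[$0]++ is true exactly once per distinct line.
printf 'mysql\nredis\nmysql\ngitlab\nredis\n' | awk '!seen[$0]++'
# prints:
# mysql
# redis
# gitlab
```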

Edit:

Another handy function I started using in the above script is this one:

# This starts the docker containers that are passed in, and waits on the ports they expose
function start_and_wait
    for container in $argv
        set ports (docker inspect $container | jq ".[0].Config.ExposedPorts | keys" 2>/dev/null | egrep -o "[0-9]+" | xargs)
        docker start $container
        docker run -e PORTS="$ports" --link $container:wait_for_this n3llyb0y/wait > /dev/null
    end
end

Instead of just starting a container, it looks up the ports the container exposes and tests whether it can connect to them. This is useful for things like a database container, which may perform cleanup when started and therefore take some time to actually become available on the network. Use it like this:

start_and_wait {{container_name}}

Or in case you are using the above script, replace the last line with this:

start_and_wait (all_docker_containers | awk '!seen[$0]++' | xargs -n 1)

This last line makes sure that all containers are started only after their dependencies, while also waiting for the dependencies to actually complete their startup. Note that this is probably not applicable to every setup, since some servers might open their ports right away without actually being ready (I don't know of any servers that actually do this, but it is the reason the docker developers give when asked about a feature like this).
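The wait image used above essentially polls each exposed port until it accepts a TCP connection. If you'd rather not pull an extra image, a minimal equivalent can be written in plain bash; this is a sketch using bash's /dev/tcp pseudo-device (the function name and retry count are my own):

```shell
# Poll a TCP port until it accepts a connection, or give up after N tries.
# Uses bash's built-in /dev/tcp, so it needs no nc or other extra tools.
wait_for_port() {
    host=$1; port=$2; tries=${3:-60}
    while [ "$tries" -gt 0 ]; do
        # The subshell exits 0 as soon as the connect succeeds.
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Example: fail fast on a port nothing listens on.
wait_for_port 127.0.0.1 1 1 || echo "port closed"
```

You could call this once per port from the `$ports` list inside start_and_wait instead of running the wait container. The same caveat applies: a port being open only proves the server accepted the connection, not that it is fully ready.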