Docker: Running cron jobs for a different container

I am looking for best practices for running cron jobs against my PHP-FPM container.

Right now I am running:

  • NGINX Container
  • PHP FPM Container
  • MySQL Container

I would now love to have another container running, called the "cronjob container", which execs a script within my PHP-FPM container (I need some of PHP's dependencies).

So there are three possible solutions:

1.) Running a separate container

I would love to use this solution!

It would be good to have a container running cron that is able to (somehow) call docker exec on my PHP-FPM container, or that achieves the same thing another way.

2.) Running cron inside the PHP container

This would be okay, but it is not best practice. I could start a second process inside my PHP-FPM container to run cron. That would work, but I am not sure whether that is how you are supposed to work with Docker.

3.) Running the host's cron

This would be cruel. I would need to find the process ID and container ID for a given path and then run docker exec. But this is more or less my last resort... and I hate managing cron jobs outside of the deployment.

So what is the best approach here?

Have a nice day,

Bastian

I've written a daemon that observes containers and schedules jobs, defined in their metadata, on them. This comes closest to your solution 1.). Example:

version: '2'

services:
  wordpress:
    image: wordpress
  mysql:
    image: mariadb
    volumes:
      - ./database_dumps:/dumps
    labels:
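      # deck-chores reads these labels and runs the job inside this container;
      # the doubled $$ escapes $ so the date is expanded by the shell, not by Compose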
      deck-chores.dump.command: sh -c "mysqldump --all-databases > /dumps/dump-$$(date -Idate)"
      deck-chores.dump.interval: daily

'Classic', cron-like configuration is also possible.

Here are the docs, here's the image repository.
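
To run the daemon itself, mount the host's Docker socket so it can observe the other containers; a minimal sketch, assuming the image name from the deck-chores repository:

# run the deck-chores daemon with access to the Docker socket
docker run -d --name deck-chores \
  -v /var/run/docker.sock:/var/run/docker.sock \
  funkyfuture/deck-chores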

Cron itself can be installed and run in the foreground (cron -f), which makes it very easy to run in a container. To access other containers, you would typically install Docker in the same container, but only for the client CLI (not to run the daemon). Then, to access the host's Docker environment, the most common solution is to bind mount the Docker socket (-v /var/run/docker.sock:/var/run/docker.sock). The only gotcha is that you need to set up the docker gid inside your container to match the host's gid, and then add the users inside the container to the docker group.
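
A minimal sketch of such a cron container, assuming the official docker:cli image (Alpine-based, so busybox crond is already on board); the container names, script path, and schedule below are placeholders:

# Dockerfile: a small cron container that ships the docker client
FROM docker:cli
# root's crontab lives at /etc/crontabs/root in busybox cron
COPY crontab /etc/crontabs/root
# run cron in the foreground as the container's main process
CMD ["crond", "-f"]

# crontab: run a PHP script inside the FPM container every five minutes
*/5 * * * * docker exec my-php-fpm php /var/www/html/scripts/cron.php

# build and run with the host's Docker socket bind mounted
docker build -t cron-runner .
docker run -d --name cron-runner -v /var/run/docker.sock:/var/run/docker.sock cron-runner

Since crond runs as root here, the docker gid matching is sidestepped; it becomes relevant as soon as the jobs run as a non-root user.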

This does mean that those users have the same access as any docker user on the host, i.e. root-level access, so you need to either fully trust the users submitting the jobs, or limit the commands they can run with some kind of sudo equivalent. The other downside is that this is less portable, and security-aware admins will be unlikely to approve running your containers on their systems.

The fallback to option 2.) is very easy with a tool like supervisord. While this is less than the ideal "one process per container", it is not quite an anti-pattern either, since it keeps your entire container and its dependencies together and removes any security risk to the host.
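
For illustration, a minimal supervisord.conf along those lines - a sketch assuming cron and supervisor are installed in the PHP-FPM image:

; run supervisord itself in the foreground as PID 1
[supervisord]
nodaemon=true

; -F keeps php-fpm in the foreground so supervisord can manage it
[program:php-fpm]
command=php-fpm -F
autorestart=true

; cron in the foreground as the second supervised process
[program:cron]
command=cron -f
autorestart=true

The container's CMD then points at supervisord instead of php-fpm.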

Whether you go with the first or the second option comes down to your environment: who is submitting the jobs, how many containers need to have jobs submitted against them, etc. If an admin is submitting jobs against lots of containers, then a cron container makes sense. But if you are the application developer who needs to ship a scheduled job as part of your app, go for the second option.

Run cron in another container, or even on the host, but run the script via php-fpm (e.g. have cron "curl" the PHP script, or something similar).

Make sure you secure such a setup with a security token, network limitations, etc. An enhancement could be a separate php-fpm pool with dynamically spawned processes, capped at a maximum of one process. This pool would only be accessible by the cron. It could also benefit from its own individual settings, such as a much longer execution time and more or less memory.
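
A sketch of such a pool, using standard php-fpm pool directives (pm = ondemand gives the spawn-on-demand, at-most-one-worker behaviour; the pool name, port, and limits are assumptions):

; dedicated pool for cron-triggered scripts
[cron]
user = www-data
group = www-data
; a different port than the nginx-facing pool; keep it reachable only by the
; cron container (listen.allowed_clients can restrict it further by IP)
listen = 9001
; spawn workers on demand, at most one at a time
pm = ondemand
pm.max_children = 1
pm.process_idle_timeout = 10s
; pool-specific overrides: batch jobs may need a longer runtime and more memory
request_terminate_timeout = 3600
php_admin_value[max_execution_time] = 3600
php_admin_value[memory_limit] = 512M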

P.S.: You can use something like this to call the script directly in the FPM container, without going through nginx.

Reasoning: you probably want access to the same libraries, the same configuration, etc. Running a randomly spawned process that is not controlled by a signal manager is a really bad idea in Docker.

I was trying to achieve something similar.

My initial idea was to start the cron jobs from a separate cron container and actually execute them within another container (php), i.e. to have one crontab record for each docker run -it $containerName $scriptName ... command, each running a different script within the php container.

That approach is not really good because of the disadvantages @BMitch also mentions. Also, I don't really like installing Docker inside the container.

I would like to offer another solution that fits under your category 1.): one can invoke php-fpm directly. Although it is not the most elegant solution in the world, it offers several advantages:

  1. Security - no special or privileged access; you just use the host and port (like php-host:9000) that are already open for nginx within the Docker virtual network
  2. Cron management stays separated from the php container, so scaling is not harmed
  3. Cron is actually used for cron-ish tasks - just install the crontab and be done with it, instead of reimplementing cron via various other libs
  4. Script execution doesn't go through nginx, so no one can run the scripts directly via the webserver, and you don't need to implement any auth or similar mechanisms
  5. Even fewer problems with permissions. My previous cron dockerization had cron installed in another php container, with the codebase shared using volumes. That was efficient, but permissions had to be handled carefully, as caches, DI, logs, etc. had to be accessible and writable by both the webserver and the cron user. This approach eliminates that issue

The only disadvantage I have encountered so far is that the first line with the hashbang (#!/usr/local/bin/php) is treated as actual output, and a PHP warning about headers already being sent is emitted (Cannot modify header information - headers already sent by ...) - dropping the hashbang fixes this.

How to actually do it?

  1. Have a clean container, for example alpine:3.7
  2. Install cron and fcgi via apk (the fcgi package provides the cgi-fcgi binary)
  3. Run something like the following from within the crontab:

SCRIPT_FILENAME=/docroot/scripts/cron/example-script.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect php-fpm:9000
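
Put together, a crontab entry for this could look as follows (the schedule and paths are examples; the VAR=value prefix works because cron hands the command to /bin/sh):

# run the example script every night at 03:00
0 3 * * * SCRIPT_FILENAME=/docroot/scripts/cron/example-script.php REQUEST_METHOD=GET cgi-fcgi -bind -connect php-fpm:9000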

More info on the topic: Directly connect to PHP-FPM

I did that (with Docker Compose, more or less automatically), but how would you start some kind of PHP-based cron job with just some kind of network command? If I use port 80, PHP's normal timeout works against me, which makes no sense... I am not sure what a good solution would be!

Not that I understand or can test any of this, but there's an example in that document that says you can run a command like docker run --rm --name web2 --link db:db training/webapp commandname. So if that works for you, run cron in one container and have it issue commands like that against the other container(s).

#1! - Sorry, I don't really know. Isn't the point of containers that they are "contained"? Like, "secure"? I don't use Docker or application containers, so I don't know exactly what I'm talking about, though...

You are totally right that they should be "contained", but you need some of them to work together as one piece of software. If you are not able to run your software as one process (and, to be honest, unless you are a pro who can write your own database from scratch, you are not), you need a few containers that work together. And I don't know how to run cron jobs with them =/.

Communicating between containers (linking or networking) is covered in the Docker networking documentation.
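
On that note, the networking side is simple: containers on the same user-defined network resolve each other by name, so a cron container can reach php-fpm:9000 directly. A sketch with example names:

# create a shared network and attach both containers
docker network create app-net
docker run -d --network app-net --name php-fpm my-php-fpm-image
docker run -d --network app-net --name cron cron-runner

Docker Compose sets up such a network automatically, so services in one compose file already resolve each other by service name.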