docker-machine memory allocation

We have a fairly complex Rails app that is about to be deployed onto a single physical host. The host has 8 cores and 128 GB of RAM.

The app is dockerised, with four types of containers:

  • Rails inc. web server
  • Postgres DB
  • Redis DB
  • Worker container (Resque)

It is expected that the Rails and worker containers will be scaled by bringing up more containers within the docker-machine.

In the development environment, memory is allocated to the entire docker-machine:

docker-machine create -d virtualbox --virtualbox-memory 8192 default
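
As a sanity check (assuming the machine is named default, as above), the memory the VM actually received can be confirmed with:

docker-machine ssh default free -m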

Is it possible to control how much memory individual containers are limited to?

For example, allocate 16 GB to Postgres but limit each Rails container to 4 GB. What sort of minimum memory should be allocated to the host running docker-machine, and is this even possible?

EDIT

Related questions:

How to handle Docker memory management?

Docker + Apache, how does memory usage work?

EDIT 2

This answer https://serverfault.com/a/645888/210752 indicates that the containers will allocate memory as needed. This has not been my experience in the development environment (by default the docker-machine was allocated only 2 GB).

As mentioned in the answer to the question in your second edit, containers are not like VMs: you don't usually reserve memory for them the way you would for virtual machines. Because they all run on the same OS, the kernel can dispatch memory to different processes as needed, just as if they were not running in containers. That is to say, memory is pooled across all processes, regardless of which container they belong to.

What you set in the docker-machine example above was the 'virtual' host's total memory pool. In your production case it will be the whole 128 GB (unless you plan to also use docker-machine or VMs to segment it).

However, containers are also a great way to make use of the kernel's cgroups (control groups) features, which let you configure resource management for a whole container system. They do not let you 'reserve' memory for a container, but you can set an upper bound on each container's memory so that one won't eat up memory that could be used by the others (in the event of a leak or a bug, for example).

With Docker, depending on the container backend in use, you can set basic memory limits as follows:

  • When running Docker's default libcontainer backend, by starting the containers with the -m or --memory option (see the sketch after this list)
  • When running the legacy LXC driver, by passing the LXC option lxc.cgroup.memory.limit_in_bytes=amount via --lxc-conf
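
For instance, here is a minimal sketch of what that could look like for your setup, using the default libcontainer backend (the sizes are illustrative and my-rails-app is a hypothetical tag for your Rails image):

# Cap Postgres at 16 GB
docker run -d --name pg -m 16g postgres

# Cap each Rails container at 4 GB
docker run -d --name rails1 -m 4g my-rails-app

If a container exceeds its limit, the kernel's OOM killer acts within that container's cgroup rather than letting it starve the rest of the host.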

You can find more information about cgroups usage in Docker here: https://www.cloudsigma.com/manage-docker-resources-with-cgroups/

The article also contains slides of an introduction to cgroups functionalities.
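
If you want to verify that a limit actually took effect, one approach (a sketch; the second command assumes the cgroup v1 layout Docker uses, mounted at the usual /sys/fs/cgroup path):

# Ask Docker for the configured limit in bytes (0 means unlimited)
docker inspect -f '{{.HostConfig.Memory}}' pg

# Or read the cgroup value directly on the host
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes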

@ardochhigh Well, I can’t retract my close vote, or answer your question, so… sorry about that. Hopefully someone else can.

@HopelessN00b no problem … I think the question was not worded so well at the start. I’ve worked with infrastructure a lot but Docker is new to me. I’ve put up a bounty … have a great day!