Is it possible to control how much memory individual containers are limited to?
For example allocate 16GB to Postgres, but limit Rails containers to 4GB.
What sort of minimum memory should be allocated to the server host running docker-machine, and is this even possible?
This answer https://serverfault.com/a/645888/210752 indicates that the containers will allocate memory as needed. This has not been my experience in the development environment (by default the docker-machine was allocated 2GB).
As mentioned in the answer you linked in your 2nd edit, containers are not like VMs: you don't usually reserve memory for them as you would with virtual machines. Because all containers run on the same OS, the kernel dispatches memory to their processes as needed, just as if they were not running in containers. That is to say, memory is pooled across all processes, regardless of which container they belong to.
What you set in the docker-machine example above is the 'virtual' host's total memory pool. In your production case, that pool will be the whole 128 GB (unless you plan to also use docker-machine or VMs to segment it).
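If you do keep docker-machine around (for development, say), that virtual host's memory pool is set at creation time. A minimal sketch, assuming the VirtualBox driver and an arbitrary 8 GB size:

    # create a docker-machine VM with an 8 GB memory pool (VirtualBox driver assumed)
    docker-machine create -d virtualbox --virtualbox-memory 8192 default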
However, containers are also a great way to make use of the kernel's cgroups (control groups) features, which let you configure resource management for a whole container system. This does not let you 'reserve' memory for a container, but you can set an upper bound on each container's memory so one container won't eat up memory that could be used by others (in the event of a leak or a bug, for example).
With Docker, depending on the container backend in use, you can set basic memory limits as follows (a short example follows the list):
When running Docker's default libcontainer backend, by starting the containers with the -m or --memory option
When running the legacy LXC driver, by starting the containers with the cgroup option lxc.cgroup.memory.limit_in_bytes=<amount> passed via --lxc-conf
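For example, with the default backend you could cap the two containers from your question roughly like this (container names and images here are only placeholders, adjust to your setup):

    # cap Postgres at 16 GB and the Rails app at 4 GB (names and images assumed)
    docker run -d --name postgres -m 16g postgres
    docker run -d --name rails-app -m 4g my-rails-image

Note that -m only sets an upper bound; the container still allocates memory on demand up to that limit, so this is compatible with the "allocate as needed" behaviour described in the answer you linked.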
@HopelessN00b no problem … I think the question was not worded so well at the start. I’ve worked with infrastructure a lot but Docker is new to me. I’ve put up a bounty … have a great day!