Let's say I have a Docker host set up with 50 containers, each running a site served by Apache.
As I understand it, each container will run its own Apache instance, and a typical Apache instance uses ~250 MB of RAM. Apache then requires a few MB per child process.
Am I correct in assuming each container will require the memory of a full Apache instance? E.g. would the 50 sites require 50 × ~300 MB?
Or is Apache able to share some portions of memory between containers to improve memory efficiency?
Is Docker suitable for efficient "mass" hosting (e.g. a large number of sites, each requiring few resources) where each site is a container? Or would it only be feasible to have one Apache container serving all 50 sites?
Docker provides isolation between Apache instances, which may be desirable for many reasons (for example, if each website is administered by a different user), and it also allows easy relocation of instances to another server. If you don't need that, you'd probably get better performance with just one instance of Apache.
Isolation means that resource usage will be fairly similar to using virtual machines, except that you don't pay the virtualization overhead, the memory-partitioning overhead, or the per-VM operating-system overhead. That said, Apache memory usage depends mostly on server load, so you shouldn't expect it to increase tenfold just because you split one big server into many small ones. Also, since there is only one kernel, disk caches are shared between containers, so if the disk access patterns of two instances are similar, you get a small performance boost.
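To make the per-container setup concrete — this is just an illustrative sketch, with made-up container names and limits, using the stock `httpd` image — you can cap each container's memory with Docker's `-m` flag and then check what each instance actually uses:

```shell
# One Apache container per site, each with a hard memory cap (values illustrative).
docker run -d --name site1 -m 128m httpd:2.4
docker run -d --name site2 -m 128m httpd:2.4

# Show real per-container memory usage; under light load each instance
# typically sits well below a "full" ~250 MB Apache footprint.
docker stats --no-stream site1 site2
```

Since Apache's footprint grows with load rather than with the number of instances, the caps mainly serve as a safety net so one busy site can't starve the other 49.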
If I'm understanding your question, and the concern is what memory is shareable between container instances, then the answer for shared libraries is: it depends.
The second benefit is that copy-on-write and page sharing apply to all processes on the host, regardless of containers. For example, 1000 containers mapping the same file into memory (say, a library or an executable) will use that memory space only once. If they write to the mapping, only the pages they write to are copied.

The caveat is that the filesystem needs to be aware that your containers are mapping the same file. Currently that is the case for the aufs storage driver (which operates at the filesystem layer) but not for the lvm/devicemapper driver (which operates at the block level and therefore doesn't benefit). The zfs and btrfs drivers under development should also benefit from page caching. So in a scenario where you run thousands of containers that map large identical files into memory, I'd expect the aufs driver to give you better memory utilization today — but we haven't benchmarked this.
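The copy-on-write behavior described above can be demonstrated outside Docker, since it is a property of the kernel's page cache. Below is a minimal sketch using Python's `mmap` with a throwaway temp file standing in for a shared library: two private (copy-on-write) mappings of the same file share its pages until one of them writes, and the write never reaches the file or the other mapping.

```python
import mmap
import os
import tempfile

# Create a temp file standing in for a shared library or executable.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"shared-library-bytes" * 256)

f1 = open(path, "r+b")
f2 = open(path, "r+b")
# ACCESS_COPY gives a private, copy-on-write mapping (MAP_PRIVATE on Linux):
# reads are served from the shared page cache; writes dirty private copies.
m1 = mmap.mmap(f1.fileno(), 0, access=mmap.ACCESS_COPY)
m2 = mmap.mmap(f2.fileno(), 0, access=mmap.ACCESS_COPY)

m1[0:6] = b"XXXXXX"  # dirties only m1's private pages

print(bytes(m1[0:6]))                 # m1 sees its own modified copy
print(bytes(m2[0:6]))                 # m2 still sees the original bytes
with open(path, "rb") as f:
    print(f.read(6))                  # the file on disk is untouched
```

This is exactly why 1000 containers mapping the same library pay for it once: the kernel keeps one set of clean pages and only duplicates the pages a process actually modifies.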