Is it possible to use Docker to separate web sites for users?

I manage servers where users have their own websites that they access over FTP (like a hosting company), and instead of working on isolating the LAMP stack processes, I was wondering whether it would be possible to implement Docker and use one image per website.

From what I understand, you expose Docker containers via their ports, so if you run two containers on the same server, you'll have to expose two different ports.

But is it possible to expose not ports but server names, like:

  • www.somewebsite.com : Docker instance 1
  • www.otherwebsite.com : Docker instance 2
  • www.etc.com : Docker instance ...

And all of that on the same server.

I thought about installing only Apache on the server, which would redirect requests to the dedicated Docker container based on the server name, but then I would have to install Apache (again!) and MySQL in every Docker container.

Is this possible, and moreover, is it interesting in terms of performance (or not at all)?

Thank you for your help.

Yes, it is possible. What you need to do is provide several port-80 endpoints, one for each URL. You can do this using, e.g., Apache virtual hosts on the Docker host server.

  1. Set a DNS CNAME record for each site, pointing at the Docker host.
  2. Run the Docker containers and map their port 80 to host ports, say, 12345-12347 (see the sketch after this list).
  3. Run an Apache server on the Docker host, set up a virtual host for each URL, and point ProxyPass and ProxyPassReverse at localhost:12345, which is one of your containers.
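For step 2, the port mappings could be created along these lines (the image and container names are just placeholders for illustration):

# map each container's port 80 to its own port on the Docker host
docker run -d --name somewebsite  -p 12345:80 somewebsite-image
docker run -d --name otherwebsite -p 12346:80 otherwebsite-image
docker run -d --name etc          -p 12347:80 etc-image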

The Apache config file will look like this:

<VirtualHost *:80>
  ServerName www.somewebsite.com
  <Proxy *>
    Allow from localhost
  </Proxy>
  ProxyPass        / http://local.hostname.ofDockerHost:12345/
  ProxyPassReverse / http://local.hostname.ofDockerHost:12345/
</VirtualHost>
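Note that ProxyPass and ProxyPassReverse require mod_proxy and mod_proxy_http to be enabled on the Docker host. On a Debian/Ubuntu-style host that would be something like:

a2enmod proxy proxy_http
systemctl restart apache2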

I know this has already been answered; however, I wanted to take it one step further and show an example of how this could be done, to provide a more complete answer.

Please see my Docker image, with instructions on how to use it; it will show you how to configure two sites: https://hub.docker.com/r/vect0r/httpd-proxy/

As jihun said, you will have to make sure your vhost configuration is set. My example uses port 80 to serve a test site example.com and port 81 to serve a test site example2.com. It is also important to note that you will need to specify your content and expose the required ports in your Dockerfile, like so (a sketch of the matching vhost files follows the Dockerfile):

FROM centos:latest
MAINTAINER vect0r
LABEL Vendor="CentOS"

RUN yum -y update && yum clean all
RUN yum -y install httpd && yum clean all

EXPOSE 80 81

# Simple startup script to avoid some issues observed with container restarts
ADD run-httpd.sh /run-httpd.sh
RUN chmod -v +x /run-httpd.sh

# Copy config files and site content across
COPY ./httpd.conf /etc/httpd/conf/httpd.conf
COPY ./example.com /var/www/example.com
COPY ./example2.com /var/www/example2.com
COPY ./sites-available /etc/httpd/sites-available
COPY ./sites-enabled /etc/httpd/sites-enabled

CMD ["/run-httpd.sh"]
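For reference, here is a minimal sketch of the kind of vhost files that could sit in sites-available/sites-enabled for the two test sites above (the filenames and DocumentRoot paths are assumptions; httpd.conf would also need a Listen 81 directive and an Include line for sites-enabled):

# sites-enabled/example.com.conf (hypothetical filename)
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

# sites-enabled/example2.com.conf (hypothetical filename)
<VirtualHost *:81>
    ServerName example2.com
    DocumentRoot /var/www/example2.com
</VirtualHost>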

Hope this helps explain the process a little bit more. Please feel free to ask me any further questions on this; happy to help.

Regards,

V

It is possible. You may use Apache (or better yet, HAProxy, nginx, or Varnish, which may be more efficient than Apache for just that reverse-proxy task) on the main server to forward requests to the Apache port of each container.

But, depending on the sites you run there (and their Apache configurations), it may require far more memory than a single central Apache with virtual hosts, especially if you have modules (e.g. PHP) that require a lot of RAM.

In my case I needed to add SSLProxyEngine On, ProxyPreserveHost On and RequestHeader set Front-End-Https "On" to my Apache 2.4 vhost file, because I wanted to enable SSL on the Docker container. About local.hostname.ofDockerHost: in my case the host server running the Docker container was named lucas, and the port mapped to port 443 of the container was 1443 (because port 443 was already in use by Apache on the host server), so the ProxyPass line ended up as https://lucas:1443/.
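For reference, that kind of mapping would be created with a docker run along these lines (the image name here is just a placeholder):

# publish the container's port 443 on host port 1443, since 443 is already taken by Apache on the host
docker run -d --name somewebsite -p 1443:443 some-ssl-enabled-image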

This is the final setup, and it's working just fine!

# Change to *:80 and drop SSLProxyEngine / RequestHeader if HTTPS is not required
<VirtualHost *:443>
    ServerName www.somewebsite.com
    <Proxy *>
        Allow from localhost
    </Proxy>
    SSLProxyEngine On
    RequestHeader set Front-End-Https "On"
    ProxyPreserveHost    On
    # in my case this ended up as https://lucas:1443/
    ProxyPass        / https://local.hostname.ofDockerHost:12345/
    ProxyPassReverse / https://local.hostname.ofDockerHost:12345/
</VirtualHost>

Finally, in the Docker container I had to set up the proxy SSL headers. In my case, the container was running nginx and something called Omnibus for setting up Ruby apps. I think this can be set up in an nginx config file as well. I'll write it down as-is, just in case someone finds this helpful:

nginx['redirect_http_to_https'] = true
nginx['proxy_set_headers'] = {
    "Host" => "$http_host",
    "X-Real-IP" => "$remote_addr",
    "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
    "X-Forwarded-Proto" => "https",
    "X-Forwarded-Ssl" => "on"
}
nginx['real_ip_trusted_addresses'] = ['10.0.0.77'] # IP for lucas host
nginx['real_ip_header'] = 'X-Real-IP'
nginx['real_ip_recursive'] = 'on'
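For a container running plain nginx rather than the Omnibus-managed one, a roughly equivalent server block might look like the sketch below (the upstream port, certificate paths and trusted proxy IP are placeholders, and the real_ip directives need the ngx_http_realip_module):

server {
    listen 80;
    server_name www.somewebsite.com;
    # same effect as redirect_http_to_https above
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name www.somewebsite.com;
    ssl_certificate     /etc/nginx/ssl/server.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # trust the proxy host (lucas) when resolving the real client IP
    set_real_ip_from 10.0.0.77;
    real_ip_header X-Real-IP;
    real_ip_recursive on;

    location / {
        proxy_pass http://127.0.0.1:8080;            # wherever the app listens inside the container
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Ssl on;
    }
}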

A complete guide for Apache, ISPConfig, and Ubuntu Server 16.04 is here: https://www.howtoforge.com/community/threads/subdomain-or-subfolder-route-requests-to-running-docker-image.73845/#post-347744

Theoretically it is possible: Apache would ProxyPass to the port each Docker container is listening on.