I've just started studying Docker, and there's something that has been quite confusing for me. As I've read on Docker's website, a container is different from a virtual machine. As I understand it, a container is just a sandbox inside which an entire isolated file system runs.
I've also read that a container doesn't have a guest OS installed. Instead, it relies on the underlying OS's kernel.
All of that is fine. What confuses me is that there are Docker images named after operating systems: Ubuntu, Debian, Fedora, CentOS, and so on.
My point is: what are those images, really? How is creating a container based on the Debian image different from creating a virtual machine and installing Debian?
I thought containers had no guest OS installed, yet when we create images we base them on an image named after an OS.
Also, in the examples I've seen, when we do docker run ubuntu echo "hello world", it seems we are spinning up a VM with Ubuntu and making it run the command echo "hello world".
In the same way, when we do docker run -it ubuntu /bin/bash, it seems we are spinning up a VM with Ubuntu and accessing it through a command line.
Anyway, what are those images named after operating systems all about? How different is running a container with one of those images from spinning up a VM with the corresponding guest OS?
Is the idea that we just share the kernel with the host OS (and consequently have access to the underlying machine's hardware resources, without the need to virtualize hardware), but still use the files and binaries of each different system in the containers in order to support whatever application we want to run?
Since all Linux distributions run the same (yup, it's a bit simplified) Linux kernel and differ only in userland software, it's pretty easy to simulate a different distribution environment: just install that userland software and pretend it's another distribution. To be specific, running a CentOS container inside an Ubuntu OS means that you get the userland from CentOS, while still running the same kernel, not even another kernel instance.
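You can check this for yourself; a minimal sketch, assuming an Ubuntu host with Docker installed (the image tag is just an example):

```bash
# Host userland reports Ubuntu
cat /etc/os-release                            # NAME="Ubuntu"

# Container userland reports CentOS, yet it's the same running kernel
docker run --rm centos:7 cat /etc/os-release   # NAME="CentOS Linux"
```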
So lightweight virtualization is like having isolated compartments within the same OS. Real virtualization, au contraire, is having another full-fledged OS inside the host OS. That's why Docker cannot run FreeBSD or Windows inside Linux.
If it makes it easier, you can think of Docker as a kind of very sophisticated and advanced chroot environment.
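To push the analogy, here is a rough sketch (assuming a Linux host with Docker and root privileges) that unpacks a container's filesystem and enters it with plain chroot; Docker does essentially this for you, plus namespaces, cgroups, and image layering:

```bash
# Export an Ubuntu container's filesystem into a local directory
mkdir rootfs
docker export "$(docker create ubuntu)" | tar -xf - -C rootfs

# Enter it the old-fashioned way: an Ubuntu userland on the host's kernel
sudo chroot rootfs /bin/bash
```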
I was struggling with the same question that you're asking, and this is what I've come to understand.
Containers don't have a guest OS; you're right about that.
Then why do we base the container on an OS image?
Because you'd want to use some commands like apt, ls, cd, and pwd.
These commands are calls to binary files that may be available to you on your host OS without you installing anything.
For you to be able to run these commands inside your Docker container, you must have the binaries for them inside your image; because of isolation, you can't just execute binaries from the host OS.
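You can see that the binaries come from the image rather than the host; a small sketch, assuming Docker is installed (the exact paths are illustrative):

```bash
# Each image ships its own ls; neither is the host's binary
docker run --rm ubuntu which ls   # e.g. /usr/bin/ls from the Ubuntu image
docker run --rm alpine which ls   # /bin/ls, BusyBox's ls from the Alpine image
```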
Containers run on a single kernel. In other words, all containers share a single kernel (the host OS's). Hypervisors, on the other hand, involve multiple kernels: each virtual machine runs on its own kernel.
And "docker run ubuntu" is just like to creating chroot environment.
In my opinion, your virtualisation objectives are the key. If you need the libraries, languages, etc. of an OS, then OS containers suit your needs. But if you need only the application as a component, it isn't necessary to use an OS as your base image. I think this article explains it clearly: Operating System Containers vs. Application Containers - RisingStack Engineering
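For instance, an application container might look like the sketch below (python:3.12-slim and app.py are my assumptions, not from the article): the base image provides just enough userland for one process, not a general-purpose OS.

```dockerfile
# Application container sketch: one process, minimal userland.
# app.py is a hypothetical single-file application.
FROM python:3.12-slim
COPY app.py /app.py
CMD ["python3", "/app.py"]
```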