I can’t seem to wrap my head around (Docker) containers and especially their maintenance.
As I understand it, containers contain a stripped-down OS that shares some resources with the host?
Or is it more like a closed-off part of the file system?
Anyway, when I have several containers running on a host system, do I need to keep them all updated separately? If so, how?
Or is it enough to update the host system, and not worry about the containers?
It’s built on the shipping container parallel. In order to transport objects, you abstract away anything not required for shipping the container.
- What’s inside the container doesn’t matter. The container has everything it needs to run because the ship/host is responsible for the overhead.
- Containers move. They’re set up to run by themselves, so you can move one from ship to ship. Your container doesn’t care whether it’s in the cloud or on a shipping vessel.
- As soon as you open a container your stuff is there. It’s very easy to onboard.
- Most importantly though, your shipping container isn’t a full boat by itself. It lives in a sandbox and only borrows the resources it needs, like the host’s CPU or the boat’s ability to float. This makes it easier to manage and stack because it’s more flexible.
Love the container analogy - immediately made so much sense to me! Also clarifies some misunderstandings I had.
I was mucking about with docker for a Plex server over the weekend and couldn’t figure out what exactly docker was doing. All I knew was that it’d make plex ‘sandboxed’, but I then realised it also had access to stuff outside the container.
It’s right there in their logo: a whale carrying shipping containers.
The whole container-on-a-ship idea is their entire premise. The ship (Docker) presents a unified application/OS layer to the host, so containers can work plug-and-play with the Docker base layer.
On a very specific note, I don’t run my Plex server in a container. I have a docker compose setup with 20+ apps, but Plex is on the bare-metal OS because it’s kinda finicky and doesn’t like NAS storage. You also need to use the Plex API to claim the server, as the container name changes. This is my stock Plex config if it helps:
```yaml
plex:
  image: lscr.io/linuxserver/plex:latest
  container_name: plex
  network_mode: host
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Etc/GMT
    - VERSION=docker
    - PLEX_CLAIM= #optional
  volumes:
    - /home/null/docker/plex/:/config
    - /x:/x
    - /y:/y
    - /z:/z
  restart: unless-stopped
```
Docker is essentially a security construct.
The idea is that the process inside the container, like say MySQL, Python or Django, runs as a process on your machine in such a way that it can only access parts of the system and the world that it’s explicitly been granted access to.
If you naively attempted this, you’d run into a big problem immediately. Namely that a program needs access to libraries. So you need to grant access to those. Libraries might be specific to the program, or they might be system libraries like libc.
One way is to explicitly enumerate each required library, but then you’d need to install those for each such process, which is inconvenient and a security nightmare.
Instead you package the libraries and the program together in a package called a Docker image.
To simplify things, at some point it’s easier to just start with a minimal set of known files, like say Alpine, Debian, or Fedora.
This basically means that you’re downloading a bunch of stuff to make the program run and thus is born the typical Docker image. If you look at the Python image, you’d see that it’s based on some other image. Similarly, a Django image is based on a Python image. It’s the FROM line in a Dockerfile.
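As an illustration (a minimal sketch, not anyone’s actual setup; the base image, paths, and commands are placeholders), a Dockerfile for a small Python/Django-style app might look like this:

```dockerfile
# Build on top of an existing image that already contains Python and its libraries
FROM python:3.12-slim

# Copy the application code into the image
WORKDIR /app
COPY . /app

# Install the app's own dependencies on top of what the base image provides
RUN pip install --no-cache-dir -r requirements.txt

# The process that will run inside the container
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```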
A container is such an image actually running the isolated process, again, like say MySQL.
Adding information to that process happens in a controlled way.
You can use an API that the process exposes, like say a MySQL client. You can also choose to include the data in the original image, or you can use a designated directory structure that’s visible to both you and the process; this is called a volume.
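For instance (a hedged example; the container name, password, and host path are made up), a volume can be attached when starting a container:

```sh
# Start a MySQL container with a host directory mounted as its data directory
docker run -d --name mysql-example \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /home/user/mysql-data:/var/lib/mysql \
  mysql:8.0
```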
To run something like a Django application would require that Python has access to the files, which can be included in the image by using a custom Dockerfile, or accessed by the container whilst it’s running, using a volume.
It gets more interesting when you have two programs needing access to the same files, like say nginx and python. You can create shared volumes to deal with this.
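A rough compose-style sketch of that shared-volume idea (service names, images, and paths are invented for illustration):

```yaml
services:
  app:
    image: python:3.12-slim              # placeholder; a real setup would build a custom image
    volumes:
      - static_files:/app/static         # the app writes its static files here
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - static_files:/usr/share/nginx/html:ro   # nginx serves the same files, read-only
volumes:
  static_files:                          # named volume shared by both containers
```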
Ultimately, Docker is about security and making it convenient to implement and use.
Source: I use Docker every day.
I also don’t get how to update docker containers or where to save config files. The idea is that the containers are stateless so they can be recreated whenever you like.
But there are no automatic updates?? You need a random “watchtower” container that does that.
Also, they are supposed to give easy security, but NGINX runs as root? There is a rootless variant.
No automatic updates is a feature not a bug.
(Not an expert, but I use it some.) Configs: most of the time you mount a directory that’s specifically set up for that container, and that directory is persistent on the host. When you spin up its replacement, it has the same mapping.
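A minimal sketch of what that mapping might look like in a compose file (the service name, image, and host path are hypothetical):

```yaml
services:
  myapp:
    image: example/myapp:latest                 # placeholder image
    volumes:
      - /home/user/docker/myapp/config:/config  # host directory survives container recreation
```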
Automatic updates - from what I remember, yeah, you can even just (depending on needed uptime) schedule a cron job to pull the new image, kill the existing container, and start up the new one, and if it doesn’t start you roll back to the previous image.
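A rough sketch of such a scheduled update (the path is hypothetical, and the rollback is left as a manual step):

```sh
#!/bin/sh
# Run from cron: pull the newest image and recreate the container if it changed.
cd /home/user/docker/myapp || exit 1

docker compose pull       # fetch the latest image for the tag in the compose file
docker compose up -d      # recreate the container only if the image changed

# If the new container fails to start, pin the previous tag in the compose
# file and run `docker compose up -d` again to roll back.
```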
Security - there used to be a debate over it (don’t remember the current SOTA); in theory both are pretty safe, but rootless gives more security with some tradeoffs.
Okay, mounting a directory for configs makes sense.
Container updates are not Docker’s concern. Docker is a proxy between your container and the OS. So if you want to keep your containers up to date, you need an external process. That can be achieved with a container orchestration tool like Kubernetes.
I would highly recommend using docker compose files. The services you are after usually have them in their installation instructions, on GitHub or Docker Hub (the latter tells you how many image pulls, so you can see what most people are using). Also check out https://awesome-docker-compose.com/apps and https://haxxnet.github.io/Compose-Examples/.
Then think of each compose file as a separate service that functions completely independently and can’t access any others unless you open a port to the host system (ports:) or share a common network (networks:). The container also cannot access or save files unless you mount volumes (volumes:). Personally I have a separate folder for each service, and always persist config, data and db files in a subfolder of that so it’s all in one place. It’s easier to migrate or save your info if something goes wrong, and it makes backups easier to manage.
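To make that concrete, here’s a hedged example of that folder-per-service layout (the service name, image, tag, and paths are placeholders):

```yaml
# ~/docker/myservice/compose.yaml
services:
  myservice:
    image: example/myservice:1.2.3   # placeholder image, pinned to a specific tag
    ports:
      - "8080:8080"                  # reachable from the host only because this port is opened
    networks:
      - backend                      # shared only with services on the same network
    volumes:
      - ./config:/config             # config persisted next to the compose file
      - ./data:/data                 # data persisted next to the compose file
networks:
  backend:
```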
In the compose file there is image: <repository/image>:<tag>. The tag could be ‘latest’ or a specific version you can look up on Docker Hub by searching for that image and looking at the tags that are near the ‘latest’ tag or have the same file size. For critical services use a specific version, and for non-critical ones use latest.
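For example (postgres is just a stand-in image here), the two styles look like this:

```yaml
# pinned: only updates when you bump the tag yourself
image: postgres:16.1
# floating: follows whatever is newest each time you pull
#image: postgres:latest
```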
To update a docker compose service, go to the folder and update the version of the image (e.g. :15.6 to :16.1), or if you’re using the ‘latest’ tag there’s no need to change anything. Then run “docker compose down && docker compose pull && docker compose up -d” to update the services to the latest image.
I use wud (https://github.com/getwud/wud) about once a week to highlight any available updates, then manually update them one by one; before doing so I look at the update notes to see if there are any breaking changes, and I test the services afterwards. I used to just use latest and blindly update, but have had occasional issues like bad updates or having to figure out breaking changes. If it goes wrong you can just go back to the old version while you investigate more.
Also, docker keeps old images forever unless you prune them, so look up ‘docker image prune’ or ‘docker system prune’ before trying them, as they’ll remove a lot.
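For reference, a quick sketch of what those commands clean up (run with care and check the --help output first):

```sh
docker image prune        # removes dangling (untagged) images only
docker image prune -a     # removes all images not used by any container
docker system prune       # removes stopped containers, unused networks, dangling images and build cache
```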