Interesting:
Docs is the result of a joint effort led by the French 🇫🇷🥖 (DINUM) and German 🇩🇪🥨 (ZenDiS) governments. We are always looking for new public partners (we are currently onboarding the Netherlands 🇳🇱🧀). Feel free to reach out if you are interested in using or contributing to Docs.
https://docs.numerique.gouv.fr/login/
#docs #opensource #eu #europe #france #germany #netherlands #collaboration #fosdem
Just checked the part about self-hosting. While it’s probably possible to handle things with a less heavy approach, their only “easy to use” options right now are to have a full-blown kubernetes cluster at hand or to run it locally from the source directory. That’s a bit much.
In the README there’s also instructions for Docker Compose, although it’s quite the compose file, with SIXTEEN containers defined. Not something I’d want to self-host.
It seems to contain development containers and containers for external services, so the compose file looks like it’s meant more for local dev.
What I do find weird is the choice of Django for the backend. Python is incredibly slow, and Django REST framework is even worse.
Honestly, k8s is super easy and very lightweight to run locally if you know the right tools. There are a few good options but I prefer k3d. I can install Docker/k3d and build a local cluster in maybe 2 minutes. It’s excellent for local dev, and even good for production in some niche scenarios.
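For reference, this is roughly all it takes. The cluster name and port mapping below are just placeholders, and the config schema is from memory, so check it against the k3d docs:

```yaml
# k3d-dev.yaml -- create the cluster with: k3d cluster create --config k3d-dev.yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev                # placeholder cluster name
servers: 1                 # one control-plane node
agents: 2                  # two worker nodes
ports:
  - port: 8080:80          # expose the built-in load balancer on localhost:8080
    nodeFilters:
      - loadbalancer
```

Tearing it down again (`k3d cluster delete dev`) takes a few seconds, which is what makes it so nice for local dev.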
I don’t like the approach of piling more things on top of even more things to achieve the same goal as the base, frankly speaking. A “local” kubernetes cluster serves no purpose other than incredible complexity for little to no gain over a mere docker-compose. And a small cluster would work equally well with docker swarm.
A service, even one made of multiple parts, should always be described that way. It’s easy to move “up” the stack of complexity if you so desire. Having “a working k8s cluster with helm” as the base requirement sounds insane to me.
Honestly, a lot of the time I don’t understand why businesses use k8s.
At my company especially, we know almost exactly what our traffic will look like from 9am-5pm. We don’t really need flexible scaling, yet we still use it because the technology is hyped. Similar to cloud, we certainly don’t need to be spending as much as we do, but since everyone else is on or migrating to the cloud, we are as well.
The “problem” with k8s is not that it’s abstract-y (it’s not inherently any more abstract than docker), it’s that it’s very complex and enterprise-y.
The need for such a complex orchestration layer is not necessarily obvious until you’ve worked on a complex infra setup that wasn’t deployed with kubernetes. Believe me: once you’ve seen the depths of hell that are hundreds of separately configured customer setups using thousands of lines of ansible playbooks, all using ad-hoc systems for creating containers/VMs, with even more ad-hoc and hacked-together development and staging environments, k8s suddenly starts looking very appetizing. Instead of an abominable spaghetti of bash scripts, playbooks, and random documentation, you get one common (albeit complex) set of tools, understood by every professional, which manages your application deployment & configuration, redundancy, software upgrades, firewall configs, etc.
A small self-hosted production kubernetes cluster doesn’t have to be hard to operate or significantly more expensive than bare metal. You can buy 3U of rack space, plop in 3 semi-large servers (think 128 GB of RAM plus a few TB of SSD RAID), install Rancher and Longhorn, and now you’ve got a prod cluster large enough for nearly every workload. If you ever need to upgrade beyond that, it means you have so many customers that hiring a k8s administrator will be a no-brainer.
Or you can buy minutes from AWS because CapEx is the absolute devil and instead you pay several times as much in OpEx to make it someone else’s problem. But if you’re doing that then you’re not comparing against “installing things the old-fashioned way”.
Thanks for the response!
I personally haven’t rolled a k8s or k3s cluster, so it’s always felt a bit abstract to me. I probably should though, to demystify it for myself in my work environment.
Complexity is definitely what I’ve noticed when I see my devops team PR changes into the ingress directories.
I guess the abstraction issue I see, which ties in to the meme I shared above, is that sometimes around deploys we get blips of 503/504s and we can’t seem to track them down. Is it the load balancer? Ingress? Kong? The fact that there are so many layers makes infra issues rough to debug.
Kubernetes is not really meant primarily for scaling. Even kubernetes clusters need extras to support it, for example autoscaling groups on the nodes or horizontal pod autoscalers, and those are minor features.
The benefits are pooling computing resources and creating effectively a private cloud. Easy replication of applications in case of hardware failure. Single language to deploy applications, network controls, etc.
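To illustrate the “single language” point, this is all it takes to tell the cluster to keep three copies of an app running and reschedule them if a node dies. The name and image below are made up for the example:

```yaml
# deployment.yaml -- apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docs-backend                        # hypothetical app name
spec:
  replicas: 3                               # keep 3 pods running at all times
  selector:
    matchLabels:
      app: docs-backend
  template:
    metadata:
      labels:
        app: docs-backend
    spec:
      containers:
        - name: backend
          image: example.org/docs-backend:1.0   # placeholder image
          ports:
            - containerPort: 8000
```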
Yea I’m not a fan of helm either. In fact, I avoid charts when possible. But kustomize is great.
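What I mean by “great”: one folder of plain manifests as a base, plus a thin overlay per environment, no templating. Names below are made up:

```yaml
# overlays/prod/kustomization.yaml -- apply with: kubectl apply -k overlays/prod
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # the shared, unmodified manifests
namespace: docs-prod           # hypothetical namespace for this environment
images:
  - name: example.org/docs-backend
    newTag: "1.1"              # only the image tag differs per environment
```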
I feel the same way about docker compose. If it wasn’t already obvious, I’m biased in favor of k8s. I like and prefer that interface. But that’s just preference. If you like docker compose, great!
There’s one point where I do disagree however. There are scenarios where a local k8s cluster has a good and clear purpose. If your production environment runs on k8s, then it’s best to mirror that locally as much as possible. In fact, there are many apps that even require a k8s api to run. Plus, being able to destroy and rebuild your entire k8s cluster in 30s is wonderful for local testing.
Edit: typos
I won’t argue the ups and downs of each technology, but I recently looked into docker swarm and it was all I expected kubernetes to be, without the hassle. And I could also get a full cluster with services restored from scratch in 30s. But I am obviously biased towards it, too :)
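For anyone curious, the compose file you already have mostly just works: you add a deploy section and point docker stack at it. The service and image below are placeholders:

```yaml
# docker-compose.yml -- run "docker swarm init" once, then:
#   docker stack deploy -c docker-compose.yml docs
version: "3.8"
services:
  web:
    image: example.org/docs-frontend:1.0   # placeholder image
    ports:
      - "80:3000"
    deploy:                                # swarm-specific deployment settings
      replicas: 2                          # run two copies across the cluster
      restart_policy:
        condition: on-failure
```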
Did not realize swarm was still a thing, not trying to be offensive here.
My best find was using traefik as a reverse proxy in docker (compose). It is easily configurable through container labels and pulls resource definitions straight from docker. It is awesome!
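Roughly what that looks like, in case anyone wants to try it. The hostname and the demo service are just examples:

```yaml
# docker-compose.yml -- traefik watches the docker socket and builds its routing
# table from container labels, so each service carries its own routing config
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false   # only route labelled containers
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  whoami:
    image: traefik/whoami                           # tiny demo backend
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.localhost`)   # hypothetical hostname
      - traefik.http.routers.whoami.entrypoints=web
```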
k8s is overkill for a lot of homelabs. Using docker compose is a fraction of that complexity
Yes if single node, kinda if 2-3 nodes, no for anything above that IMHO.
There are many reasons to use k8s. Managing multiple nodes is one good one. But more importantly, k8s gives you an api-driven runtime environment. It’s really not comparable to docker compose.
Seconding k3d (and, by extension, k3s). If you’re in the market for a more upstream-compliant clustering solution (k3s uses SQLite instead of etcd, iirc), RKE2 is also a great choice.
Please develop this self-hosted version using Sandstorm.
It makes hosting a breeze with one-click installation.