That’s a question I’ve always asked myself.
Currently, I’m running Debian on both of my servers, but I’m considering switching to Fedora CoreOS, since I already use Fedora Atomic on my desktop and feel very comfortable with it.
There’s always the mentality that using a “stable” host OS is better, for the following reasons:
- Things not changing means less maintenance, and nothing will break compatibility all of a sudden.
- Less chance of something breaking.
- Services are up to date anyway, since they are usually containerized (e.g. Docker).
- And, for Debian especially, the availability of services and documentation is among the best, since it’s THE server OS.
My question is: how many of these pro-arguments will I lose when I switch to something less stable (more frequent updates), in my case Fedora Atomic?
My pro-arguments in general for it would be:
- The host OS image is very minimal, and I think most core packages should run very reliably. In the worst case, if something breaks, I can always roll back. Even the desktop OS (Silverblue), which is “bloated” compared to the server image, has been running extremely reliably and pretty much bug-free in the past.
- I can always use Podman/Toolbx, for example, to run services that were made for Debian, and for everything else there’s Docker and more, so software availability shouldn’t be an issue (see the sketch after this list).
- I feel relatively comfortable using containers, and think especially the security benefits sound promising.
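To illustrate what I mean by using containers for Debian-made software, here’s a rough sketch (assuming Podman and Toolbx are on the host; the toolbx-images Debian image and the container name are just illustrative):

```sh
# One-off Debian container to test a service built for Debian
podman run --rm -it debian:stable bash

# Or a persistent Debian toolbox for interactive work
toolbox create --image quay.io/toolbx-images/debian-toolbox:12 debian-box
toolbox enter debian-box
```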
Cons:
- I don’t have much experience. Everything I do related to my servers, e.g. getting a new service running, troubleshooting, etc., is hard for me.
- Because of that, workarounds (e.g. using Toolbx instead of installing something directly on the host) often don’t come to mind.
- Distros other than Debian (and a few others) aren’t the standard, and therefore documentation and software availability aren’t as good.
- Containerization adds another layer of abstraction. For example, if my webcam doesn’t work, is it because of a missing driver, Docker, the service, the cable not being plugged in, or something entirely different? Troubleshooting would get harder that way.
On my “proper” server, I mainly use Nextcloud, installed as a Docker image.
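Roughly like this (a sketch from memory, not my exact setup; the port and volume names are illustrative):

```sh
# Official Nextcloud image, data kept in a named volume
docker run -d \
  --name nextcloud \
  --restart unless-stopped \
  -p 8080:80 \
  -v nextcloud:/var/www/html \
  nextcloud
```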
My Raspberry Pi, on the other hand, is only used as a print server, running OctoPrint for my 3D printer. I installed OctoPrint there in the form of OctoPi, a Raspbian fork distro with OctoPrint pre-installed, which is the recommended way.
With my “proper” server, I’m not really unhappy with Debian. It works and the server is running 24/7. I don’t plan to change it for the time being.
Regarding the Raspi especially, it looks quite a bit different. I think I will just try it and see if I like it.
Why?
- It only runs rarely. Most of the time the device is powered off; I only power it on a few times per month when I want to print something. This is actually a good fit, since the OS needs to reboot to apply updates, and it updates itself automatically, so I don’t have to SSH into it from time to time, which reduces maintenance (see the sketch after this list).
- And, last but not least, I’ve lost my password. I can’t log in and can’t update anymore, so I have to reinstall anyway.
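For context, my understanding of how updates work on Fedora CoreOS (a sketch; Zincati normally drives this automatically, and the commands below are just for inspecting or undoing it):

```sh
rpm-ostree status     # show the current and pending deployments
rpm-ostree upgrade    # manually fetch and stage the latest image
systemctl reboot      # updates are applied on the next boot
rpm-ostree rollback   # boot the previous deployment if something broke
```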
What is your opinion about that?
Podman runs without a daemon, which for some reason makes `podman compose` a bit of a tricky replacement for `docker compose`.
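One workaround I’m aware of (a sketch, assuming a reasonably recent Podman): enable Podman’s Docker-compatible API socket and point the real `docker compose` at it:

```sh
# Rootless Podman API socket, speaks the Docker API
systemctl --user enable --now podman.socket

# Let docker compose (and other Docker clients) talk to Podman instead
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d
```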
But for a single purpose, why not just install Nextcloud as a system package via layering? I think that should be pretty secure through SELinux and would be the easiest choice.
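Layering would look roughly like this (a sketch; whether a nextcloud package actually exists in the Fedora repos is an assumption I haven’t checked):

```sh
# Layer the package onto the immutable host image; applied on the next boot
sudo rpm-ostree install nextcloud
systemctl reboot
```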
As for other problems with CoreOS: it’s not that hard, and I would honestly just try it. Maybe give secureblue server a try; it should be more similar to your desktop than CoreOS (which seems to be made for wide deployments).