Three HP ProLiant servers running a Proxmox cluster. Each box has a VM for Portainer, as well as a mishmash of VMs running Home Assistant OS, OpenWrt, Ubuntu, Windows, and Debian, along with a Windows file server that connects over iSCSI to four cheap NAS boxes running Ubuntu LTS, with a combined 20 mismatched hard drives, and borgs them together with Storage Spaces.
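Once the iSCSI disks show up in Windows as local physical disks, the pooling step looks roughly like this in PowerShell (a sketch only; the pool name, volume name, and parity layout are placeholder choices, not necessarily the setup above):

```powershell
# Sketch: pool every disk that is eligible for pooling, then carve out
# one big resilient volume on top of the pool.
New-StoragePool -FriendlyName "NasPool" `
    -StorageSubsystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

New-VirtualDisk -StoragePoolFriendlyName "NasPool" -FriendlyName "Data" `
    -ResiliencySettingName Parity -UseMaximumSize
```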
Debian. It is rock solid. If software doesn't support Debian, chances are it supports something Debian-based. You never have to worry about an update breaking your computer. It is the perfect "it just works" distro for a server.
Ubuntu LTS, with all my services in Docker containers.
I know Ubuntu gets a lot of (deserved) hate for some of the shit Canonical pulls, but for now, I like Ubuntu and it works for me.
When I rebuilt my server at the beginning of the month, I was gonna jump to Debian, but my god the Debian website is obtuse. After looking at the site and trying to determine what to download to get Debian with non-free (I’m unfortunately working with an NVIDIA card), I decided to go with Ubuntu. I needed a smooth rebuild process and with Ubuntu I know exactly what I’ll get when I download the LTS server.
I picked it for ZFS. A lot of the ways things work seem cleaner and simpler than on Linux, and ZFS is awesome with the copy-on-write snapshots, filesystem compression, and all that. I like rc.conf, pf is way nicer than iptables, and when you upgrade it even makes a boot environment snapshot automatically so you can roll back.
Sometimes I do need to patch and compile things myself because people seem not to know FreeBSD exists, but that's really the only downside.
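The bits mentioned above, in practice (dataset and snapshot names are just examples):

```sh
# Copy-on-write snapshots are instant and cheap, and trivial to roll back:
zfs snapshot zroot/usr/home@before-change
zfs rollback zroot/usr/home@before-change

# Transparent compression, toggled per dataset:
zfs set compression=lz4 zroot/usr/home

# Boot environments: snapshot the whole OS before an upgrade and pick the
# old one from the loader menu if things go sideways:
bectl create pre-upgrade
bectl list
```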
I went for a much simpler approach lately as I downscaled my hardware for efficiency.
I run NixOS on the bare metal. It makes system management declarative, just like Kubernetes would. On top of that, I run libvirt as the hypervisor. In other scenarios I'd use tinyvmm and cloud-hypervisor, but I found QEMU way better for the variety of homelab workloads, and libvirt is pretty straightforward.
Some VMs have PCI passthrough, e.g. my RouterOS VM gets a bunch of NICs directly, and some have various funny network topologies. Libvirt used to be a pain in that regard, but it's actually fine with NixOS because you manage both sides of the networking stack in declarative configuration.
I run NixOS on the VMs too (now mostly for the sake of easy upgrades), and I have a bit of a split between running services natively (systemd is very good at "containerizing" things nowadays) and using Docker (mostly out of laziness, e.g. ElastiFlow was easier to deploy this way). Finally, I have a single Dockerized Ubuntu that's more like a VM (as in, I never had a Dockerfile for it; it's fully stateful) running the Matter home automation bits, because I gave up on properly containerizing the Matter Python stack and went for the easy way out.
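To give a flavor of the declarative side, a minimal sketch (these are real NixOS options, but the bridge interface and PCI ID are placeholders, not actual values):

```nix
{ config, ... }:
{
  # libvirt/QEMU as the hypervisor, managed by the same config as the host:
  virtualisation.libvirtd.enable = true;

  # Host side of the guest networking, declared next to everything else:
  networking.bridges.br0.interfaces = [ "enp3s0" ];

  # Reserve NICs for PCI passthrough by binding them to vfio-pci at boot:
  boot.kernelParams = [ "intel_iommu=on" ];
  boot.extraModprobeConfig = "options vfio-pci ids=8086:10fb";
}
```

Rolling back is then just `nixos-rebuild switch --rollback`, or picking the previous generation in the boot menu.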
Now, a word about alternatives.
I used to run Ubuntu. No more. Upgrading the OS is always a huge pain even if everything is in Docker. I want my OS to be managed in a config file and to be able to easily roll back to the previous state.
I used to run k3s, but even though it is much thinner than k8s, it is still very much RAM-hungry and I just don't want to pay for that. Besides, complex topologies are often non-trivial due to how its networking works, and Multus is a world of pain.
I used to run different hypervisors for the VMs (KubeVirt, tinyvmm, a bunch of others). I went back to libvirt mostly because it's straightforward for tuning the very specific QEMU bits I care about in the homelab. I overprovision CPU somewhat, so I want my quotas set up extremely precisely, sacrificing the right workloads.
I've got a homemade NAS running Unraid and my arr suite/Jellyfin/qBittorrent, and an Orange Pi running Orange Pi OS (a flavor of Ubuntu, I think?) which handles Home Assistant and associated containers.
My 3 hosts all run Proxmox. Publicly available services run in VMs, usually running Ubuntu. Private services are usually Docker containers running directly on the host and connected straight to my Tailscale network.
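One common way to wire that up (not necessarily this poster's exact setup) is the official tailscale/tailscale image as a sidecar, with the service container sharing its network namespace; the names and auth key here are placeholders:

```sh
# Sidecar that joins the tailnet; the state volume keeps the node identity
# across restarts.
docker run -d --name=ts-sidecar \
  --cap-add=NET_ADMIN --device=/dev/net/tun \
  -v ts-state:/var/lib/tailscale \
  -e TS_AUTHKEY=tskey-auth-XXXXX \
  -e TS_STATE_DIR=/var/lib/tailscale \
  tailscale/tailscale

# The actual service rides in the sidecar's network namespace, so it is
# reachable only over the tailnet:
docker run -d --name=myservice --network=container:ts-sidecar myimage
```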
Had various issues with Debian Bookworm: I couldn't install the “server” metapackage on one server, which left me with none of the basics except “apt”, and I kept losing IPv6 connections. That made me switch to Ubuntu.
I am running Ubuntu server and I am... satisfied with it. It does what it should, no problems, nothing to worry about, stable AF (as any mature distro?).
But lately I've been thinking about switching to Fedora Server (I need to reinstall one way or another, because the system drive ran out of space; it was a small drive). I'm using Fedora on my work machine and I really like it, so I thought I could give Fedora on my server a try.
Proxmox on physical servers hosting a variety of vanilla Debian installations. I have a physical router running pfSense, as well as two HP MicroServers running OpenMediaVault.
TrueNAS SCALE with TrueCharts for my main server.
I'm considering rolling out a compute-only server with openSUSE MicroOS, to leave TrueNAS for storage only.
I like the openSUSE philosophy, and MicroOS with Cockpit seems solid in my tests. I might start with some old laptops and deploy the LDAP server there to test it and remove that responsibility from TrueNAS.
Ubuntu, but I'm very strongly considering switching to Debian or Rocky. Ubuntu has a lot of really unnecessary cruft that I think I'd be better off without.
I use Fedora on my laptop now, so going the RHEL/Rocky/Alma route for my servers is really tempting. Especially as I'm also considering switching to Podman.
Ubuntu LTS, because that's what I was most comfortable with at the time. Now I'd really like to switch over to Debian, but I'm not sure I can be bothered until I really have to; everything is working well at the moment. It's running in Proxmox.
One with Arch that runs game servers for my friends, and another that runs Proxmox with containers running either Arch or Debian, depending on what's in them.
Proxmox with the latest Debian, I guess, but all virtual machines are on Ubuntu 23.10. It's just the easiest to install things on and has a semi-recent kernel, even though I would never use it on the desktop.
I actually wanted to run Arch in the VMs, but I couldn't find an image with the latest Arch. The latest I found had kernel 6.3.
Unraid: most services I self-host run in Docker here. Things like Plex/Jellyfin, Nextcloud, and the UniFi controller.
Proxmox: used to virtualize my pfSense after I moved away from my UniFi USG router. A few headless Linux and Debian virtual machines run here as well. Had Pi-hole virtualized here too, but switched over to pfBlockerNG to consolidate.
TrueNAS: all my media shares. I also sync my desktop environments here to have a consistent Windows desktop across my desktops and laptops.
Home Assistant: running on a Home Assistant Yellow. It runs a few add-on services.
Arch for stuff I have physical access to. Nothing's ever gone wrong, so it's worth it for the immediate updates and consistency with my other systems. For VPS I use Debian though, occasionally the unstable/Sid branch if I really need the latest updates. There are almost always Debian images available on a VPS.
Fedora CoreOS (FCOS) VMs on XCP-ng, with TrueNAS for persistent storage. With FCOS, VM configurations can stay version-controlled and be deployed using OpenTofu (Terraform) and Butane/Ignition.
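For reference, a Butane file is just declarative YAML that compiles into the Ignition JSON the VM consumes on first boot. A minimal sketch (the user and key are placeholders):

```yaml
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder
```

Compile it with `butane --pretty --strict config.bu > config.ign` and hand the resulting .ign to the VM through OpenTofu.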
Boxes that physically live in my home are mostly Manjaro. They’re also not externally accessible from the internet.
Anything in the cloud I standardize on Debian. Two distros and consistency make maintenance much easier.
Anything in a container runs whatever it was built on, because porting a Docker Compose file from, say, Alpine to anything else is just not worth the time and energy.
I tried to use Fedora Server, or was it Cloud? Idk, but I tried Fedora as a server and wanted to set up a VM, but got confused. Storage pools scared me away. I'll try to learn it when I have the time.
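If it was the usual libvirt/virt-manager stack (an assumption, since the poster doesn't say), the storage pool part can be as small as a directory-backed default pool:

```sh
# Define a directory-backed pool, create its directory, start it, and
# have it come up on boot:
virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-build default
virsh pool-start default
virsh pool-autostart default
```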