I set up a new home server recently using containerized services, and I wanted to share what I learned. Nothing here is revolutionary, but this is the type of resource I wish I had when I started.
I'm open to feedback on what I could have done better!
That's a great point about Ansible. Compose automates most of the setup, but automating all of it would be amazing. I'll try it with the next service I set up, and if it goes well, I'll document it. Thanks for the suggestion!
Do you do some sort of versioning/snapshotting of your services? I'm on the compose route as well, and have one btrfs subvolume per service that holds the compose.yml and all bind-mounted folders for persistent data. That subvolume then gets snapshotted regularly by snapper.
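Roughly, the layout looks like this (a sketch; paths and config names are examples, not my exact setup):

```sh
# One subvolume per service; compose.yml and all bind mounts live inside it.
btrfs subvolume create /srv/immich
cd /srv/immich
# compose.yml plus e.g. ./data and ./config as bind-mount targets go here

# Let snapper manage snapshots of that subvolume
snapper -c immich create-config /srv/immich
snapper -c immich create --description "before upgrade"
```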
What astounds me a bit is that nobody seems to version the containers they are running. Without that, rolling back after something breaks becomes a game of guessing the correct container version. I started building a tool that snapshots a service, then rewrites the image: line in compose.yml to whatever the current :latest tag resolves to. Surprisingly, there doesn't seem to be an off-the-shelf solution for that...
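The core of it is just digest pinning, something like this (a sketch of the idea, not the actual tool; assumes Docker and GNU sed, and that the image name is illustrative):

```sh
#!/bin/sh
# Sketch: pin whatever :latest currently resolves to, by digest.
IMAGE="nginx"   # illustrative

docker pull "$IMAGE:latest"
# RepoDigests gives something like nginx@sha256:abc123...
DIGEST=$(docker image inspect --format '{{index .RepoDigests 0}}' "$IMAGE:latest")

# Rewrite the image: line in compose.yml to the pinned digest
sed -i "s|image: $IMAGE:latest|image: $DIGEST|" compose.yml
```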
How do you do that? I'm building a similar system now that automatically updates my containers. I've played around with the API and I can see which version tags are attached to the latest sha256, but I can't find a way to automatically tell which version it actually is, especially when the same SHA is linked to multiple tags.
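For context, this is roughly how far I've gotten (Docker Hub's v2 tags API; the image name and the tag-level digest field reflect my assumptions about how the API responds):

```sh
# Resolve :latest to its digest, then list every Docker Hub tag
# carrying the same digest. "library/nginx" is just an example.
DIGEST=$(docker image inspect --format '{{index .RepoDigests 0}}' nginx:latest | cut -d@ -f2)

curl -s "https://hub.docker.com/v2/repositories/library/nginx/tags?page_size=100" \
  | jq -r --arg d "$DIGEST" '.results[] | select(.digest == $d) | .name'
# Prints e.g. "latest", "1.27", "1.27.3" -- all pointing at the same SHA,
# with nothing marking which one is the canonical version.
```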
I don't do a great job of this, but take Immich for example. There, I specify the version in the compose.yml (technically, the version is in the .env file and substituted into the compose.yml). At that point, updating Immich is a matter of updating the version number and restarting the service.
These configuration files are all managed with git, so when I do these updates, I create a new commit. I just checked, and I have Forgejo pinned to a specific version in its compose.yml as well. But unfortunately, the other services are referencing :latest. I'm going to go back and pin them all :)
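Concretely, an update then looks something like this (the version number is made up, and I'm sketching the flow rather than quoting my exact files):

```sh
# compose.yml references the version from .env:
#   image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
# (that image path matches Immich's stock compose file)

# 1. Bump the pinned version in .env
sed -i 's/^IMMICH_VERSION=.*/IMMICH_VERSION=v1.107.0/' .env

# 2. Pull and restart (same idea with docker compose)
podman-compose pull && podman-compose up -d

# 3. Commit the change so rolling back is just a checkout away
git add .env && git commit -m "immich: bump to v1.107.0"
```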
I built a small tool that does that for me now and published it: https://feddit.de/post/2909288. Maybe you'll find it useful, though no guarantee it doesn't break something :D
I recently did something very similar, though I used Docker. Instead of a VPN, have you seen Cloudflare Tunnels for the future? They're awesome: you don't have to expose any ports on your network, and you can authenticate users with SSO. Also, not sure if it's possible with Podman, but I was able to get my Caddy set up so that the Caddyfile is created and maintained automagically using a plugin.
Not OP, but I've heard nothing but good things about Cloudflare Tunnels. For me, though, they have two major drawbacks. The first is that you can't use them for a self-hosted media server such as Jellyfin, as that violates their terms of service. The second is that you have to trust them with all your traffic. I have no reason to think they would do anything nefarious, but I'm at the point in my threat-model journey where the less I put in any corporation's hands, the better. Just my two cents.
I find it disappointing that everyone's first suggestion in the selfhosted community is Cloudflare. It seems to run counter to the spirit of selfhosting to hand off the last mile to a giant corporation that already sits in front of a huge share of the web.
Most of what Cloudflare does—if it's necessary at all—can be replicated with a cheap (or free) VPS sitting in front of your network on a VPN, and the remainder doesn't matter for most selfhosted use cases.
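For the curious, the VPS pattern is just WireGuard plus two NAT rules; everything below (addresses, interface names) is illustrative:

```sh
# /etc/wireguard/wg0.conf on the VPS (illustrative addresses):
# [Interface]
# Address = 10.0.0.1/24
# PrivateKey = <vps-key>
# ListenPort = 51820
# [Peer]                      # the home server
# PublicKey = <home-key>
# AllowedIPs = 10.0.0.2/32

# Forward public 80/443 on the VPS to the home peer over the tunnel
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 \
  -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```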
I am familiar with tunnels in general, though I haven't seen Cloudflare's offering. I'm with the sibling comment that it's another external dependency when my goal is to self-host more. But that's a tool I'll keep in mind!
I'll have to check out Caddy plugins! One I saw called the docker command to discover exposed ports, meaning it depends on Docker explicitly. I'll see what I can dig up wrt Podman. This sounds very interesting. Thanks!
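If it's the plugin I think it is (lucaslorentz/caddy-docker-proxy), the config lives in container labels, roughly like this (untested sketch adapted from its README; whether it works against Podman's socket is still an open question for me):

```yaml
# Sketch of caddy-docker-proxy usage; names and domains are illustrative.
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # the plugin watches this socket to discover services and their ports
      - /var/run/docker.sock:/var/run/docker.sock
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```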