I woke up this morning to a text from my ISP: "There is an outage in your area, we are working to resolve the issue."
I laugh. This is what I live for! Almost all of my services are self-hosted; I'm barely going to notice the difference!
Wrong.
When the internet went out, the power also went out for a few seconds. Four small computers host all of my services. Of those, one shut down and three rebooted. Of the three that ugly-rebooted, some services came back online and some didn't.
30 minutes later, the ISP sends out a text that service is back online.
2 hours later, I'm still finding down services on my network.
Moral of the story: A UPS has moved to the top of the shopping list! Any suggestions??
When you are bored, back up a VM, then hard kill it and see if it manages to restart properly (see the sketch below).
Software should be able to recover from that.
If it doesn't, troubleshoot.
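If you're on libvirt/KVM, for example (the OP mentions ESXi, so treat this purely as an illustration; the VM name is a placeholder), the test looks something like this:

```bash
# Hypothetical libvirt/KVM example; "myvm" is a placeholder domain name.
# 'virsh destroy' is an immediate hard power-off (no guest shutdown),
# which is exactly the ugly stop you want to test recovery from.
virsh destroy myvm

# Bring it back and watch whether its services recover on their own.
virsh start myvm

# Optionally have the hypervisor start it automatically when the host boots.
virsh autostart myvm
```

ESXi has its own equivalents (power off vs. guest shutdown, plus per-VM autostart settings).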
When I built my home server, this is what I did with all of the VMs. I learned how to change the startup delay time in ESXi and ensured everything came back online with no issues from a cold boot.
IMHO you're optimizing for the wrong thing. 100% availability is not something that's attainable for a self-hoster without driving yourself crazy.
Like the other comment suggested, I'd rather invest time into having machines and services come back up smoothly after reboots.
That being said, a UPS may be relevant to your setup in other ways. For example, it can allow a parity RAID array to shut down cleanly and reduce the risk of write holes. But that's just one example, and a UPS is just one solution for that (others being ZFS, or non-parity RAID, or SAS/SATA controller cards with built-in battery and/or hardware RAID support, etc.)
I agree that 99.999% uptime is a pipe dream for most home labs, but I personally think a UPS is worth it, if only to give yourself the option to gracefully shut down systems in the event of a power outage.
Eventually, I'll get a working script that checks the battery backup for mains power loss and handles the graceful shutdown for me, but right now that extra 10-15 minutes of battery backup is enough for a manual effort.
Some of the nicer UPS models have little servers built in for remote management, and they also communicate with the machines they power via USB, serial, or an Emergency Power Off (EPO) port.
You shouldn't have to write a script that polls battery status, the UPS should tell you. Be told, don't ask.
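For example, Network UPS Tools (NUT) will watch the UPS over USB and trigger the shutdown for you when the battery runs low. A rough sketch for a single machine, assuming a USB-connected UPS; the UPS name, user, and password below are placeholders:

```bash
# Rough Network UPS Tools (NUT) sketch for one USB-connected UPS.
# The UPS name "homeups", the user "monuser", and the password are placeholders.
sudo apt install nut

# /etc/nut/nut.conf - everything runs on this one box
#   MODE=standalone

# /etc/nut/ups.conf - tell the driver about the UPS
#   [homeups]
#     driver = usbhid-ups
#     port = auto

# /etc/nut/upsd.users - a user upsmon can log in as
#   [monuser]
#     password = changeme
#     upsmon master

# /etc/nut/upsmon.conf - monitor the UPS, shut down when the battery runs low
#   MONITOR homeups@localhost 1 monuser changeme master
#   SHUTDOWNCMD "/sbin/shutdown -h +0"

# And you can still ask on demand:
upsc homeups@localhost
```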
The problem is that most self-hosters would be at work and unavailable to do a graceful shutdown even if they had a UPS, unless they work fully from home with zero meetings. If they are sleeping or at work (more than 70% of the day for many or most people), it is useless without graceful shutdown scripts.
I just don't worry about it and go through the 10-minute startup and verification process if anything happens. It's easier to use an uptime monitor like Uptime Kuma and a log checker like Dozzle for all of your services, available locally and remotely, and see if anything failed to come back up.
Did the services fail to come back due to the bad reboot, or would they have failed to come back on a clean reboot? I ugly reboot my stuff all the time, and unless the hardware fails, I can be pretty sure it's all going to come back. Getting your stuff to survive a reboot is probably a better spend of effort.
I didn't mean to imply that services actually broke, only that they didn't come back after a reboot. A clean reboot may have caused some of the same issues because I'm learning as I go. Some services are restarted by systemctl, some by cron, some... manually. This is certainly a wake-up call that I need to standardize and simplify the way the services are started.
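For the ones currently started by cron or by hand, a minimal systemd unit is one way to standardize. This is just a sketch; "myapp" and the paths are placeholders for your own service:

```bash
# Sketch of moving a hand-started service under systemd.
# "myapp" and the ExecStart path are placeholders for your own service.
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My self-hosted app
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.yml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now myapp   # starts it now and on every boot
```

After the next reboot, `systemctl status myapp` tells you immediately whether it came back.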
I reboot every box monthly to flush out such issues. It's not perfect, since it won't catch things like circular dependencies or clusters failing to start if every member is down, but it gets lots of stuff.
Yeah an unclean reboot shouldn’t break anything as long as it wasn’t doing anything when it went down. I’ve never had any issues when I have to crash a computer unless it was stuck doing an update.
A lot of stuff on there isn't worth buying either, like anything from APC. If you want good stuff, just get Eaton.
But also you have to understand that UPSes aren't set and forget. The batteries need replacement every 3-5 years. And they're not for extended outages, they're mostly to bridge the gap between mains power going out and a generator starting up.
Personally I just have everything running from docker-compose, so I run one command and everything not running gets started. I don't worry about stuff being down for a bit.
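For what it's worth, the two pieces that make that work are a restart policy on each container and the fact that `docker compose up -d` is idempotent. A rough sketch, with the service name and image as placeholders:

```bash
# Each service's compose file gets a restart policy so it comes back after
# a reboot or crash ("myapp" and its image are placeholders):
#
#   services:
#     myapp:
#       image: myapp:latest
#       restart: unless-stopped
#
# After an outage, one idempotent command starts whatever isn't running:
docker compose up -d
```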
What's wrong with APC? I've had one for 6-7 years. I've changed the battery once and I think I'll have to change it again this year. I haven't had any problems with it.
A general tip on buying UPSes: look for second hand ones - people often don't realise you can just replace the battery in them (or can't be bothered) so you can get fancier/larger ones very cheap.
Why? If the power has gone out there are very few situations (I can't actually think of any except brownouts or other transient power loss) where it would be useful to power my server for much longer than it takes to shut down safely.
Though I wonder if, even besides adding an uninterruptible power supply (UPS) (writing the acronym out for anyone else who would've had to Google it), this might be a useful exercise in recovering from outages in general. This is coming from someone who hasn't actually done any self-hosting of my own, but you saying you're still finding down services reminds me of when I learned the benefit of testing system backups as part of making them.
I was lucky in that I didn't have any data loss, but restoring from my backup took a lot more manual work than I'd anticipated, and it came at an awkward time. Since then, my restoring from backup process is way more streamlined.
My favorite part about using an old laptop as a 24/7/365 plugged-in server is the anticipation of when the lithium battery will explode from overcharging.
"overcharging" doesn't exist. There are two circuits preventing the battery from being charged beyond 100%: the usual battery controller, and normally another protection circuit in the battery cell. Sitting at 100% and being warm all the time is enough for a significant hit on the cell's longetivity though. An easy measure that is possible on many laptops (like thinkpads) is to set a threshold where to stop charging at. Ideal for longetivity is around 60%. Also ensure good cooling.
Sorry for being pedantic, but as an electricial engineer it annoys me that there's more wrong information about li-po/-ion batteries, chargers and even usb wall warts and usb power delivery than there's correct information.
I'm a big fan of running home stuff on old laptops for this reason. Most UPSs give you a few minutes to shut down, laptops (depending on what you run) could give you plenty of extra run time and plenty of margin for a shutdown contingency.
Will they do power conditioning? My modem is such a sensitive baby that I cannot plug anything else in next to it or it starts dropping packets. Would a UPS help with that? Unfortunately I cannot replace the modem; that's the only one the ISP will give me.
Yes. An online/double-conversion UPS will be the most effective, because it actually runs off the battery the whole time, so it's disconnected from any line quality issues.
A line-interactive UPS is cheaper, but doesn't do full power conditioning.
An offline UPS doesn't do any at all, only comes online when power drops.
This is why I gave up self hosting. It's great when it works but it just becomes an expensive second job. I still have Plex/Jellyfin etc but for emails and password vaults I just pay for external services.
I like to host as many services as possible and I’m fine with it being a second job at times since this is my main hobby, but I actually agree with you on your examples. The three things I won’t self-host are:
Emails - I am not willing to put in the effort on this. Plus, my ISP blocks those ports so I’d already be into using a VPS even if I wanted to host this. I’d rather just pay someone else, like Proton.
Password manager - I actually did self-host Bitwarden for a long time, but after thinking about it for a while, I decided to take the pay-someone-else approach here too. I'm pretty sure I'm doing everything correctly, but I'm not a security expert. I'd rather be 100% sure my passwords are in safe hands than 95% sure that I'm doing everything right on this one.
Lemmy - I’ve heard about (luckily never seen) CSAM attacks on Lemmy/Kbin and will not risk that kind of content being downloaded because I’m federated with an instance dealing with those attacks. I’m happy to throw a couple bucks at lemmy.world’s Patreon and let them handle that.
I self-host stuff that I feel the need to. But TBH, you don't really need to self-host much outside of media collections. PhotoPrism and Jellyfin are about the only two I need, aside from a Pi-hole. Most folks would be fine with a beefy NAS.
This is why I have about five of these bad boys: CyberPower CP1500PFCLCD.
One is in my utility room for my cable modem and our chest freezer, three back up my homelab and wifi AP, and one is for my office.
They've been bulletproof through storms, and when we've lost power but not internet, I can keep on working.
The big thing to look for is number of battery+surge outlets vs just surge outlets. Typically they top out at 1500VA - the more overhead for what you're powering, the longer you can go without mains power.
A screen/display is helpful for at-a-glance information like expected runtime, current output, etc.
Never heard of someone using a UPS on a fridge/freezer.
Does it make a difference? Seems like the UPS would just die after 10-20 minutes and not really make much difference to your freezer.
I didn't intend to use it on the chest freezer - it was mostly for the modem, but since I had spare battery capacity and outlets I thought what the heck.
The power load is practically nothing until it cycles, and even then it's fairly efficient - my current runtime is estimated to be about 18 hours, more than enough to come up with an alternative if we lose power in a storm.
I use a laptop and an external JBOD covered by a low-power UPS. As others said, the point is to bridge power gaps, not to run for long periods without power. I live in the countryside, so short power gaps happen, especially when my photovoltaics aren't producing (no, I have no battery storage, it's too expensive).
Figure out how much power your servers use on average with the help of a wattage meter, then enter that number and how many minutes of battery backup you want into Eaton's UPS power calculator to find a suitable unit. I'm sure other vendors have similar tools too.
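If you just want a back-of-envelope sanity check before using the calculator, the math is roughly battery watt-hours times inverter efficiency divided by load. Every number below is a made-up placeholder:

```bash
# Back-of-envelope UPS runtime estimate; every number is a made-up example.
load_watts=150        # average draw measured at the wall
battery_wh=500        # usable battery energy of the UPS, in watt-hours
inverter_eff=0.85     # rough inverter efficiency while on battery

# runtime (minutes) ~= battery Wh * efficiency / load W * 60
awk -v wh="$battery_wh" -v eff="$inverter_eff" -v w="$load_watts" \
    'BEGIN { printf "~%.0f minutes of runtime\n", wh * eff / w * 60 }'
```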
Look at efficiency. It's not always negligible; it was something like 40% of my energy usage because I oversized the UPS. The efficiency is calculated against the maximum power the UPS can supply: a 96% efficient 3 kW UPS eats 4% of 3 kW, i.e. 120 watts, even if the load you connect is much smaller than 3 kW.
Look at noise level. Mine was almost as loud as a rack server because of all the fans.
I replaced that noisy, power hungry beast with a small quiet 900W APC and I couldn't be happier
When you say some services on your network, are you talking about machines or software?
For machines, yes, a UPS makes sense. For software, writing some scripts to run on startup should be enough. Another alternative is setting up Wake-on-LAN; that way you can bring everything back up wherever you are.
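For the Wake-on-LAN part, the rough idea looks like this; the interface name and MAC address are placeholders:

```bash
# On the machine you want to wake: enable Wake-on-LAN ("g" = magic packet).
# The interface name is a placeholder, and this setting may need re-applying
# after reboots depending on the driver/distro.
sudo ethtool -s eth0 wol g

# From another machine on the same LAN (or via VPN into it), send the magic
# packet to that machine's MAC address (placeholder below):
wakeonlan AA:BB:CC:DD:EE:FF
```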
Could also be a good opportunity to add a service monitor like Uptime Kuma. That way you know what services are still down once things come back online with less manual discovery on your part.
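If anyone wants to try it, the quick start is a single container; this is roughly the command from the Uptime Kuma docs (double-check the current image tag there):

```bash
# Roughly the Uptime Kuma quick start; check the project docs for the
# current image tag before running.
docker run -d \
  --name uptime-kuma \
  --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
# Then open http://<host>:3001 and add a monitor per service.
```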
That's no longer a UPS.
You could get something like a powerwall, something designed to power things from batteries for a long time.
Or get a generator with automatic failover. The UPS then covers the downtime between the power failure and the generator taking the load.
It requires an LTE-capable gateway and a data plan. As for the rest, you can simply set up your routing tables so that if the main gateway doesn't work, traffic uses the secondary gateway with lower priority.
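On Linux, the "lower priority" part is just route metrics. A rough sketch, with placeholder addresses and interface names:

```bash
# Sketch of a primary/backup default route on Linux; addresses and interface
# names are placeholders. The route with the lower metric wins while usable.
sudo ip route add default via 192.168.1.1 dev eth0  metric 100   # main gateway
sudo ip route add default via 192.168.2.1 dev wwan0 metric 200   # LTE gateway
ip route show default

# Note: static metrics alone won't notice a gateway that is up but has no
# internet behind it; people usually add a health-check script or let the
# router handle the failover.
```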
I feel your pain. Just the other day the disk on my home assistant machine died after a power outage and I had to replace it with another disk and restore from backup.
This thought came to me this morning. I have 4 machines both because the BEAST grows organically and because we're always trying to avoid that single point of failure. Then a scenario comes along that makes you question your whole way of thinking: diversifying may actually create more problems.