I had one Linux server that was up for over 500 days. It would have been up longer but I was organizing some cables and accidentally unplugged it.
Where I worked as a developer, we had Sun Solaris servers as desktops to do our dev work on. I would just leave mine on, even during weekends and vacations; it also hosted our backup webserver, so we just let it run 100% of the time. One day the sysadmin said, "You may want to reboot your computer, it's been over 600 days." 😆 I guess he didn't have to reboot after patching all that time, and I didn't have any issues with it.
Mine are running all of the time, including during power outages, and are only shut down for physical maintenance and reboot for software maintenance.
This is a little variable though. Windows hosts tend to require more frequent software reboots in my experience. About once a year, I physically open each device to inspect it, clean dust (fairly rare to find any in my setup though), perform upgrades, replace old storage devices, and such. Otherwise I leave them alone.
I usually get about 5-7 years out of the servers and 10 out of networking hardware, but sometimes a total failure occurs unexpectedly still and I just deal with it as needed.
Never! I have 2 mini PCs in separate locations running 24/7. One for downloading content and running a DNS server/dynamic DNS. The other for a point-to-point VPN to access multiple NVRs that are blocked from the WAN itself. Luckily they both sip power!
I mean, so far the longest uptime I’ve seen at my current job is 9 years. Yes, that host should be patched. But given its role, and network access, it’s fine. Running strong. It is in a DC. Server grade hardware is designed with 24/7 operation in mind. Consumer hardware might be fine, but wasn’t designed with 24/7 critical operation in mind.
At home, I have some NUCs on 24/7, plus an R740 and an NX3230 on 24/7. The rest is for a true lab env and I only power it on as needed.
Even though live kernel patching is a thing, I generally do a full reboot every month or two for the next big patch.
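For anyone deciding when that full reboot is due, here's a minimal sketch of a check for Debian/Ubuntu-style hosts. It assumes the `update-notifier` flag file at `/var/run/reboot-required` and falls back to comparing the running kernel against the newest one installed under `/lib/modules`; other distros signal this differently.

```shell
# Sketch: does this Debian/Ubuntu host still need a full reboot?
# (assumes update-notifier's flag file; the /lib/modules check is a fallback)
status="no reboot needed"
if [ -f /var/run/reboot-required ]; then
    status="reboot required"
elif [ -d /lib/modules ] && \
     [ "$(uname -r)" != "$(ls /lib/modules | sort -V | tail -n 1)" ]; then
    status="newer kernel installed than the one running"
fi
echo "$status"
```

Dropping something like this into a daily cron job or monitoring check takes the guesswork out of the "every month or two" cadence.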
Full shutdowns? Are we upgrading them, dusting them, or doing any other maintenance on them? That would be the only case besides UPS failure or a power outage.
I have two hosts: a Raspberry Pi that serves as a Pi-hole and as a log of infrequent power outages, which runs 24/7, often with 100+ days of uptime (seeing the "(!)" sign in htop is so satisfying), and an SFF that shuts itself off nightly, provided nothing is happening on it (power is expensive).
Everything in my lab is up 24/7 unless my UPS shuts it down in a power outage, I'm doing work inside the chassis, or I'm updating something. If you can handle the power bill, there's no real harm keeping it online all day.
An old 486 Slackware 4.0 server I had on a big UPS made it through several dorm/apartment moves without a shutdown. Something like 7 years of uptime when I finally retired it.
My optiplex 9010 sff is what I use for experimenting with services and as a staging area for moving VMs to my main lab because it's air gapped. At max load it runs at 140w but it has a GTX 1650 that I use for gaming as well.
Otherwise, the rest of my lab is only turned on when I'm using it, or when I forget to turn it off before I leave the house. When I get a laptop again I'll leave it on more. None of it is more than $150 to replace though. It's a Hyve Zeus, a Cisco ISR 4331, and a Catalyst 3750-X, so nothing heavy, just a little loud.
Never really shut my mini PCs down; sometimes I restart a Proxmox node if I want it to use an updated kernel, but that's it. I don't run large servers at home.
It depends. I don't run anything public-facing, so security updates that need reboots are less of a concern to me.
My Windows servers are rebooted once a month for patches. My Linux servers maybe once every couple months for kernel patches or if I screwed something up.
My physical proxmox hosts? Twice in the last year. Once because I moved. The other time because I upgraded to proxmox 8.
I shut down my NAS after work because I tend not to use its services outside of work hours, and saving like 2/3 of a day in electricity is worth it. The machines that provide services like networking and security run on UPS 24/7, until there's a need to update or a UPS fails.
My chassis has 7 blades in it, and I typically only keep 4 powered on. I patch them regularly, which requires reboots, but with DRS I don't have to take any VMs down.
You don’t (and generally shouldn’t) reboot servers. People got this idea that PCs needed to be rebooted because Windows is trash and becomes more unstable the longer it runs. Server OSes don’t have this problem.
Summer - every day in the afternoon, for heat and power usage (time-of-use rates triple from 3-9pm). Scripted so the must-have apps run on one host per site.
Winter - once a month, on the weekend after Patch Tuesday. It’s a chance to check for nibbled cables, do cleaning, and handle other things needing doing.
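A summer schedule like that can be driven by a single root crontab entry; this is just a sketch, where the 15:00 cutoff and June-September window mirror the comment above, and waking back up is assumed to be handled separately (BIOS auto-power-on or Wake-on-LAN from the always-on host):

```shell
# root's crontab: shut down non-essential hosts before the 3pm peak rate,
# June through September only (fields: minute hour day-of-month month weekday)
55 14 * 6-9 * /sbin/shutdown -h +5 "Time-of-use peak starts at 15:00"
```

The `+5` gives logged-in users a five-minute warning before the halt.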
Fairly frequently, but on no real schedule. Every 3 weeks to 3 months, whenever I get some time to update without it being a problem. Primarily for patching and new kernels, but it has caught the odd disk issue where btrfs was struggling and I hadn't noticed while it was running.
My Proxmox VM host ran for well over a year, and I only had to shut it down to add more RAM when I finally bought some. A couple of VMs on it ran for just as long. All Linux stuff. Windows guests have to reboot at minimum every 90 days or things start getting weird, and mine is just a DC.
Do you virtualize/pool host to separate function from hardware? If yes, then go nuts shutting off hardware as needed for service.
Otherwise, the correct answers are “annually as part of a practical DR review”, “only when the electric company cuts you off for non-payment”, and “as often as needed to keep a spouse off your back”.
I shut my 4U desktop off at night to save energy unless it's running some overnight compute job. NAS goes into sleep mode but stays on. Switch, router, home assistant NUC stay on.
I don't reboot servers in my homelab unless an update requires me to do so. I do have a clustered Proxmox setup, so there's no downtime if the admin (aka me) doesn't screw up ;-)
The only valid reason (imho) to reboot, unless an update requires it, would be apps with memory leaks where a service restart doesn't fix the problem. Not often that I face this problem these days, but earlier versions of Windows had the occasional habit...
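For the leaky-app case on a systemd host, you can cap a unit's lifetime instead of rebooting the whole box. A sketch, where `leakyapp` is a placeholder unit name and `RuntimeMaxSec=` assumes systemd ≥ 229:

```shell
# Cap a hypothetical 'leakyapp' service's lifetime so systemd cycles it
# daily, rather than anyone rebooting the host for one misbehaving app.
sudo mkdir -p /etc/systemd/system/leakyapp.service.d
sudo tee /etc/systemd/system/leakyapp.service.d/limit.conf <<'EOF'
[Service]
RuntimeMaxSec=1d
Restart=always
EOF
sudo systemctl daemon-reload
```

Once the runtime limit is hit, systemd terminates the unit and `Restart=always` brings it straight back with a fresh heap.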
I suppose it depends what kind of hardware you're using. I have enterprise class servers that are meant to run 24/7 and they do. They'll be useless technologically before they wear out.
I keep the stuff running 24/7, shutting it down barely once a year for cleanup, upgrades, or whatever. Don't mind me, I still have to get a UPS for when the electricity goes down, which hasn't happened in the past few years.
Network services 24/7 (4 rpi3)
The NAS shuts down every day at 23:30 and boots at 9:00, except weekends, when it boots at 10:00. The ACPI schedule management is embedded in the firmware (QNAP).
Servers shut down at 23:15 and boot at 9:15 (weekends 10:15). For these, rtcwake does the job.
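The rtcwake half of that schedule can be sketched like this (assumes GNU `date` and util-linux `rtcwake`, a working RTC, and root for the actual power-off; the 09:15 wake time matches the comment above):

```shell
# Compute the next 09:15 wake-up as an epoch timestamp for the RTC alarm.
wake=$(date -d 'tomorrow 09:15' +%s)
echo "RTC alarm would be set for $(date -d "@$wake")"
# rtcwake -m off -t "$wake"   # commented out: this powers the machine down now
```

With `-m off` the box halts like a normal poweroff, and the RTC alarm brings it back up on its own; a 23:15 cron entry running this makes the whole cycle hands-free.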
I have built a UPS from a 200Ah 12V battery with an RV inverter/charger. It never fails with power, so it runs for months at a time, until I decide to put something in… let's see.
It kind of also depends on the OS.
And before the comments start: as with anything, situation and a bit of luck go a long way.
But Linux-based machines can be left running for months, sometimes even years. Windows I honestly wouldn't trust beyond a few months, and even that seems like too much for my taste.
I reboot my systems monthly most of the time, usually paired with updates. But my main host is Windows Server, which gets daily reboots (power savings; I don't need it on when I sleep), and the VMs on it are frozen and unfrozen, so they stay up for a month or more until I do the above.
I run ESXi on most of my systems. So that means, when there is an update of ESXi, I install the updates and reboot them.
Sometimes I need to change hardware or upgrade stuff. Then too.
I took my docker host offline yesterday because of a RAM upgrade (16GB → 24GB; yeah, I'm aware I lost dual-channel). I regularly check for updates on non-ESXi machines.
Some people love 100% uptime of their servers. I hate it. When somebody has high uptime, it means they are lazy and don't keep up with updates, which are critical most of the time.
In the year mine has been running, it's been offline twice: once when upgrading the memory, the other when I upgraded the processors. The only other maintenance was a software update, which didn't require a reboot.
I have been turning mine off more frequently now that my electric rates have jumped 30% in the last two months. I'm currently looking to dump the 11 year old server hardware for both my nas and hypervisor server and consolidate everything into a modern lower power single system. Most likely using Truenas Scale.
I just recently did a hardware upgrade on a homelab machine that had been running Proxmox for 2 years with no downtime.
I never had a reason to shut it off since it was running important VMs. The fans all had filters, so cleaning was done while the machine was running. And because of the dual PSUs, UPS battery changes were easy to do without needing downtime, too.
I boot my big server whenever I need it; everything else is 24/7. I have had no catastrophic failures in either for the last 2-3 years, so it seems to be fine?
Once a month, to install Patch Tuesday updates, because my only host is still running Microsoft Hyper-V Server 2019. Planning to switch to Proxmox, but that's gonna take a while, so I haven't gotten around to it.