How do you monitor your server containers, disks, load...?
Do you use an easy-to-use web interface? Do you do everything via SSH? Or maybe you've got a more complicated setup?
I want to change my setup and I'm looking for new ideas. I've been using Cockpit for some years, and some of the plugins are really outdated (ZFS, for example) while others are completely broken (docker-compose).
Now I'm intrigued to develop one called YOLO.
But just in case: no, I don't monitor my server. If I notice something not working, I ssh into the machine and check what's up. I don't want to deal with another zoo of services for the monitoring part.
This is the first time I’ve heard of Victoria Metrics. It looks like it has a similar use case as Prometheus, is that correct? If so, what made you or your team choose one over the other?
I'm a huge fan of Netdata, very configurable and monitors just about anything you could want. Great interface and alerts too - https://www.netdata.cloud/
As others stated, you can run and access the interface locally (or set up your own reverse proxy) for free. Their Cloud dashboard is also free for up to 5 nodes. They recently added a flat-rate "Homelab" plan as well, if you want to remove the limit. It's all quite usable for $0 otherwise though!
I love how easy to use Netdata is, but when running it on my home servers it destroys their performance lol. Every once in a while I check in to see if it runs better.
That's strange, I've run it fine on some very underpowered hardware. Are you adding a specific monitoring integration with it, or just using the out-of-the-box settings?
Node exporter on hosts, OpenTelemetry collector to scrape metrics and collect logs, shipping them to Prometheus and Loki, visualising with Grafana.
Day job is at an observability platform where we heavily encourage the use of (and also contribute to) the OpenTelemetry Collector project, hence my use of it.
Try VictoriaMetrics. Basically the same feature set as Prometheus, but much more resource-friendly at homelab scale. I've been storing some metrics for 12 months now, simply because it's easy.
Similar setup here, with additional exporters like cAdvisor for container metrics, among other components.
OpenTelemetry is awesome, but it's still a very fast-moving project. Expect more frequent updates and changes compared to older, more established projects.
I can see simple CPU/RAM/storage stats and have widgets for almost all my services. One of them is Portainer, so I can see if any service is stopped (most of them run in Docker). A few services also send notifications on errors or updates.
I know it's not really a monitoring tool, but it works well enough for me.
It's free and you host it yourself. It's built off of Nagios, compatible with Nagios plugins, and supports SNMP or agent-based checks. It can email, SMS, Slack or Discord you when something breaks, and you can write your own custom checks in any language that can output to a local console... I could never imagine even looking for something else.
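For anyone curious about the "custom checks in any language" part: a Nagios-compatible check is just a program that prints a one-line status and exits 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN. A minimal sketch in Go, assuming Linux, with thresholds invented purely for illustration:

```go
// check_load.go - a minimal Nagios-style plugin: print one status line and
// exit 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). Thresholds are made up.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/loadavg") // Linux only
	if err != nil {
		fmt.Printf("LOAD UNKNOWN - cannot read /proc/loadavg: %v\n", err)
		os.Exit(3)
	}
	// First field is the 1-minute load average, e.g. "0.42 0.35 0.30 ..."
	fields := strings.Fields(string(data))
	if len(fields) == 0 {
		fmt.Println("LOAD UNKNOWN - empty /proc/loadavg")
		os.Exit(3)
	}
	load1, err := strconv.ParseFloat(fields[0], 64)
	if err != nil {
		fmt.Printf("LOAD UNKNOWN - parse error: %v\n", err)
		os.Exit(3)
	}
	switch {
	case load1 >= 8:
		fmt.Printf("LOAD CRITICAL - load1 is %.2f\n", load1)
		os.Exit(2)
	case load1 >= 4:
		fmt.Printf("LOAD WARNING - load1 is %.2f\n", load1)
		os.Exit(1)
	default:
		fmt.Printf("LOAD OK - load1 is %.2f\n", load1)
		os.Exit(0)
	}
}
```

Compile it and register the binary however your Nagios-compatible server picks up plugins, then point a service check at it.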
I started using Checkmk recently after it was mentioned here and I really like it. I'd used Zabbix a bit but was annoyed at how much work it took to get it to do what I wanted. Checkmk was a lot better right out of the box.
For light-touch monitoring this is my approach too. I have one instance in my network, and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes a server's disk and memory usage along with mem_okay and disk_okay keywords, and I have Kuma checking those (see the sketch after this comment).
I even have the two Kuma instances checking each other by making a status page and adding checks for each other's 'degraded' state. I have ntfy set up on both so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.
For my VPSs this is probably not enough, so I am considering the more complicated solutions (I've started wanting to know things like an influx of fail2ban bans, etc.).
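A minimal sketch of that kind of keyword endpoint, for Uptime Kuma's HTTP keyword monitor. This is not the commenter's actual code: the /status path, port, 90% thresholds and the *_low fallback keywords are my assumptions, only mem_okay and disk_okay come from the comment, and it assumes Linux.

```go
// healthz.go - a tiny status endpoint for Uptime Kuma keyword monitors.
// It prints "mem_okay"/"disk_okay" (or "mem_low"/"disk_low") so Kuma can
// alert when a keyword disappears. Port, path and thresholds are made up.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// memUsedPct parses /proc/meminfo (Linux only) and returns used memory in percent.
func memUsedPct() (float64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	vals := map[string]float64{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text()) // e.g. "MemTotal: 16314512 kB"
		if len(fields) < 2 {
			continue
		}
		key := strings.TrimSuffix(fields[0], ":")
		if v, err := strconv.ParseFloat(fields[1], 64); err == nil {
			vals[key] = v
		}
	}
	total, avail := vals["MemTotal"], vals["MemAvailable"]
	if total == 0 {
		return 0, fmt.Errorf("MemTotal not found")
	}
	return 100 * (total - avail) / total, nil
}

// diskUsedPct returns the used percentage of the filesystem at path.
func diskUsedPct(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	return 100 - float64(st.Bavail)*100/float64(st.Blocks), nil
}

func handler(w http.ResponseWriter, _ *http.Request) {
	mem, memErr := memUsedPct()
	disk, diskErr := diskUsedPct("/")

	// Kuma's keyword monitor just greps the response body, so emit stable keywords.
	if memErr == nil && mem < 90 {
		fmt.Fprintf(w, "mem_okay used=%.1f%%\n", mem)
	} else {
		fmt.Fprintf(w, "mem_low used=%.1f%% err=%v\n", mem, memErr)
	}
	if diskErr == nil && disk < 90 {
		fmt.Fprintf(w, "disk_okay used=%.1f%%\n", disk)
	} else {
		fmt.Fprintf(w, "disk_low used=%.1f%% err=%v\n", disk, diskErr)
	}
}

func main() {
	http.HandleFunc("/status", handler)
	log.Fatal(http.ListenAndServe(":9110", nil))
}
```

In Kuma you would then add a keyword monitor pointing at the endpoint and alert when mem_okay or disk_okay goes missing.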
Grafana set up to run locally on the server; I connect to it via SSH port forwarding and can view all kinds of metrics in my browser in a neat interface.
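For reference, that access pattern is just local port forwarding; assuming Grafana is listening on its default port 3000, and with user@server as a placeholder:

```
ssh -L 3000:localhost:3000 user@server
# then open http://localhost:3000 in the local browser
```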
Adding my vote for Zabbix. It was a bit of a bear to set up and I had to write custom scripts to install the agents with TLS settings that were secure enough for me, but once it's all set up it's amazingly easy and intuitive to use and incredibly customizable.
Grafana, fronting data from Prometheus, Loki and Telegraf/InfluxDB, since I'm used to that from work and it has been a bit more set-and-forget compared to node_exporter. It's also easier to add plugins instead of spinning up a new container/service to scrape.
I use collectd and its graphs on my OpenWrt router. It can even use data from MQTT-connected thermometers and gather metrics from other collectd instances.
docker-compose doesn't scale well and if you run it natively it is a little less secure.
Virtualization adds 1-2% of overhead at most and gives you way more control of how the hardware is used.
If your setup is small, docker-compose might be easier to manage, but as soon as you get more hardware it becomes the limiting factor. I still use docker-compose, but now I run it in a VM.
I used to not monitor at all, but after setting up CrowdSec I got completely into Prometheus + Grafana system resource and security monitoring on my dedicated Hetzner server.
I also keep Uptime Kuma on another cheap VPS to monitor the state of my websites directly for issues, and I have set up Watchtower to send ntfy notifications on updates so I know when an update is the one fucking everything up.
Recently I also set up restic backups, and I get backup health check logs as ntfy notifications on my phone, which really helps me keep everything running.
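Since ntfy keeps coming up: publishing to it is just an HTTP request with the message in the body, so it's easy to call from a backup wrapper. A rough Go sketch with a made-up server URL, topic, tag and message, not the commenter's actual setup:

```go
// notify.go - push a message to an ntfy topic, e.g. after a backup run.
// The server URL and topic name below are placeholders for your own ntfy setup.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func notify(topicURL, title, message string) error {
	req, err := http.NewRequest(http.MethodPost, topicURL, strings.NewReader(message))
	if err != nil {
		return err
	}
	// ntfy reads metadata like the title and tags from plain HTTP headers.
	req.Header.Set("Title", title)
	req.Header.Set("Tags", "floppy_disk")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("ntfy returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Hypothetical topic; in practice this would be called with the real
	// exit status and summary output of the backup job.
	if err := notify("https://ntfy.example.com/backups", "restic backup", "backup finished, 0 errors"); err != nil {
		log.Fatal(err)
	}
}
```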
What I also really need to do is create Prometheus/Grafana alerts for additional things so I get notifications on my phone for them too (like when CrowdSec randomly stops getting any acquisitions and I have to restart it; once, over a week passed before I looked at Grafana and caught that).
Note: this is all a hobby. I also don't host anything at home, for a couple of reasons, the most important being that internet and hardware are expensive af here, so it's simply cheaper for me to play around with VPSes and Hetzner dedicated servers.
For the physical hosts / bare metal I use fluentbit, with Loki as the backend. Grafana for visualization and alerts. This gives me utilization metrics and uptime monitoring.
The app containers themselves I do not monitor.
Prometheus + node/container exporters + Grafana for dashboards.
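If the stock exporters don't cover something, a tiny custom exporter built on the official client_golang library is enough for Prometheus to scrape. A minimal sketch; the metric name, value and port are placeholders, not anything from the comment above:

```go
// exporter.go - a bare-bones custom Prometheus exporter using client_golang.
// Metric name, value and port are placeholders.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var backupAge = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "homelab_last_backup_age_seconds",
	Help: "Seconds since the last successful backup (example metric).",
})

func main() {
	prometheus.MustRegister(backupAge)
	backupAge.Set(3600) // in a real exporter this would be refreshed periodically

	// Expose the default registry on /metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9200", nil))
}
```

Then add the endpoint as another scrape target in prometheus.yml.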
I haven't touched Zabbix in years, but last time it didn't support dynamic escalations very well.
Also, all of them are focused on monitoring infrastructure; you need to pay if you want APM or UX monitoring.
At the enterprise level, for APM I like Datadog, much better than NR. For UX we use Acoustic Tealeaf.