Do you have suggestions for kernel tweaks for getting the most out of a RAM limited system?
I am running a service that requires 2 GB of RAM (netbird) on a VPS which has just 1 GB of memory. I am doing so because I am a stingy bastard and I use only free VPSs for my personal use, so I get what I am paying for.
Because of this hardware limit, about 12 hours after service start I begin swapping a bit too much. This would still be manageable, but soon the hypervisor gets really pissed and steals up to 90% of the CPU. So the only solution so far has been restarting the docker containers every 12 hours (not great, not terrible).
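For what it's worth, the 12-hour restart workaround can at least be automated so it doesn't need babysitting. A sketch of a cron entry, assuming a compose-based deployment; the path `/opt/netbird` and the compose file name are placeholders, not netbird's actual layout:

```shell
# Hypothetical crontab entry (edit with `crontab -e`):
# restart the whole netbird compose stack every 12 hours.
0 */12 * * * docker compose -f /opt/netbird/docker-compose.yml restart
```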
Looking to improve this, I am now experimenting with ZRAM and swappiness, and it seems some benefit can be achieved by using some of the Linux kernel features. Is there anything else I should look into?
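For anyone following along, here is roughly what the zram + swappiness experiment looks like. This is a sketch to be run as root; the 512M size, the zstd algorithm, and the sysctl values are illustrative choices, not recommendations:

```shell
# Create a compressed swap device in RAM (zram).
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # zstd: good ratio at modest CPU cost
echo 512M > /sys/block/zram0/disksize         # uncompressed capacity of the device
mkswap /dev/zram0
swapon -p 100 /dev/zram0                      # high priority: used before any disk swap

# Since swapping to zram is cheap, tell the kernel to swap anonymous pages eagerly.
sysctl vm.swappiness=180                      # values above 100 are allowed on recent kernels
sysctl vm.page-cluster=0                      # read pages from zram one at a time, no readahead
```

The same setup can be made persistent with a systemd unit or the `zram-generator` package rather than raw sysfs writes.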
Never run anything on a free tier if you want to keep it. My legit Oracle account got flagged and deleted for no reason so they're dead to me, and I'll tell anyone who will listen. They're thieving assholes and will snatch away free tier the moment it's convenient for them.
Do you really need netbird? What for?
Spend some money, consider it the cost of education
Yes, this is a possibility. the ARM VPS is already running something else, but if I manage to run netbird behind a reverse proxy I can also move it there. BTW there are also 1 GB free VPS on azure (for students) and Google Cloud, but you guessed right.
Turn on swap! That way more of your actual RAM can be used by the application. Yes, it will probably be slow as crud, but it lets the system use more memory than it physically has. If swap is already on... sorry.
I forgot to mention, I had plenty of swap available; now I disabled swap to force zram usage. I still need to see what happens when running with both, and it's hard when each trial takes 12-24 hours to show its result.
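Running both at once doesn't have to be either/or: swap devices have priorities, and the kernel fills the highest-priority one first. A sketch, with paths and priority numbers as assumptions:

```shell
# Keep both, but make the kernel prefer zram and only spill to disk when it fills up.
swapon -p 100 /dev/zram0        # fast compressed RAM, used first
swapon -p 10  /swapfile         # slow disk fallback, used once zram is full
swapon --show=NAME,SIZE,PRIO    # verify the ordering
```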
Oh yikes yeah I would not be good at that pace 😅 I know you can rebuild the kernel with less features to reduce ram usage, but other than that I would be looking at what you're already doing.
Have you considered removing docker from the equation and running directly on the host os?
This service consists of several docker containers; without docker I would not even know where to start deploying it. Maintenance would be a mess as well, totally not an option.
I think the advice should be taken to heart here: you're dealing with a userspace problem but you're trying to get the kernel to make it all better.
You've already mentioned the two big things, compressed RAM and swap; optimizing userspace (or paying for more RAM) may be the only option at some point.
If you want to get creative, is there a reason you can't use a local computer for some of these services? An old Raspberry Pi or similar could potentially run some of them. You could run some containers on your home server and call it a day. A quick search turned up this: https://www.linuxserver.io/blog/routing-docker-host-and-container-traffic-through-wireguard
It seems to me you are already looking into all the possible options besides getting a VPS with more RAM. I am a bit confused about why you are seeing increased CPU usage by the hypervisor for this, though.
The server is clearly overloaded: as soon as I start using some 10% of CPU for a few minutes at a time (due to swap operations), the hypervisor starts to throttle my instance, and this of course makes things worse with an avalanche effect. When this happens, the steal time displayed by top can literally go as high as 90%.
On AWS they have something called "bursting". Basically they will let you use 100% of your vCPU, but not all the time. If you use it constantly they start to throttle you. That's explicitly stated when you rent an EC2 instance (which is their VPS). Perhaps your provider is doing something similar.
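You can watch the steal the hypervisor is inflicting directly from `/proc/stat` (steal is the 9th field of the aggregate `cpu` line), without needing top running. A small self-contained sketch:

```shell
# Print the steal-time percentage over a one-second interval.
# Field 9 of the "cpu" line is steal; fields 2-9 summed give total jiffies.
read_cpu() { awk '/^cpu /{print $9, $2+$3+$4+$5+$6+$7+$8+$9}' /proc/stat; }

s1=$(read_cpu)
sleep 1
s2=$(read_cpu)

# s1/s2 are "steal total" pairs; diff them to get the fraction for the interval.
echo "$s1 $s2" | awk '{ dt = $4 - $2; ds = $3 - $1;
  printf "steal: %.1f%%\n", (dt > 0 ? 100 * ds / dt : 0) }'
```

Logging that every minute would show exactly when the provider's throttling kicks in relative to your swap activity.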
In my experience kernel tweaks aren't going to make a major difference in memory usage.
Most distros are meant to be full featured and not necessarily lightweight. So unless you are already running a minimalist distro, make sure you don’t have bunch of background services running you don’t need.
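On a systemd host, auditing what's actually running and what it costs is quick. A sketch; the service name in the last line is a placeholder, not a real unit:

```shell
# One snapshot of per-cgroup memory usage, heaviest first.
systemd-cgtop --order=memory --iterations=1

# Everything currently started; anything you don't recognize is worth a look.
systemctl list-units --type=service --state=running

# Then disable unneeded services one by one, e.g. (placeholder name):
# sudo systemctl disable --now some-unneeded.service
```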
I can recommend using the Debian minimal ISOs; they can run in as little as 256 MB of memory. Depending on what features you enable you could use a lot more.
Thanks for sharing your experience, indeed the distribution is relevant here. I am running Arch (BTW) on this VPS, which idles at about 300 MB with dockerd and containerd. I am not sure how exactly that compares to Debian on RAM usage (I have a couple of other VPSs running Debian which seem to use a little bit more RAM, but it could be because those images are bastardized by the addition of cloud provider services). In any case my setup is pretty minimal; to get some large benefit there I fear I should use something without systemd :/
Yeah, the 300 MB isn't going to get much lower, and switching to Debian won't change much there. Perhaps you can look into running a minimalist container distro if you are just using the machine for that. I personally want to check out Talos; there's also RKE and Burmilla. No experience with them, though; to me the memory doesn't matter much because I run a homelab, so I currently just run Debian and k3s.

On my systems the containers are actually what gobbles up all the memory. If you're using public container images, there's a good chance their memory configuration isn't optimal. Especially JVM services are a lot of the time configured to just use whatever is available; if you give them less memory they will do more garbage collection. So if CPU is less of an issue than memory, that could be worth looking into (it's just parameters you can pass on startup).
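To make that concrete, here's roughly what capping a JVM container looks like. The image name and limits are illustrative (I don't know netbird's actual components); `JAVA_TOOL_OPTIONS` is picked up automatically by the JVM, and `MaxRAMPercentage` makes it size its heap from the container's cgroup limit:

```shell
# Hard-cap the container's memory and let the JVM derive its heap from that cap.
docker run -d \
  -m 256m --memory-swap 384m \
  -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=75 -XX:+UseSerialGC" \
  some-jvm-service:latest   # placeholder image name
```

Trading CPU (more frequent GC, the simpler serial collector) for a smaller footprint, which is the wrong trade on most machines but possibly the right one on a 1 GB VPS.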
Hopefully any of this is of use. Good luck :)
If you're idling at 300 MB with containerd running, you're not getting better than that with a modern general-purpose distro. As others have said, switching to another VPS provider's free tier that offers more is your single best bet by a mile. About the only options you have on this one are a compressed ramdisk used as swap (zram) plus a literal disk swapfile to cover the rest. It'll be very slow, though, if you have to keep half your workload on a shared-platter swapfile.