How to prevent rogue docker containers from wreaking havoc?
There was a recent post about whether to enable ufw, and it made me ask: how protected am I from a rogue docker container? I have a single server with 15-20 docker containers running at any given time. Should one get hacked or be malicious from the get-go, are there (hopefully easy to implement for an armchair sysadmin) best practices to mitigate such an event? Thanks!
Some good advice here. I would say avoid using network_mode: host unless you really have to, and make use of the no-new-privileges option. This is easy to do and IMO the bare minimum for preventing rogue actions from containers.
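A rough sketch of what that looks like with plain docker run (the image and network names here are just placeholders; the same settings map to the security_opt and networks keys in a compose file):

```bash
# Create a dedicated bridge network instead of sharing the host's network stack
docker network create app_net

# no-new-privileges stops processes in the container from gaining extra
# privileges via setuid/setgid binaries
docker run -d \
  --name myapp \
  --network app_net \
  --security-opt no-new-privileges:true \
  myapp:latest
```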
But aside from specific containers and guidance, general network and system hardening guidelines apply. You can look up plenty of server hardening guides via Google. General principles such as least privilege, segmentation via VLANs and firewall rules, and proper ownership/privileges for accounts and services will go a long way. Keep defense in depth in mind: one control is none, two is one, and you can always find more ways to make something secure, up to and including removal. The most secure thing is a thing that doesn't exist.
There are also automated tools (such as Lynis) that can scan and 'audit' your system or your containers, point you to specific settings you can adjust, and help lock you down in a more systematic way. They can be automated, report on a schedule, or be run one time. One of those is your best bet for targeted and effective controls.
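For example, a one-off Lynis audit on a Debian/Ubuntu host looks roughly like this (package name and the dockerfile audit mode may vary by distro and Lynis version):

```bash
# Install Lynis and run a full system audit; results land in /var/log/lynis.log
sudo apt install lynis
sudo lynis audit system

# Recent versions can also audit a specific Dockerfile
sudo lynis audit dockerfile /path/to/Dockerfile
```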
Never heard of podman, but from what I read on Google it's a drop-in replacement for docker. I even read you can alias docker to podman. So does that mean we can just use docker images and docker compose files with podman? Are there drawbacks to using podman instead of docker?
The thing about containers is that they generally have no need for full file system access, and no need for full network access (host, LAN, WAN). So the smaller the privileges, the better: even if a container is compromised, there's very little an attacker can do with it.
This is also a general principle for network management. For instance when does the TV need to print or access any server other than Jellyfin?
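As a sketch of what "small privileges" can look like in practice (the image, paths, and flags here are illustrative, for a container that only needs to process local files):

```bash
# Read-only root filesystem, no Linux capabilities, and no network at all
docker run -d \
  --name worker \
  --read-only \
  --cap-drop ALL \
  --network none \
  --tmpfs /tmp \
  -v /srv/worker/data:/data:ro \
  worker:latest
```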
This is not true, sorry. Even in k8s, any container has access to any other container in the same pod, or, in Docker's case, to any other container on the same network on that host. In k8s you can at least add network policies. If it's a host-networked or MACVLAN container it gets worse, if no proper isolation is configured at the network level.
You can run your containers through a vulnerability scanner like Trivy and then patch with Copacetic. It will only fix the container image's OS vulnerabilities though, not the app code dependencies.
Otherwise, one step simpler: just vulnerability-scan the images, look at the issues, and then decide whether you want to deploy them.
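A hedged sketch of that workflow (the image name is a placeholder, and exact Trivy/Copacetic flags may differ between versions):

```bash
# Scan an image and write only fixable OS-level findings to a JSON report
trivy image --vuln-type os --ignore-unfixed -f json -o report.json myapp:1.0

# Patch the OS packages flagged in the report into a new image tag with Copacetic
copa patch -i myapp:1.0 -r report.json -t 1.0-patched

# Or just scan, read the findings, and decide whether to deploy at all
trivy image myapp:1.0
```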
"Only" having an authenticator doesn't stop malicious containers from reaching outside. Least privileges and network segmentation is the minimum necessary.
Safe-r. Not inherently safe. It's one good practice to consider among others. Like any measure that increases security, it makes your service less accessible - which may compromise usability or interoperability with other services.
You want to think through multiple security measures with any given service, decide what creates undue hassle, decide what's most important to you, and limit the attack surface by making unauthorized access somewhere between inconvenient and near-impossible. And limit the damage that can be done if someone does get unauthorized access, i.e. not running as root, giving the container limited access to folders, etc.
Only give the container access to the folders it needs for your application to operate as intended.
Only give the container access to the networks it needs for the application to run as intended.
Don't run containers as root unless absolutely necessary.
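Putting those three together, a minimal docker run sketch (the paths, UID, and names are placeholders for whatever your app actually needs):

```bash
# Dedicated network, only the folders the app needs, and an unprivileged user
docker network create app_net

docker run -d \
  --name myapp \
  --network app_net \
  --user 1000:1000 \
  -v /srv/myapp/config:/config \
  -v /srv/myapp/data:/data \
  myapp:latest
```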
Don't expose an application to the Internet unless necessary. If you're the only one accessing it remotely, or if you can manage any of the other devices that might (say, for family members), access your home network via a VPN. There are multiple ways to do this. I run a VPN server on my router. Tailscale is a good user-friendly option.
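For example, with Tailscale the server-side setup is roughly this (their documented one-line installer, then authenticate the node):

```bash
# Install Tailscale on the server and bring the node up (prints a login URL)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```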
If you do need to expose an application to the Internet, don't do so directly. Use a reverse proxy. One common setup: Put your containers on private networks (shared among multiple containers only in cases where they need to speak to each other), with ports forwarded from the containers to the host. Install a reverse proxy like Nginx Proxy Manager (NPM). Forward 80 and 443 from the router to NPM, but don't forward anything else from the router. Register a domain, with subdomains for each service you use. Point the domain and subdomains to your IP, or, using aliases, to a dynamic DNS domain that connects to a service on your network (in my case, I use my Asus router's DDNS service). Have NPM connect each subdomain to the appropriate port on the host (i.e., nc.example.com going to the port on the host being used for Nextcloud). Have NPM handle SSL certificate requests and renewals.
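A hedged sketch of running Nginx Proxy Manager itself as a container (the host paths are placeholders; 80/443 carry the proxied traffic, 81 is its admin UI and should only be reachable from your LAN):

```bash
docker run -d \
  --name nginx-proxy-manager \
  -p 80:80 \
  -p 443:443 \
  -p 81:81 \
  -v /srv/npm/data:/data \
  -v /srv/npm/letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest
# On the router, forward only 80 and 443 to this host; never forward 81.
```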
There are other options that don't involve any open ports, like Cloudflare tunnels. There are also other good reverse proxy options.
Consider using something like fail2ban or CrowdSec to mitigate brute force attacks and ban bad actors. Consider something like Authentik for an extra layer of authentication. If you use Cloudflare, consider its DDoS protection and other security enhancements.
Keep good and frequent backups.
Don't use the same password for multiple services, whether they're ones you run or elsewhere.
Throw salt over your shoulder, say three Hail Marys and cross your fingers.