Don’t bother with disabling ICMP. You’ll use it far more than it’s worth disabling, and something like nmap -Pn -p- X.X.X.0/24 will find all your servers anyway (the same goes for SSH on port 22, though moving that does stop some bots).
As long as I'm not exposing anything to the global internet, you really don’t need a lot. The firewall should already deny all inbound traffic.
The next step is monitoring. It's one thing to think your stuff is safe and locked down. It's another thing to know your stuff is safe. Something like Observium, Nagios, Zabbix, or similar is a great way to make sure everything stays up, as well as giving you insight into what everything is doing. Even Uptime Kuma is a good start. Then something like Wazuh to watch for security events, and OpenVAS or Nessus to look for holes. I'd even throw in CrowdSec for host-based intrusion detection. (Warning: this will quickly send you down the rabbit hole of being a SOC analyst for your own home.)
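Uptime Kuma in particular is about a five-minute setup with Docker. A minimal compose sketch — the image tag, port, and data path below are the project's published defaults at the time of writing, so double-check against the current docs:

```yaml
# docker-compose.yml — minimal Uptime Kuma sketch
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"        # web UI
    volumes:
      - uptime-kuma-data:/app/data   # persist monitors and history
    restart: unless-stopped
volumes:
  uptime-kuma-data:
```

Bring it up with `docker compose up -d` and add HTTP/ping/TCP monitors for each service from the UI on port 3001.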
Segment internally and restrict access. You don't need more than SSH to a Linux server, or perhaps its web interface for an application running on it.
I just set up Wazuh at work and pointed it at a non-domain, vanilla Windows 11 machine to test and it came back with over 300 events immediately. Not trying to scare anyone off as I think it's a great tool, more just a heads up that the rabbit hole runs very deep.
Honestly, between the home lab being behind a RTR, NATed, patched & updated, and given the lack of users clicking on random crap and plugging in thumb drives from God Only Knows Where ... I'd go out on a limb and say it's already more secure than most PCs.
There's also no data besides what I already put on Medium and GitHub, so it's not a very attractive target.
Take a look at CIS benchmarks and DoD STIGs. Many companies are starting to harden their infrastructure using these standards, depending on the requirements of the environment. Once you get the hang of it, then automate deployment. DO NOT blow in ALL of the rules at once. You WILL break shit. Every environment has security exceptions. If you’re running Active Directory, run Ping Castle and remediate any issues. Audit often, make sure everything is being monitored.
I watch networkchuck on occasion, but some of his ideas are... questionable I think. Not necessarily wrong, but not the "YOU MUST DO THIS" that his titles suggest (I get it, get clicks, no hate).
Of the ideas you mentioned, (2), (3), (4), and (5) are somewhere between "reasonable" and "definitely". The rest are either iffy (unattended updates) or security theater (disable ICMP, change ports).
Something to keep in mind for step (2), securing SSH login with a key: this is only as secure as your key. If your own machine, or any machine or service that stores your key, is compromised then your entire network is compromised. Granted, this is kind of obvious, but just making it clear.
As for security theater, specifically step (6). Don't disable ping. It adds nothing to security and makes it harder to troubleshoot. If I am an attacker in a position for ping to get to an internal resource in the first place, then I'm just going to listen for ARP broadcasts (on same subnet) or let an internal router do it for me ("request timed out" == host is there but not responding).
Or host a VPN on it and get in through that. Many of these microservices are insecure, and the real risk comes from opening them up to the Internet. This is important.
I have a camera outside, I’m a pretty big guy, and my rack was built inside my office so it can’t be moved quickly.
Oh, you mean digital security? Lol I have a lot of subnets and don’t forward in much traffic. The WiFi password I give out gets you on my kids network. Plus I run DPI and IDS. I use cloudflare DNS (sometimes operating an internal pihole too). And I don’t browse social media on PCs only on mobile. The only holes punched from WiFi to internal are for printing. And even the wired clients are segregated from my work network.
Internet > Firewall (IP whitelist, IPS/IDS, yada yada) > DMZ / VLAN > Proxmox w/ FW:$true (rule only for game ports) > GameServer > deny all traffic from GameServer to anywhere but the internet
Proxmox server has firewall, all the hosts on proxmox have firewall enabled (in proxmox). Only allow my main device to access. No VLAN crosstalk permitted.
I don't bother with anything else internally, if they're inside they deserve to SSH with my default root / password credentials
Think of config changes which need manual adjustment, or a broken update. This is something you would probably not like to happen at night without notice. Could easily break your vital systems (e.g. homeassistant, authentication, vaults...)
Use unattended updates ONLY for bug and security fixes, not for minor or major releases. Ensure you configure your auto-updaters properly!
Debian unattended-upgrades only upgrades packages from the main and security repos by default, so it should be fine since no major updates are performed within a particular Debian version.
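For reference, those defaults live in /etc/apt/apt.conf.d/50unattended-upgrades. A hedged excerpt of the origins list — the exact label strings differ between Debian releases, so check your own file rather than copying this verbatim:

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
```

Commenting out the first pattern and keeping only the -security line makes it even more conservative: security fixes only.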
Unattended updates are a recipe for trouble. I'd never enable that.
I have no public services apart from 2 OpenVPN servers. To access everything else I connect to one of the OpenVPNs and use the services through the VPN routings.
The VPN can only be accessed if you possess a cert and key. I could even implement 2fa but for now SSL auth works securely enough.
I run unattended-upgrades on all the Debian/Ubuntu deployments I manage. One of the deployments even has automatic reboots enabled. I still do major upgrades by hand/terraform, but the process itself works flawlessly in my experience.
VLANs. all VLANs are controlled by my Fortigate FWF-61E (soon to be replaced by a FG-91G). the VLANs have strict access permissions on a per-device basis on what they can and cannot access.
CORE network where the NAS live
only specific devices can access this VLAN, and most only have access to the SMB ports for data access. even fewer devices have access to the NAS management ports
this network has restrictions on how it accesses the internet
I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, deep SSL inspection, and intrusion protection activities
everything is logged, any and all incoming and outgoing connections both to/from the internet but also any LAN based local communications.
Guest wifi
can ONLY access the internet
has very restrictive web and DNS filtering
I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, basic SSL inspection, and intrusion protection activities
APC Network Management Cards
can ONLY access my SMTP2GO email client so it can send email notifications
it does have some access to the CORE network (NTP, SYSLOG, SNMP)
very select few devices can access the management ports of these cards
I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, basic SSL inspection, and intrusion protection activities
Ethernet Switch / WIFI-AP management
very select few devices can access the management ports of the switches
ZERO internet access allowed
ROKUs
restrictive web and DNS filtering to prevent ads and tracking. Love seeing the space where ads SHOULD be and seeing a blank box.
can access ONLY the IP of my PLEX server on the CORE network, on ONLY the PLEX port for the services PLEX requires.
IoT devices
Internet access ONLY except for a few devices like my IoTaWatt that needs CORE network access to my NAS on ONLY the port required for InfluxDB logging.
Wife's computer
because of HIPAA requirements from her job, i have ZERO logging and no SSL inspection, but do have some web and DNS filtering.
print server
zero internet access, and only the machines that need to print can access.
as already indicated i have a fortigate router which has next generation firewall abilities to protect my network
while i do not have automatic updates i am notified when updates are available for my router, my NAS, the switches, and APC network cards. i always like to look at the release notes and ensure there are no known issues that can negatively impact my operations. I do have most of my docker containers auto-update using watchtower.
i keep SSH disabled and only enable when i ACTUALLY need it, and when i do, i use certificate based authentication
i have disabled the default admin account on ALL devices and made custom admin/root users but also have "normal" users and use those normal users for everything UNLESS i need to perform some kind of activity that requires root/admin rights.
on all devices that have their own internal firewall, i have enabled it to only allow access from VLAN subnets that i allow, and go even further by restricting which IPs on those VLANS can access the device
changing default ports is fairly useless in my opinion as once someone is on your network it is trivial to perform a port scan and find the new ports.
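To illustrate how trivial that is: a plain TCP connect scan finds a moved service in seconds, no special tooling needed. A hedged Python sketch of the idea (scan_ports is an illustrative helper name, not any real library's API — nmap does the same thing, only faster):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# e.g. scan_ports("192.168.1.10", range(1, 65536)) walks every port;
# a relocated SSH daemon shows up no matter where you hid it.
```

This is exactly why moving ports only cuts log noise from lazy internet bots; anyone already inside the network is not slowed down.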
all windows based endpoint machines
have strict endpoint control using fortigate's fortiguard software with EMS server. this allows me to enforce that machines meet minimum specifications.
i use group policy to enforce restrictive user environments to prevent installation of programs, making system changes, accessing the C: drive etc as this prevents a decent amount of malware from executing
antivirus must be enabled and active or the endpoint becomes quarantined.
if the system has unusual behavior it is automatically quarantined and i am notified to take a look
even though the fortigate router blocks all ads and trackers i also use uBlock Origin to prevent ads and trackers from running in the browser, as ads are now one of the most common points of entry for malware
i use ESET antivirus, which also ties into the fortiguard endpoint protection to ensure everything on the machines is OK
for all phones/tablets i have Adguard installed which blocks all ads and malicious web sites and tracking at the phones level
this is not even all of it.
the big take away is i try to layer things. the endpoint devices are most important to protect and monitor as those are the foot hold something needs to then move through the network.
i then use network level protections to secure the remaining portions of the network from other portions of the network.
Anything that has internet access like your IoT can be C&C utilizing stateful connections. An outbound socket is built, and reflected traffic can come back in. Your IoT devices especially should not be exposed to the internet. They can't even have an antivirus agent installed on them.
They can't even have an antivirus agent installed on them.
That's actually no longer true... kinda. You can't install AV on them, but there are security companies filling the niche of embedded IoT security. Now, you won't see this in your average consumer device, but on the commercial market there is a growing demand for some way to secure an embedded device from malicious software/firmware modifications.
1.) The fortigate has a known list of botnet command and control servers that are blocked
2.) I only allow them to access their home server domain names for the only purpose of allowing for firmware updates. They are not capable of accessing any other domains or IPs
Replace Fortinet with pfSense (+Suricata/Snort) if you want non-proprietary. (I have a Fortinet firewall and I can't bring myself to pay for their packages.) One thing I'd recommend, as I host a lot of stuff, is DNS proxying through Cloudflare, so the services I'm hosting don't point at my origin IP.
None of my services are available outside my house without first logging into the fortigate SSL VPN. That is the only open port I have.
The SSL VPN uses a loopback interface so only IPs from the US can access it. I have strong auto-block enabled, and I add the IPs of systems that try brute-forcing the box so they get blocked.
I did forget to mention that I already use Cloudflare for the exact reason you mentioned, so my home IP is not exposed.
I also have a domain name with valid wildcard certificate. The domain is used to access the SSL VPN and I also then use the cert within my entire homelab so I have everything encrypted
I was not a fan of pfSense; the Fortigate has more of the security features I wanted.
I use practical security measures that match my level of exposure and don't severely limit my convenience.
If your lab isn't exposed directly to the internet, at the very least update your servers from time to time and use a strong root password (admin users as well). That's more than enough.
If your lab is exposed, the same applies but update more often. Use SSH keys.
Don't go overboard - the majority of security incidents are from lack of basic security
UDM’s regular built in threat filtering, good firewall rules, updated services, and not opening up unnecessarily to the internet. And be vigilant but don’t worry too much about it. That’s it.
My security is basically if they get past an updated opnsense firewall I could be highly inconvenienced, but everything irreplaceable is backed up in the cloud and offline in my basement.
SSH shouldn’t be internet accessible
Changing an SSH port won’t stop someone more than 15 seconds.
Disabling ping is security through obscurity at best.
If someone or something malicious gets a shell account on my systems, then it at least stops them doing anything system-wide. And yes, if a script is going to request admin rights to do something, it'll stop right at the sudo prompt. Passwordless, it could do stuff without you even being aware of it.
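That difference comes down to a single sudoers line. A sketch (edit via visudo; "alice" is a placeholder user):

```
# /etc/sudoers excerpt — first form prompts for a password before every
# privileged action, so a script hijacking your session stalls at the prompt:
alice ALL=(ALL:ALL) ALL

# NOPASSWD form — anything running as your user can escalate silently:
# alice ALL=(ALL:ALL) NOPASSWD: ALL
```

NOPASSWD is convenient for automation, but it means a compromised user session is effectively a compromised root session.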
Don't expose unnecessary things to the internet, keep any client PCs patched, use some sort of malware protection ... and that's all you need to do.
All these VLANs are such are just overkill unless you're actively exposing things to the internet. They wind up breaking really useful stuff, especially stuff that relies on multicast.
Besides, that Chinese IoT device can't get hacked if it's not open to the 'net in the first place.
If it's a Debian system, "Create user with sudo privileges" and "Disable root login" can be done during initial setup. Just leave the root password blank and it'll disable the root user and grant sudo permission to the regular user you create.
Create a separate management VLAN and use it for all your infra (web UIs of all your networking hardware, Proxmox, SSH for servers, etc).
For unattended upgrades, ensure the auto-updaters are properly configured so they're used ONLY for bug and security fixes, not for minor or major releases! Debian unattended-upgrades has good settings out of the box, but you may want to add any custom repos you're using. Make sure you have an email relay server configured in the Exim config, as it uses apt-listchanges to email the changelogs to you.
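For the email piece, the relevant directives look like the following — the names are from unattended-upgrades itself, but verify them against the version you're running (older releases used MailOnlyOnError instead of MailReport):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Mail "admin@example.com";   // placeholder address
Unattended-Upgrade::MailReport "on-change";     // mail only when something happened
```

With a working local relay (Exim, Postfix, or an SMTP forwarder), you get a morning email whenever packages actually changed.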
But above all, press the power button to turn it off and then never turn it on again. 100% unhackable.
Anything that faces the internet I have on a separate vlan. Each system on that vlan is treated as if it was facing the internet directly, that way if one of them gets compromised the hacker will not get far trying to get into any other machines.
Rest of my network is a little more tame just for ease of access since it's only me on here.
Although at some point I do want to revisit my security protocol even locally, just in case. Hitting some kind of drive by trojan script or something within the browser is always a possibility, it could work in reverse where it connects to an external server and then accesses the rest of the network that way. I'm not aware of such trojans but I'm sure it's possible.
I do block all outbound ports except for base internet ports but a properly written malicious script would probably take that into account and use a common port like 443.
At some point I might set up a honeypot. Just need to name the VM "cryptowallet" or something like that and it would be a very fast target. If access to it is detected it would alert me and shut off the internet.
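A honeypot like that doesn't need to be fancy. A hedged Python sketch of the core idea — listen on a tempting port, log every connection, and hand the source IPs to whatever alerting or kill-switch you like. The function name and parameters are illustrative, not any particular tool:

```python
import socket
import datetime

def honeypot(bind_addr="0.0.0.0", port=2222, max_hits=1, log=print):
    """Listen on a decoy port; log each connection attempt.
    Returns the source IPs seen once max_hits attempts have arrived."""
    hits = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen(5)
        while len(hits) < max_hits:
            conn, (src_ip, src_port) = srv.accept()
            conn.close()  # no banner, no service — just record the touch
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            log(f"{stamp} honeypot hit from {src_ip}:{src_port}")
            hits.append(src_ip)
    return hits
```

Nothing legitimate should ever connect to it, so any hit at all is a high-signal alert — that's the whole appeal.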
Opnsense firewall at perimeter...and that's about it. Chances of anything getting in with no exposed ports is pretty slim so I don't really bother with anything more.
For SSH exposed servers/VPS I do change the port though. Cut down log noise & maybe dodge the odd portscanner or two
You have a good list to start with. Consider adding sshguard or fail2ban in the short term and crowdsec in the long term. Also use lynis on Unix systems and PingCastle on AD systems and see what suggestions those make. Just a few suggestions off the top of my head.
My homelab is in my garage. The storage array is the only thing I care about not losing, so I use ZFS encryption with Clevis + Tang: the array has to be on the home network and able to contact the Tang server to get its decryption keys.
from the internet side, I lock down ssh or administrative stuff to local network, and specific IPs, like work. inside my network, everything has a password to access, no defaults. vlans for specific use servers, etc.
Automatic updates and strong passwords. I know that automatic update can break a system but I’ve never had it break anything super critical in my home before that can’t be fixed with 10 minutes of effort. I can think of three things that have broken and required fixing in the last 5 years of auto updating software. I’d much rather have a broke piece of software than a security breach. To those that manually update, how fast after the patch notice are you patching? One day, two days, one week, monthly? What if you are sick or on vacation? I can guarantee mine updates within 24 hours every time.
If your homelab local only - well all of these are unnecessary if you're the only one who uses it. If you want to expose homelab to internet - you can pretty much use VPN to connect to your homelab without needing to expose whole homelab. Just a port to connect to VPN.
Inter-VLAN ACLs on core switch. This is a stateless firewall. Some VLANs with certain device types have inbound and outbound. Trusted devices only have inbound.
SPAN to Security Onion for all internal traffic.
SNMPv3 monitoring on all devices.
MAC sticky on all camera ports because the cabling extends outside of the physical structure of the house. I am going to implement 802.1X at some point.
VRFs for sensitive infrastructure to prevent outbound routing completely.
A VRF for devices to be forced through an external VPN (Mullvad). Used for devices that do not support a VPN agent.
No antivirus. All antivirus is a botnet.
All server infrastructure is Devuan using OpenRC instead of systemd.
Gaming PC is Artix.
DNS blackhole.
Public DNS is a Swiss no-logging provider which I use DoT to send my queries to.
LibreWolf or Brave Browser on everything.
Only hole into the network is a 4096 bit encrypted Wireguard instance operating in a container using an uncommon port. I wrote a custom script that can reach into the container and pull from the API in order to show active sessions, GeoIP, browser fingerprints, length of time the socket has been open, etc.
I use geofencing for inbound connections to the Wireguard instance. I only allow my immediate area cellular ISPs IANA address spaces to touch my network. Same goes for the geographic area surrounding my parents house.
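That kind of geofence check is easy to sketch with the stdlib ipaddress module. The CIDR blocks below are documentation-reserved placeholders, not real ISP ranges — in practice you'd pull the actual allocations from IANA/RIR data:

```python
import ipaddress

# Hypothetical allow-list: the address blocks you expect connections from.
ALLOWED_BLOCKS = [
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder: local cellular ISP
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder: parents' area ISP
]

def is_allowed(src_ip: str) -> bool:
    """True if src_ip falls inside any allowed block."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in block for block in ALLOWED_BLOCKS)
```

A firewall does this at line rate with the same logic; the Python version is just the decision rule made explicit.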
Unattended updates using custom scripting for my servers, including rebuilding the Wireguard container every single night, updating the server, and I also fire Nessus at it every night. If in the morning there is a CVE of note on that server, the NAT rule allowing traffic to the VPN is disabled at the perimeter until a sufficient patch is released.
I run STIGs on everything, within reason and where infrastructure allows, in my suite.
Is there really any security benefit to not using default ports? Especially if the service is not open externally? I cannot find any official documentation that states you should be doing that.
Only expose applications to the Internet if you have a good need to. Otherwise, use a VPN to access your home network and get to your applications that way.
If you are exposing them to the internet, take precautions. Use a reverse proxy. Use 2FA if the app supports it. Always use good, long passwords. Login as a limited user whenever possible, and disable admin users for services whenever possible. Consider an alternative solution for authentication, like Authentik. Consider using Fail2ban or Crowdsec to help mitigate the risks of brute force attacks or attacks by known bad actors. Consider the use of Cloudflare tunnels (there are plusses and minuses) to help mitigate the risk of DDOS attacks or to implement other security enhancements that can sit in front of the service.
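As a concrete starting point for Fail2ban, a hedged jail.local excerpt — the sshd jail ships with Fail2ban, but the timings here are arbitrary examples, not recommendations:

```ini
# /etc/fail2ban/jail.local (sketch)
[DEFAULT]
findtime = 10m      ; window in which failures are counted
maxretry = 5        ; failures before a ban
bantime  = 1h       ; how long the ban lasts

[sshd]
enabled = true
```

Additional jails exist for common reverse proxies and web apps (e.g. nginx auth failures); enable only the ones matching services you actually expose.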
What might be a good reason for exposing an application to the Internet? Perhaps you want to make it available to multiple people who you don't expect to all install VPN clients. Perhaps you want to use it from devices where you can't install one yourself, like a work desktop. This is why my Nextcloud and Calibre Web installs, plus an instance of Immich I'm test-driving, are reachable online.
But if the application only needs to be accessed by you, with devices you control, use a VPN. There are a number of ways to do this. I run a Wireguard server directly on my router, and it only took a few clicks to enable and configure in tandem with the router company's DDNS service. Tailscale makes VPN setup very easy with minimal setup as well. My NAS administration has no reason to be accessible over the internet. Neither does my Portainer instance. Or any device on my network I might want to SSH into. For all of that, I connect with the VPN first, and then connect to the service.
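For anyone rolling WireGuard by hand rather than through a router's UI, a client config sketch looks like the following — keys, addresses, and the endpoint hostname are all placeholders:

```ini
# /etc/wireguard/wg0.conf on the client device (sketch)
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24
DNS = 10.8.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.1.0/24     ; route only the home LAN through the tunnel
PersistentKeepalive = 25        ; keeps NAT mappings alive for roaming clients
```

Setting AllowedIPs to just the home subnet (split tunnel) means only homelab traffic traverses the VPN; 0.0.0.0/0 would send everything through it instead.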
Enabling unattended updates -> Hell no. Regular Patchdays
Enable only ssh login with key -> yes
Create user with sudo privileges -> yes
Disable root login -> no
Enable ufw with necessary ports -> Basic iptables, but not on all hosts. But fail2ban
Disable ping -> nope
Change ssh default port 22 to something else. -> nope
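The SSH-related items in that list map to a handful of sshd_config lines. A sketch — the option names are standard OpenSSH, though KbdInteractiveAuthentication is the newer spelling of ChallengeResponseAuthentication on older releases:

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no                 # disable root login
PubkeyAuthentication yes           # key-based login
PasswordAuthentication no          # no password logins
KbdInteractiveAuthentication no    # no keyboard-interactive fallback
#Port 22                           # leave it, or move it purely for log noise
```

Reload with `systemctl reload sshd` after testing the config in a second session, so a typo doesn't lock you out.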
Honestly I just use a good firewall and forward_auth/authelia in caddy (so authentication happens before any apps) and it works well.
I also don't expose SSH to the public internet anymore (more laziness than anything, have it semi-exposed in yggdrasil and wireguard) (mostly because the SSH logs get annoying for journalctl -f)