I'd buy the highest memory GPU I could get my hands on and slap it in my computer. I'd be playing with AI because it's probably going to replace us all in the not too distant future.
People are probably going to be like "wELl Ai HaLOOOsiNaTes or GetS ThiNgs WRonG". Yep, and so do people. We also had vacuum tubes and literal bugs before we had transistors and metaphorical bugs. This isn't a steady march to computers everywhere. This is a sprint to see who replaces all thinking work with AI agents first. The controller of the most successful agents will own the labor force.
So, either learn to build and repair the looms or become a luddite. Focus your lab money on AI.
UDM-PRO, USW-Aggregation, USW-Enterprise-24-POE, U6-LR… build a server with an i5, 32GB RAM, and an NVMe boot drive, then some RAID drives… I took out a loan in this scenario, as $1,000 wouldn’t cover my entire rack getting blown up.
Depends on the requirements. Is the purpose to learn virtualization management? Linux sysadmin stuff? Virtual networking + firewalls? For my purposes it’s all of the above and more.
Having said that, I have not had an ounce of trouble out of my Intel NUC 12 Pro NUC12WSHv5. So for $1000 I’d start with that and add NVMe storage and as much RAM as the budget allows. Running ESXi 8.
All used: 2019ish Intel NUC i7, 32-64GB RAM, run ESXi 7, 4 Bay QNAP or Synology with a Celeron, 8TB spinners, TP-Link ER605, an Omada POE switch, and an Omada AP.
You end up with a great setup for VMs, a reliable Plex server using the NAS CPU, multi-WAN, rock-solid VPN, and a UniFi/Meraki-like experience, and you don’t notice it on the electric bill, your ears, the shelf, or the room temperature.
This doesn’t differ at all from my existing setup. My only regret was not starting with 64GB of RAM on the NUC instead of the 32GB I started with.
I'd just get a nice AMD board with IPMI and drop in a good Ryzen CPU. Any Linux will do, be it Debian, a RHEL-based distro, or even Proxmox, plus tons of drives and a few NVMe RAIDs. Pretty much that.
Wish I had skipped the Frankenstein and mini PC steps.
Here are two reasons enterprise servers are the way to go:
Remote management is awesome. Remote KVM, remote serial terminal, mounting ISOs remotely. If your homelab is in a not-so-accessible place (e.g. cupboard or garage), this saves so much frustration.
High quality rack rails. You're more likely to be tinkering around the back of your server than a company that throws it in a data centre. It's almost like rack rails were built for homelabs.
I wouldn't worry too much about noise. $1000 will easily get you an R730 or T630.
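That remote management is usually exposed over IPMI (iDRAC on Dells, iLO on HPs). A minimal sketch using ipmitool; the BMC address and credentials here are made-up placeholders:

```shell
# Hypothetical BMC address and credentials -- substitute your own.
BMC=192.168.1.50

# Check and control power remotely
ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' power status
ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' power on

# Attach to the serial console over the network (Serial-over-LAN)
ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' sol activate

# Read temperature sensors, handy for a server stuffed in a cupboard
ipmitool -I lanplus -H "$BMC" -U admin -P 'secret' sdr type temperature
```

These commands need a reachable BMC, so treat this as a procedure sketch rather than something to paste blindly.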
I have a rack full of R710s that barely get used anymore because energy is so freaking expensive. I’d either do everything in the cloud or use lots of low powered machines at home.
I would do pretty much what I do now with two mini PCs and my desktop PC running background services in a three node cluster. I change my mind too often though and just did a bit of a rebuild over the holiday, so by next weekend I may have a completely different goal.
I have considered replacing the desktop with a laptop for more portability.
I would also not mind getting a 2.5 Gbps switch. I have all 2.5 Gbps devices on the network except the switch which is a little silly.
I'd do almost what I have now: a compact (ITX/mATX) board with a C612 chipset and an E5-2600 v3/v4, maxed out with memory. SAS card/NVMe/10G if you want/need it. Silent and efficient for 24/7.
I would buy a second-hand workstation with all the PCIe slots I could get. They are bargains, and you can pull or upgrade CPUs as needed. Need more RAM? Put the second CPU in. Don't need it? Pull it out.
Dell PowerEdge budget server. An R720 can have good specs for cheap on eBay. Get a Ubiquiti switch for VLANs. Firewall brand of your choice; I did a TZ400W. You should have some money left over to buy an endpoint as well. Then install VMware and build out a VM environment of your choice. I chose Windows just to continue learning the systems I administer.
Had to do this when I moved countries. Went from multiple HP Microservers down to a 2014 Mac Mini that handles TimeMachine backups and my photos and a Lenovo M93p that's been upgraded as far as it can go with a few terabytes of external storage. Potent enough to run the odd VM when I need to test something, and comfortably runs Docker for HomeBridge, Phoscon, and file shares.
Dedicated router hardware with your OS of choice, 2 HP desktop minis (or equivalent) for virtualisation, and some sort of hard drive setup for a NAS that you can scale as required.
A couple of gen9 Proliant servers. They're cheap, easy to source, plenty powerful for a homelab, have surprisingly good power management, and they're much quieter than previous generations (because of the power management). If you go with LFF drives, you can find surplus ones which have plenty of room for homelab stuff. SAS drives are so cheap, I've bought enough extra drives to replace any which fail.
So total cost for one of these servers with 128GB memory and four 8TB (24TB usable with RAID 5) drives would be $463.48. You could spin up two of them for less than your $1,000 budget and be able to do a BUNCH of cool stuff with them. Or you could just pack one with like 512GB memory and do everything on one server with virtual machines.
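The "24TB usable" figure follows the usual RAID 5 rule: one drive's worth of capacity goes to parity. A quick sanity check in shell, using the drive counts from above:

```shell
# RAID 5 usable capacity = (number of drives - 1) * drive size
drives=4
size_tb=8
echo "$(( (drives - 1) * size_tb )) TB usable"   # prints "24 TB usable"
```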
On my gen 9 DL380s with 12 4TB drives, I'm getting ridiculous disk speeds:
[root@neuromancer vms]# dd if=/dev/zero of=bigfile bs=16M count=1024 oflag=direct status=progress
16475226112 bytes (16 GB, 15 GiB) copied, 10 s, 1.6 GB/s
1024+0 records in
1024+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 10.3636 s, 1.7 GB/s
So over a gig and a half per second direct I/O writes. I spin up VMs on these servers in literally minutes, and I've got enough memory to have dozens of virtual machines. I have RHEL, Fedora, and Windows machines (my wife is a Microsoft sysadmin, she tests stuff on those).
The downside is that even with good power management, they do draw a fair amount of power and generate a fair amount of heat. I have three of these in my home office, and during the summer, it kept my office slightly warmer than I like.
For the OS, I use the free developer edition of RHEL - those skills are very marketable. https://developers.redhat.com/. I use RHEL for my VMs so I can play with stuff like NFS services, the automounter, user management, even stuff like OpenShift cluster members as VMs. I've learned a lot using my homelab, and it's helped my career a lot.
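As one concrete example of the NFS-plus-automounter practice mentioned above, here is a minimal sketch; the export path, subnet, and server name are all made up for illustration:

```shell
# --- On the NFS server (hypothetical path and subnet) ---
echo '/srv/share 192.168.1.0/24(rw,sync)' >> /etc/exports
exportfs -ra          # re-export everything in /etc/exports

# --- On a client: let autofs mount it on demand under /mnt/auto ---
echo '/mnt/auto /etc/auto.share' >> /etc/auto.master
echo 'share -rw nfsserver:/srv/share' > /etc/auto.share
systemctl restart autofs
# First access to /mnt/auto/share triggers the mount automatically.
```

This is a config fragment for lab practice, not a hardened setup; real deployments would tighten the export options.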
Buy a new N100 mini PC with 32GB RAM and an M.2 drive, a Sabrent 5-bay USB 3 DAS, and a couple of 10TB drives. An easy, low-power, single-box home server with room to expand. You could also add a good switch and a box of Cat8 (cable always > WiFi).
My suggestion is to invest in networking equipment, but it won't cost you $1000. Maybe a switch and a couple of mini PCs; bought used, it's maybe $200. If you want to get into NAS and streaming, then you're looking at spending some money, because reliable, preferably fast storage is a must and expensive.
I got an enterprise-class 19" short-depth chassis with a Supermicro motherboard and a Xeon D (they are soldered to the motherboard) with 8 cores at 2.something GHz, multithreading, and so on. Bought a 128GB ECC RAM kit and a pair of Intel enterprise 1TB SSDs. Installed Proxmox with mirrored disks, and it's now running 8 containers and 3 VMs. Really low power consumption, just a bit loud, but perfect for the garage. Placed inside an IKEA Lack table and mounted above a door. Avoid buying consumer-class SSDs, as they're only going to last you a few months in a configuration like this (that comes from experience: 20% wear in 6 months with the initial Kingston drives I bought).
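A mirrored pair like that can be set up from the Proxmox shell with ZFS directly. A sketch, with placeholder device paths (use stable /dev/disk/by-id/ names in practice):

```shell
# Create a two-way mirror; both SSDs hold a full copy of the data.
# ashift=12 assumes 4K physical sectors, the common case for SSDs.
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# Verify both sides of the mirror show ONLINE
zpool status tank
```

Losing either SSD keeps the pool running degraded until you `zpool replace` the failed device.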
Honest question... why do people with the knowledge of how to build one buy a NAS like a Synology? Aren't you just paying double or triple for the same result you could get by making the NAS from scratch?
I bought a QNAP a long time ago, never again... it was like $3k with disks for 6 x 6TB drives about 10 years ago. They constantly get hacked; a bunch of their NASes were getting crypto-lockered because some dev hard-coded an admin password, IIRC. Their software does a bunch of shit I don't need, and it runs like shit now with just me using it. I'm gonna reset it soon once I get my data off.
My NAS now is an R730xd with 12 x 12TB drives in it running TrueNAS. Granted, my electric bill is a car payment with all my stuff, but it only cost me like $1,500 for disks, and the server was super cheap and has a 10-gig connection.
Granted, some of it is cool if you are still learning, like one click and you can have a MySQL/PHP server on there, etc. I thought about getting a Synology, but all the bells and whistles it offers through apps are things I can just run on a real server.
I'm still a beginner at it, but I would say not to over-prioritize cores. RAM will be your bottleneck first. I say this as someone with 36 physical cores and like 90% of them idle.
At $dayjob, we use 4 GB per core for application workloads and it works well. Databases get 16 GB per core. Memcached gets 32 GB per core. In development we use 16 GB per core because there isn't heavy load.
My own homelab is built around a bunch of quad cores with 32 GB of memory. The memory has come in useful. Having 64 GB per quad core would be even better, but was not possible when I built the systems many years ago (I bought super cheap $40 motherboards with only two slots). For my initial purpose getting 2x 1 GB sticks would have been enough, but I'm glad I bought more as I use all the memory now.
If you don't know what you want to do, I would get 8 GB of memory per core at minimum, and in a lightly loaded homelab, 16 GB per core is totally reasonable. I would only get less memory if you know you're going to hit the CPUs hard with particular tasks that share memory or use little memory, and even then I would get minimum 4 GB per core.
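Those rules of thumb turn into a quick sizing check; the core count below is a hypothetical example host, sized at the 8 GB-per-core minimum suggested above:

```shell
# GB-per-core sizing rule from the comment above.
cores=4          # hypothetical host core count
gb_per_core=8    # suggested minimum when the workload is unknown
echo "$(( cores * gb_per_core )) GB RAM minimum"   # prints "32 GB RAM minimum"
```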
N5105 NAS board, 32-64GB of RAM, 1x 500GB NVMe SSD, some sort of case, and a bunch of HDDs. I like the 8TB IronWolfs; they're cheap enough, but large enough.
Maybe the N6005 if you can find it. It's a great server and handles most self-hosted stuff. I run Ubuntu Server on it; it's just the cleanest and easiest to use, no GUI needed.
What's nice is it's super low power, and cheap. So you can eventually migrate to a more powerful Proxmox server on mini PCs, like the NAB6, then just turn the N5105 into a TrueNAS server, and even duplicate it for backups, and triplicate (if you are really feeling it) for redundancy. Getting a 2nd and 3rd Proxmox mini PC enables HA on VMs. So yeah, that's my goal. ATM I've got to migrate to Proxmox.
Bought a Dell R630 from eBay for a decent price, but I wish I'd spent more on larger-capacity hard drives. I bought a bunch of old 600GB HDDs running RAID 10, and right now I'm afraid to replace them.
At least 2 mini desktops with as much RAM and SSD as I can get in them. Running Proxmox and TrueNAS, and then setting up my Jellyfin, Home Assistant, and the rest will be a playground. I am a simple man.
Can you make ZFS pools across devices with Proxmox? Otherwise, I don't know what you'd do for storage redundancy or RAID unless you run something like Longhorn or Ceph across the cluster; all those machines have a single drive.
Everyone here is recommending tiny labs, but what if you need lots of TBs? Is there a solution then? I have a MicroServer Gen8 (which is plenty powerful enough) but need way more space, and was going to buy something that can fit 10+ hard drives...