Hello everyone. I'm going to build a new PC soon and I'm trying to maximize its reliability as much as I can. I'm using Debian Bookworm. I have a 1TB M.2 SSD to boot from and a 4TB SATA SSD for storage. My goal is for the computer to last at least 10 years. It's for personal use and work: playing games, making games, programming, drawing, 3D modelling, etc.
I've been reading about filesystems, and it seems like the best ones for preserving data if anything gets lost or corrupted, or the machine goes through a power outage, are BTRFS and ZFS. However, I've also read they have stability issues, unlike Ext4. It seems like a tradeoff then?
I've read that most of BTRFS's stability issues come from trying to do RAID5/6 on it, which I'll never do. Is everything else good enough? ZFS's stability issues seem to mostly come from it having out-of-tree kernel modules, but how much of a problem is this in real-life use?
So far I've been thinking of using BTRFS for the boot drive and ZFS for the storage drive. But maybe it's better to use BTRFS for both? I'll of course keep backups, but I'd still like to minimize how often I have to deal with things breaking.
If you're not intending to use complicated RAID setups, just go with btrfs. There is no reason to bother with zfs given your specs and needs.
Do not go with ext4. Unlike both btrfs and zfs, ext4 does not do data checksumming, meaning it cannot detect bit rot (and obviously cannot fix it either). You'll also be missing out on other modern features, like compression and copy-on-write and all the benefits that entails. Once you start using snapshots for incremental backups using btrfs send (or its zfs equivalent), you'll never want to go back. Recommended script: snap-sync.
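For reference, here's roughly what that workflow looks like by hand (subvolume paths and mount points are just examples; snap-sync automates more or less this):

    # take a read-only snapshot of the subvolume you want to back up
    btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-01-01

    # first run: send the whole snapshot to the backup drive
    btrfs send /home/.snapshots/home-2024-01-01 | btrfs receive /mnt/backup

    # later runs: send only the changes since the previous snapshot
    btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-02-01
    btrfs send -p /home/.snapshots/home-2024-01-01 \
        /home/.snapshots/home-2024-02-01 | btrfs receive /mnt/backup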
ext4 + mdadm + dm-integrity would solve the bit rot problem. But you'd end up with a lot of parts bolted together and still miss out on the features that btrfs/zfs provide.
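If anyone's curious, that stack would look something like this (device names are examples; integritysetup ships with cryptsetup):

    # add a dm-integrity layer on each disk so silent corruption becomes a read error
    integritysetup format /dev/sda && integritysetup open /dev/sda int-a
    integritysetup format /dev/sdb && integritysetup open /dev/sdb int-b

    # mirror the integrity-protected devices, then put ext4 on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/int-a /dev/mapper/int-b
    mkfs.ext4 /dev/md0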
Almost all data, aside from stuff like databases, benefits from filesystem-level compression, and almost every user benefits from having snapshots. Snapshots have saved my ass so many times, e.g. when I accidentally delete a file I shouldn't have, or when a program has overwritten a file it shouldn't have, or when Crusader Kings 3 corrupts my savegame.
As for bitrot, I frankly don't know if btrfs has an automatic mechanism for repairing rotted files from an external backup of the filesystem (created using btrfs send), but even if it doesn't, it'll tell you what has rotted so you can restore the files manually.
Systems like Synology's DSM have integrity scrubbing to fight bit rot. Run it once a quarter on a drive that size and you're protected. Plus, you should be backing up your data, even with RAID.
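On plain Linux you can schedule the same thing yourself; a rough sketch as a cron job (the mountpoint and pool name are just examples):

    # /etc/cron.d/scrub -- verify checksums every three months
    # btrfs variant:
    0 3 1 */3 *  root  btrfs scrub start -B /srv/storage
    # zfs variant:
    0 3 1 */3 *  root  zpool scrub tank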
BTRFS, last I read, had lots of complexity and weird problems. Hopefully, it has improved since then (about 3 years ago).
Btrfs can be a little complex and needs more user-friendly tooling for some of the advanced features to be useful to "laymen", but OP seems technical enough (the fact that he cares about what filesystem he's running in the first place is an indicator of this) that this should not be an issue.
As for "weird problems", the majority of those seems to come down to users using advanced features without RTFM, and users having underlying system issues that cause issues that btrfs catches early and refuses to mount the filesystem as RW, and the users then blame btrfs for the issue.
This for sure. As a general rule of thumb, I use XFS for RPM-based distros like Red Hat and SuSE, EXT4 for Debian-based.
I use ZFS if I need to do software RAID, and I avoid BTRFS like the plague. BTRFS requires a lot of hand-holding in the form of maintenance which is far from intuitive, and I expect better from a modern filesystem (especially when there are others that do the same job hassle-free). I have had FS-related issues on BTRFS systems more than on any other, purely because of issues with how it handles data and metadata.
In saying all that, if your data is valuable then ensure you do back it up and you won’t need to worry about failures so much.
ZFS is great but I wouldn't recommend it for single volume setups. I've never lost data with it but the parity has always been the saviour.
Never used BTRFS.
I avoid XFS for performance reasons, as most of my systems deal with many smaller files, which XFS isn't great for. But in the usage I've had with it, it's been great.
EXT4 is always my go-to for normal usage, unless I need to support older machines, in which case it's ext2/3.
OpenSUSE, both Leap and Tumbleweed, use btrfs by default. Do you switch those to xfs during installation?
I’ve had btrfs snapshots pull me out of the fire multiple times on my home machines, but I don’t fully trust any file system at all, so I rsync stuff to two local network destinations and an off-site location as well. Those, too, have come in handy.
I’m assuming you don’t want to tinker with things? I’m also assuming you do not have experience with things like ZFS. So….
Unless you're running multiple drives (or special options), zfs & btrfs aren't going to give you much. For instance, btrfs (unless it's set to DUP) isn't going to protect you from bitrot or other data corruption. Same goes for ZFS: it will throw an error when something doesn't match the checksum, but on a single disk it can't repair it.
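If you do want single-drive bitrot repair on btrfs, the DUP profile is just a couple of commands; a rough sketch (device and mountpoint are examples):

    # keep two copies of both data and metadata on one drive
    # (note: this halves usable capacity and doubles writes)
    mkfs.btrfs -m dup -d dup /dev/sdb

    # or convert an existing single-device filesystem in place
    btrfs balance start -dconvert=dup -mconvert=dup /srv/storage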
Your best option is to use either ext4 or xfs for your 4TB storage. If you're working with a lot of large files, xfs has some advantages, but overall you're not going to notice much of a difference for your uses.
For your boot SSD, btrfs has the advantage over ext4 and xfs, although so does f2fs. In practical use for what you're describing, it's not going to make that much of a difference.
Unless you have a specific reason to use something other than ext4, just stick with that. It's simple and just works. Make sure you're keeping backups (e.g. restic, borg, rsync, duplicity, etc.) and follow the 3-2-1 rule where possible and you'll be fine.
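If you go the backup-tool route, a minimal restic setup looks something like this (the repository path is just an example):

    # create a repository on the storage drive (or a NAS/cloud remote)
    restic -r /srv/storage/restic-repo init

    # back up your home directory; repeated runs are incremental
    restic -r /srv/storage/restic-repo backup /home/you

    # thin out old snapshots: keep 7 daily and 4 weekly
    restic -r /srv/storage/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune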
If it were me setting up that system I'd mirror the drives and use btrfs. Which is pretty much what I did on my PC. But that doubles the cost of storage.
The only place (at home) I use ZFS is on my NAS. I have Rocky 8 set up and all it does is handle storage. I use mirrored pairs on my important data and Z1 on everything else. But that's a topic for another post.
If you REALLY want some of the features of zfs or btrfs, e.g. snapshots, I'd lean on your backup software for that, but you can also use LVM to take snapshots in a similar fashion. See Stratis for another example too. However, that's beyond the scope of this post.
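For completeness, an LVM snapshot is basically a one-liner, assuming your data already sits on a logical volume (volume group and LV names are placeholders):

    # take a 10G copy-on-write snapshot of the "data" logical volume
    lvcreate --size 10G --snapshot --name data-snap /dev/vg0/data

    # merge it back later to roll the volume back
    # (the merge completes once the origin is next activated)
    lvconvert --merge /dev/vg0/data-snap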
Thanks for the help. Both of my drives are SSDs; the boot drive is M.2 and the storage is SATA. I've heard filesystems that support compression would be better for their health and lifespan, as they'd have to write less. But yes, no matter what, I will keep constant backups. Snapshots would be appreciated, but since I'll run Debian I don't think they're that necessary, especially if having them means a lot of problems to deal with in exchange.
Since both drives are SSDs there's nothing really stopping you from using BTRFS. You are correct that the features of BTRFS are better for the long-term health of your SSDs, and if you feel comfortable with it then you should 100% use it. That being said, with today's SSDs the lifespan-extending features of BTRFS, F2FS, etc. are going to be minimal at best. So don't stress too much over running it or not. Just use whatever you're most comfortable with.
A lot of these responses seem... dated. There's a reason TrueNAS and such use ZFS now.
I would recommend ZFS 100%. The copy-on-write (allowing you to recover almost anything), simple snapshots, direct disk encryption, and the ability to not only check the file system but tell you exactly which file has an issue if there is an error, make it an easy choice even if it's a one-disk system.
Personally I use date-times for my snapshot names, and delete old ones as time goes on. It's fabulous for backups.
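Something like this (pool/dataset names are placeholders):

    # snapshot named after the current date/time
    zfs snapshot tank/home@$(date +%Y-%m-%d-%H%M)

    # list them, and destroy ones you no longer need
    zfs list -t snapshot
    zfs destroy tank/home@2024-01-01-0300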
Well yeah, ZFS is absolutely fantastic for a NAS, but it's complete overkill for a desktop. That's why I recommend BTRFS for a desktop like this, and still recommend ZFS if you're building a NAS (mine also uses BTRFS, but that's because I don't need the features and would rather only deal with one FS).
How do you typically recover things on zfs vs btrfs? Also, is the out-of-tree kernel modules thing something you have to deal with or take into account?
The out-of-tree thing is annoying, but most distros have zfs support as modules already.
I've never had to worry about zfs recovery when it wasn't a RAID; it seems to be automatic. But you have zpool checkpoints, scrubs, snapshots, really a ton of ways to go back to a working state, and you can also try to recover bad files if you use the right techniques.
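For example (pool name is a placeholder):

    # record a pool-wide checkpoint before doing anything risky
    zpool checkpoint tank

    # verify every block against its checksum
    zpool scrub tank
    zpool status -v tank   # -v lists any files with unrecoverable errors

    # roll the entire pool back to the checkpoint if needed
    zpool export tank
    zpool import --rewind-to-checkpoint tank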
I recommend using whatever is the "least hands-on" option for your boot drive, a.k.a your distro default (ext4 for Debian). In my admittedly incompetent experience, the most likely cause for filesystem corruption is trying to mess with things, like resizing partitions. If you use your distro installer to set up your boot drive and then don't mess with it, I think you'll be fine with whatever the default is. You should still take backups through whatever medium(s) and format(s) make sense for your use case, as random mishaps are still a thing no matter what filesystem you use.
Are you planning on dualbooting Windows for games? I use https://github.com/maharmstone/btrfs (WinBtrfs) to mount a shared BTRFS drive that contains my Proton-based Steam library, in case I need to run one of those games on Windows for whatever reason. I've personally experienced BTRFS corruption a few times due to the aforementioned incompetence, but I try to avoid keeping anything important on my games drive to limit the fallout when that does occur. Additionally, if you're looking to keep non-game content on the storage drive (likely if you're doing 3D modelling work), this may not be as safe.
I don't plan on installing Windows at all. The only thing I'd do on my boot drive is have a separate home partition; I won't really do anything else though. Did the corruption you experienced happen just on its own, or was it something you did?
I'd go with your distro default, which I think is ext4 for most distros, and do proper backups/data management (which might include a NAS running zfs, so you get the best of both worlds).
Depending on your data, it might be small enough not to need a NAS. Things like code could go up on GitHub or GitLab. Games themselves can always be redownloaded, etc. If your data is small enough, cloud storage isn't too pricey.
One of the best things you can do for a PC is get a solid true sine wave battery backup that will let you weather electricity fluctuations, surges, and brownouts, and give you time to shut down properly during an outage.
Distro defaults are chosen for general-purpose use and stability. For OP's specific requirements, zfs, xfs, and btrfs are all definitely better. For the boot drive, I can understand going with the default since you just want it to be stable, but having some snapshotting and fault protection isn't a bad thing.
It's gonna be a hard decision to make. I know that because I read about Btrfs for about a whole week before deciding to switch to it. But I'm a happy Btrfs user now, about 8 months in, and I'll be honest with you: in my opinion, unless your application mainly involves small random writes (which will inevitably make Btrfs fragment a ton), it's most likely good for any situation. I don't know much about the other modern/advanced filesystems like ZFS or XFS to tell you anything about them, though.
Been using BTRFS for several years and have never once had any sort of issue. I just chose BTRFS at system setup and never think about it again. I like that when I copy a file it is INSTANT (that's copy-on-write reflinks at work), which makes my computer feel super fast, whereas EXT4 can take several minutes to copy large files. This is with similar use to what you describe. No RAID.
ext4 is the tried and true file system. I use it for reliability. Btrfs is nice, with a ton of modern features, but I have had some issues with it in the past, though they're pretty rare.
I use ext4 for my boot drive as that’s what Linux mint defaults to.
I do not do raids and use btrfs on my other drives.
You can turn on compression-on-write with btrfs, which may reduce the amount of data written to your drive and could help extend its lifespan.
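That's a single mount option; for example in /etc/fstab (the UUID and mountpoint are placeholders):

    # mount the btrfs storage drive with zstd compression
    UUID=xxxx-xxxx  /srv/storage  btrfs  compress=zstd:3,noatime  0  0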
But you shouldn’t expect the drives to last 10 years.
They might, but don’t expect it and have a backup of whatever is important. Ideally you should have a local backup and a cloud based backup or at least an offsite backup somewhere else.
Yeah I'll always do backups. When I have the money I probably will buy another drive and try to do RAID1 on the two, just to be sure. But I do want them to last as much as possible.
BTRFS - easy, fast, reliable, snapshots, compression, usable RAID, CoW, online resizing...
ZFS - hard to get into, reliable, snapshots, compression, state of the art RAID, CoW....
Everything else, particularly Ext4, should be avoided. Your life will be a lot easier once you discover snapshotting, and also how much more robust and reliable BTRFS and ZFS are. I got into BTRFS a few years ago in order to survive power losses, as I regularly had issues with that on Ext3 and Ext4. My experience with Ext4 disks was always: if something goes slightly wrong, your data is... poof, gone.
I've been using ext4/btrfs for a long time, but recently I decided to give xfs a try, and it feels like a pretty solid all-rounder FS.
I know it's a very old and very well-supported FS, developed by Silicon Graphics, and it has been getting constant improvements over time, with various performance improvements and checksumming. TBH, for my use cases anything would work, but BTRFS snapshots were killing my storage and I got bored with the maintenance tasks.
The Arch Wiki has amazing documentation for all filesystems, so it might be worth a look.
Ten years is a long time. In ten years, 4TB of storage will be less than a crappy thumb drive.
For resilient storage, I personally would get two HDDs for the price of one SSD, slap software RAID1 with ext4 on them, and forget about them until mdadm alerts.
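The whole setup is only a few commands (device names are examples):

    # create the mirror and put ext4 on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0

    # get those alerts: set MAILADDR in /etc/mdadm/mdadm.conf,
    # then run the monitor (most distros ship a service for this)
    mdadm --monitor --scan --daemonise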
This might be controversial here. But if reliability is your biggest concern, you really can't go wrong with:
A proper hardware RAID controller
You want something with patrol read, supercapacitor- or battery-backed cache/NVRAM, and a fast enough chipset/memory to keep up with the underlying drives.
LVM with snapshots
Ext4 or XFS
A basic UPS that you can monitor with NUT to safely shut down your system during an outage.
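On the UPS point, a minimal NUT config for a USB UPS might look roughly like this (the UPS name, user, and password are placeholders, and syntax details vary between NUT versions):

    # /etc/nut/ups.conf -- describe the UPS
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/nut/upsmon.conf -- shut down cleanly when the battery runs low
    MONITOR myups@localhost 1 monuser secret master
    SHUTDOWNCMD "/sbin/shutdown -h +0"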
I would probably stick with ext4 for boot and XFS for data. They are both super reliable, and both are usually close to tied for general-purpose performance on modern kernels.
That's what we do in enterprise land. Keep it simple. Use discrete hardware/software components that do one thing and do it well.
I had decade-old servers with similar setups that were installed with Ubuntu 8.04 and upgraded all the way through 18.04 with minimal issues (the GRUB2 migration being one of the bigger pains). Granted, they went through plenty of hard drives. But some even got increased capacity along the way (you just replace them one at a time and let the RAID resilver in-between).
Edit to add: The only gotcha you really have to worry about is properly aligning the filesystem to the underlying RAID geometry (if the RAID controller doesn't expose it to the OS for you). But that's more important with striping.
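For example, with a 64K chunk across 4 data disks (numbers are illustrative):

    # ext4: 4K blocks -> stride = 64K/4K = 16, stripe-width = 16*4 = 64
    mkfs.ext4 -E stride=16,stripe-width=64 /dev/md0

    # xfs: same geometry expressed as stripe unit / stripe width
    mkfs.xfs -d su=64k,sw=4 /dev/md0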
How many hardware RAID controllers have you had fail? I have had zero of 800 fail. And even if one did, the RAID metadata is stored on the last block of each drive. Pop in new card, select import, done.