But it's still not a backup strategy. You should always have a second partition on the server that you sync your data over to. For performance reasons this partition should be on the same RAID 0 array.
I prefer RAID -1, which is like RAID 0 except that you routinely yank one of the drives so that only the fittest of the bits survive, greatly improving the quality of your data!
Pro tip: Copying a Postgres database while live transactions are interacting with it frequently results in a corrupt backup.
Thankfully I test my Luanti backups.
Edit: I should clarify - my dangerous backup method was a naive file copy. I'm sure there's a correct way to do a live backup; I just haven't looked into it yet, since stopping my Luanti server for a backup is no big deal.
Choosing to not use something is not a good way to learn how to use it.
In this particular instance, the lesson is to either a) use the provided database tools (e.g. pg_dumpall) for a live backup, or b) bring the database cluster down before you back up the raw data folder.
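For the record, both options look roughly like this. This is a sketch assuming a typical systemd-based Linux install; the paths, the service name, and the `postgres` superuser are assumptions, adjust them to your setup.

```shell
# Option a: logical dump while the cluster stays up. pg_dumpall dumps each
# database within a single transaction snapshot, so concurrent writes
# don't corrupt the dump (unlike a naive file copy of the data directory).
pg_dumpall -U postgres | gzip > /var/backups/cluster-$(date +%F).sql.gz

# Option b: cold copy of the raw data directory. This is only safe once
# the cluster is fully stopped; the data directory path varies by distro.
systemctl stop postgresql
tar czf /var/backups/pgdata-$(date +%F).tar.gz /var/lib/postgresql
systemctl start postgresql
```

One caveat worth knowing: the option-a dump is consistent per database, not across databases dumped at different moments; for most single-application servers that distinction doesn't matter.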
RAID with parity is technically a backup, just a mostly ineffective one. It's a backup that lets you recover from exactly one scenario: single (or double) device hardware failure.
But I definitely understand the mantra “RAID is not a backup”. It’s not what most people think of when they say “backup”.
I have read so many of these bullshit self-help / self-affirmation posts that my brain refuses to read anything resembling them anymore. Took me like 3 reads until I noticed.
I recently changed my hosting provider and wanted to install the new server from the backups I'd been creating daily for the last four years. Well, it turned out the backup process had been stuck on a lock file since July 2021. And the email process that should have notified me was broken too. I was so happy I didn't find this out during an emergency, and this time I spent more effort testing the email notifications.
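A cheap guard against exactly this failure mode is a freshness check that runs independently of the backup job itself. A minimal sketch; the function name, the `*.sql.gz` pattern, and the two-day threshold are all hypothetical, swap in whatever your backups actually look like:

```shell
# check_backup_fresh DIR DAYS
# Returns 0 if DIR contains at least one *.sql.gz modified within the
# last DAYS days, 1 otherwise. Run it from cron/systemd on a schedule
# separate from the backup job, so a stuck backup can't silence the check.
check_backup_fresh() {
    dir=$1
    days=$2
    # find prints any dump newer than DAYS days; empty output means stale.
    latest=$(find "$dir" -name '*.sql.gz' -mtime -"$days" 2>/dev/null | head -n 1)
    if [ -z "$latest" ]; then
        echo "STALE: no backup newer than $days days in $dir" >&2
        return 1
    fi
    echo "OK: $latest"
}
```

Wire the non-zero exit into whatever alerting you already trust (and, per the lesson above, test that alert path too, not just the backup).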