I can't tell you how many channels have disappeared and been memory-holed. Especially since censorship went into overdrive around 2019.
Data hoarders can show you how the world was before all that happened.
Yes and no.
Yes if you have the resources to monitor and update. Companies have entire teams dedicated to this.
No if you don't have the resources/time to keep up with it regularly.
IMO, no need to take this risk when you have services like Tailscale available today.
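For a sense of how little setup that takes, a rough sketch (the IP and port below are made up):

```
# Install Tailscale on the server (see tailscale.com/download), then:
sudo tailscale up

# Find the server's tailnet address
tailscale ip -4

# From any other device on your tailnet, reach the service directly,
# with zero ports forwarded on your router (IP/port are hypothetical):
curl http://100.101.102.103:8080
```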
It’s not cheap to operate a business in Canada
Really depends on the gear. Some of it you can, some you can't.
Check IPMI and see if you can adjust it there. For mine, I use the IPMI plugin available for Unraid, so some OSes do have solutions, but it depends on your OS and hardware.
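If your board supports standard IPMI, ipmitool is worth a look from the CLI too. The raw bytes below are the commonly posted ones for Supermicro X10/X11 boards; they vary by vendor and board, so treat them as an example and verify against your vendor's docs first:

```
# Read the current fan sensors before changing anything
sudo ipmitool sensor | grep -i fan

# Supermicro example (board-specific, verify before running):
sudo ipmitool raw 0x30 0x45 0x01 0x01            # set fan mode to "Full"
sudo ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32  # zone 0 duty cycle ~50%
```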
You can usually replace fans, but you need to make sure you still have enough airflow for the equipment.
I did replace my server-grade fans with Noctua Industrials. They are still a bit loud for being in the same room, but quiet one room over, whereas before you could hear them in any room of the house.
My goal was quiet one room over.
Really depends on your use case.
Unraid is wonderful and easy to use, but there are really only two reasons to use it:

- The Unraid array fits your file storage strategy (few writes, mainly reads).
- You want an easy way to get into Docker and use the Unraid app store.
Other than that, you can probably find everything on Debian or Ubuntu. (I prefer Debian for services)
You can add one more:
Proxmox and then run a Debian VM for docker, for example, and compartmentalize other things you may want to run.
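A rough sketch of standing up that Debian VM from the Proxmox CLI (the VM ID, sizes, and ISO name are invented for the example; the web UI works just as well):

```
# Create a small Debian VM on Proxmox to act as the Docker host
qm create 200 --name docker-host --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --ide2 local:iso/debian-12-netinst.iso,media=cdrom \
  --boot 'order=scsi0;ide2'
qm start 200
```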
Also, download a mem test utility and run it overnight to test your hardware.
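For example, with memtester (a userland tool, so it can't touch memory the OS has reserved; booting MemTest86+ from USB gives fuller coverage; sizes and passes below are just examples):

```
sudo apt install memtester
# Test 8 GiB of RAM for 4 passes; leave it running overnight
sudo memtester 8G 4
```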
I started out self hosting everything... 20 years ago or so. Then I got swept up in the "cloud" movement and put so many things into the "cloud".
Today I'm reverting back to how I started, self hosting everything that I can.
Mainly privacy, but also because they keep changing and I don't want to have to worry about them.
I just didn't feel like it was 'my' data anymore.
I do this with ZFS, using a keyfile and a script that runs at boot to unlock/mount.
I put the keyfiles on a USB drive. (Make sure you have backups!) The USB drive is hidden; I won't go into details on how, since there are several ways to do it and you can get pretty creative.
If someone steals my server, they need to know where I hid my USB, or they won't be able to get to any of the encrypted datasets.
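Very roughly, it looks like this (the pool, dataset, and paths are made up for the example):

```
# One-time setup: generate a 32-byte key on the USB stick, create the dataset
dd if=/dev/urandom of=/mnt/usbkey/tank.key bs=32 count=1
zfs create -o encryption=aes-256-gcm -o keyformat=raw \
  -o keylocation=file:///mnt/usbkey/tank.key tank/private

# Boot script: mount the hidden USB stick, load the key, mount the dataset
mount /dev/disk/by-label/KEYS /mnt/usbkey
zfs load-key tank/private
zfs mount tank/private
umount /mnt/usbkey   # the key stays loaded in memory until export/reboot
```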
Can be safer. Can be worse.
A poorly configured self-hosted Vaultwarden can be a major security issue.
A properly configured one is arguably safer than hosting with a 3rd party. LastPass taught me that one.
If you configure it so it's not exposed to the web and only accessible through a VPN like Tailscale, it can be quite robust.
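One hedged way to do the "VPN only" part with Docker is to bind Vaultwarden to the server's Tailscale IP alone, so nothing on the LAN or public internet can even reach the port (the IP and path below are placeholders):

```
docker run -d --name vaultwarden \
  --restart unless-stopped \
  -v /srv/vaultwarden:/data \
  -p 100.101.102.103:8080:80 \
  vaultwarden/server:latest
```

One gotcha: the tailnet IP has to exist before the container starts, so make sure tailscaled comes up before Docker. Bitwarden clients also generally expect HTTPS, so you'd still put a reverse proxy with a cert in front of it, even on the tailnet.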
That sounds easy enough, but it creates a situation where I don't know which updates are important (security) and which are minor. So I have to read the release notes for each update and then decide whether I need it to patch a security vulnerability.
Whereas with the other method, I know the update is likely critical.
For some, those frequent updates are a plus; for me they're not. So use what works best for you!
But right now I couldn't use OPNsense even if I wanted to, as it's FIPS non-compliant because they're still using the deprecated, EOL OpenSSL 1.1.1, with no date set to move to v3.
Chuckle. Butthurt downvotes, but not one comment to dispute anything I said. Enjoy the deprecated OpenSSL without security updates.
No, I like pfSense because it has less frequent updates and is better documented.
Here is one of the better guides that helps you config much of what you are talking about:
https://nguvu.org/pfsense/pfsense-baseline-setup/
Plus, OPNsense gets most of its code from the work done by pfSense, and often has to wait on them to push the code. Just look at what happened with TLS 1.3.
and a big part of the reason is taxes and regulations. People with $$ don’t care, but everyone in the bottom 75% really takes a big hit compared to their income.
Self-hosted Git repository.
I set up Gitea on my server and use it to track version changes of all my scripts.
And I use a combination of the wiki and .md (readme) files for how-tos and any inventory I'm keeping, like IP addresses, CPU assignments, etc.
But mainly it's all in .md files formatted with Markdown.
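The workflow itself is just standard Git pointed at your own box; a sketch with a made-up URL and repo name:

```
cd ~/scripts
git init && git add . && git commit -m "initial import"
# assumes you've already created an empty 'scripts' repo in the Gitea web UI
git remote add origin https://gitea.home.lan/me/scripts.git
git push -u origin main
```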
I do this at the filesystem level, not the file level, using ZFS.
Unless the container has a database, I just use ZFS snapshots. If it has a database, my script dumps the database first and then takes a ZFS snapshot. That snapshot is then sent via syncoid (part of the sanoid toolset) to a disk in a separate backup pool.
This is a block-level backup, so it only backs up the actual data blocks that changed.
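A stripped-down sketch of that script (the container, dataset, and pool names are invented for the example; my real one has more error handling):

```
#!/bin/bash
set -euo pipefail

# 1. Dump the database so the snapshot captures a consistent copy
docker exec nextcloud-db pg_dump -U nextcloud nextcloud \
  > /tank/apps/nextcloud/db-dump.sql

# 2. Instant, block-level ZFS snapshot of the dataset
zfs snapshot tank/apps/nextcloud@nightly-$(date +%F)

# 3. Incremental send to the backup pool (only changed blocks move)
syncoid tank/apps/nextcloud backuppool/nextcloud
```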
I don't use PhotoPrism, but I've experienced something similar with other Docker containers. What is most likely happening is that something, like headers or ports, needs to be forwarded by NPM, usually by adding additional config in the "Advanced" tab in NPM.
Sorry, I'm not familiar enough with PhotoPrism to know exactly what needs to be added to the config, but since nobody has replied, I thought this might at least give you a direction to search in.
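If it is the common WebSocket problem, the Advanced-tab config often looks something like this (the upstream IP/port are placeholders, and NPM's "Websockets Support" toggle on the proxy host is worth trying first):

```
location / {
    proxy_pass http://192.168.1.50:2342;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```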