I ordered the AWS Snowcone service to upload 10TB of data to S3, then downloaded it directly from S3 at my company's other site; it cost about 200 bucks. We use borg to back up the data and compress it at a high level before copying to the Snowcone, so the original data is more like 30TB.
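For reference, a minimal sketch of the borg side, assuming the Snowcone share is mounted at /mnt/snowcone (the paths and archive name are made-up examples):

```
# Create a repo on the Snowcone-mounted share, then back up with high zstd compression.
borg init --encryption=repokey /mnt/snowcone/backup
borg create --compression zstd,19 --stats --progress \
    /mnt/snowcone/backup::data-{now:%Y-%m-%d} /srv/data
```

zstd at level 19 trades CPU time for ratio; borg also accepts lzma,9 if you want to squeeze further.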
Your router is exposed to the public internet as long as it has a public IP address; a domain is just an alias for the IP that's easy to remember. Setting a strong policy on the router protects your local network in most scenarios.
What I did for self-hosting is:
- Purchase a domain and add a record pointing to my router's public IP.
- Expose ports for non-sensitive or authentication-capable applications on the home server. Those apps can be accessed from anywhere using the public domain directly in a browser.
- Deploy an OpenVPN server on the home server, generate SSL certificates, and install the OpenVPN client and import the certificates on my devices. Then set a series of policies on the router so that packets from OpenVPN's subnet can be routed to the home server on certain ports (see the sketch after this list). Whenever I need to access a sensitive app, or an app without a login portal, from the public internet, I just start the OpenVPN client first.
- Make sure some critical apps can only be accessed from the local network, not even from OpenVPN's subnet, such as the router's portal.
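For the OpenVPN step, here's a minimal sketch; the 10.8.0.0/24 subnet, the home server at 192.168.1.10, and port 8080 are made-up examples, and the firewall part assumes a Linux/iptables-based router:

```
# Minimal OpenVPN server config on the home server (fragment, made-up paths)
cat > /etc/openvpn/server.conf <<'EOF'
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0
ca ca.crt
cert server.crt
key server.key
dh dh.pem
EOF

# On the router: forward the VPN port to the home server, and only allow
# the VPN subnet to reach the sensitive app's port.
iptables -t nat -A PREROUTING -p udp --dport 1194 -j DNAT --to-destination 192.168.1.10
iptables -A FORWARD -s 10.8.0.0/24 -d 192.168.1.10 -p tcp --dport 8080 -j ACCEPT
```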
If you don't want to bother tweaking the router's config, you could also use Cloudflare's tunnel and manually add a subdomain record for each service, if there aren't many.
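If you go the tunnel route, the flow is roughly this (the tunnel name and hostname are placeholders):

```
# One-time setup: authenticate, create a tunnel, point a subdomain at it.
cloudflared tunnel login
cloudflared tunnel create home
cloudflared tunnel route dns home jellyfin.example.com

# ~/.cloudflared/config.yml maps hostnames to local services;
# the credentials file is actually named after the tunnel's UUID.
cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: home
credentials-file: /root/.cloudflared/home.json
ingress:
  - hostname: jellyfin.example.com
    service: http://localhost:8096
  - service: http_status:404
EOF

cloudflared tunnel run home
```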
Besides, I use Caddy to automatically renew Let's Encrypt certificates. By default this requires port 80 of your network to be reachable and not blocked by your ISP. Alternatively, you can use DNS verification in the Let's Encrypt config and just provide your domain provider's API credentials to it.
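For reference, a Caddyfile sketch of the DNS challenge; this assumes a Caddy build that includes the caddy-dns/cloudflare plugin, and the hostname/token are placeholders:

```
# Write a minimal Caddyfile, then run Caddy with the token in the environment.
cat > /etc/caddy/Caddyfile <<'EOF'
app.example.com {
    reverse_proxy localhost:8096
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
}
EOF
CF_API_TOKEN=your-token caddy run --config /etc/caddy/Caddyfile
```

With the DNS-01 challenge Caddy proves domain ownership through your DNS provider's API, so port 80 doesn't need to be reachable for issuance.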
OK, the unit in the ethtool line was a typo; I've corrected it. But isn't it common to use MB/s for megabytes per second and Mb/s for megabits per second?
Hi guys,
I've been suffering from this problem for a few weeks and have no idea what to do now...
First, here is my NAS info:
- System: OMV5
- CPU: AMD Ryzen 3 5300GE
- Motherboard: Asus b450m PRO GAMING (2.5G)
- Drive: 4 x 2TB SSD
- Memory: 64GB
The thing is, I use a 1G router in my home network and the NAS is connected to it by cable. This works great; I can watch any movie at up to 80Mb/s on my phone over SMB or SFTP.
Weeks ago I ran a script to update all the docker containers on the NAS (qbittorrent, jellyfin, sonarr, etc.) and updated the system packages. Days later I found playback got stuck when I opened a Blu-ray movie on my phone. I checked the network status (MX Player on the phone) and it showed only 500KB/s. Rebooting the NAS solved the problem, but if I drag the progress bar too quickly or too many times, it reproduces and can't recover until a reboot... I also found it happens after downloading movies with qbittorrent for a few hours.
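The update script is roughly equivalent to this (a simplified reconstruction, not my exact script; /opt/compose is a placeholder):

```
# Pull newer images and recreate the compose stack, then update system packages.
cd /opt/compose && docker compose pull && docker compose up -d
apt update && apt upgrade -y
```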
So I tried (commands sketched after this list):
- Looking into the system log and other logs under /var/log/, but nothing related is reported.
- Using different file-sharing protocols: SMB, FTP, SFTP, FTPS. When the problem happens, they all transfer at around 500KB/s.
- Checking the output of ethtool, which shows the link is negotiated at 1000Mb/s.
- Using iperf3: from the NAS the link reaches up to 958Mb/s both tx and rx, but only about 7Mb/s when the problem reproduces.
- Copying a large file from drive A to drive B and back; the average speed is about 200MB/s.
- Using netstat to check for packet loss; it shows 0% loss.
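For reference, the kind of commands behind those checks (eth0 stands in for the real interface name):

```
# Negotiated link speed
ethtool eth0 | grep Speed

# Throughput test: run the server on the NAS, then test from a client
iperf3 -s                       # on the NAS
iperf3 -c nas.local -t 30       # from a client (add -R for the reverse direction)

# Per-interface error/drop counters
ip -s link show eth0
netstat -i
```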
I'm currently using a stupid workaround: a script that reboots the NAS at midnight... But it interrupts a lot of my services' tasks.
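The workaround is just a root cron entry like this (the time is arbitrary):

```
# crontab -e (root): reboot the NAS every night at 00:00
0 0 * * * /sbin/shutdown -r now
```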
So, is there any tool or way I can try to solve it? Thanks!!!!