So I am trying to track down what is possibly slowing down my download connection from my Debian server to my devices (streaming box, laptop, other servers, etc).
First let me go over my network infrastructure: OPNsense Firewall (Intel C3558R) <-10gb SFP+ DAC-> Managed Switch <-2.5gb RJ45-> Clients, 2.5gb AX Access Point, and Debian Server (Intel N100).
Under a 5 minute stress test between my laptop (2.5gb adapter plugged into the switch) and the Debian server (2.5gb Intel I226-V NIC), I get the full bandwidth when uploading, but when downloading it tops out around 300-400mbps. The download speed does not fare any better when connecting through the AX access point, with upload dropping to around 500mbps there. File transfers between the server and my laptop are also approximately 300mbps. And yes, I manually disabled the wifi card when testing over ethernet. Speed tests to outside servers show approximately 800/20mbps (on an 800mbps plan).
Fearing that the traffic may be running through OPNsense and that my firewall was struggling to handle the traffic, I disconnected the DAC cable and reran the test just through the switch. No change in results.
Using iperf3 results in 2.5gb of bandwidth. Storage should not be a bottleneck: the server only has NVMe storage and the laptop SSD is integrated into the SoC, both far exceeding the network speeds. Traceroute indicated just a single hop to the server.
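For anyone wanting to reproduce this, the slow direction can be tested explicitly with iperf3's reverse flag; a sketch, assuming the server's IP is 192.168.1.10 (substitute your own):

```
# On the Debian server: run the iperf3 daemon
iperf3 -s

# On the laptop: test upload (laptop -> server)
iperf3 -c 192.168.1.10 -t 60

# Test download (server -> laptop) with the reverse flag,
# which matches the direction that is actually slow here
iperf3 -c 192.168.1.10 -R -t 60
```

Comparing the two runs isolates whether the asymmetry exists at the raw TCP level or only shows up in file transfers.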
Ah, right, read too fast it seems! Though that still leaves the possibility of software firewalls, but any OOTB ones wouldn't be doing any packet inspection.
Just attempted that; the odd thing was that both directions evened out on the reverse test at ~800Mbps, so higher than the download test before and lower than the upload. Conducted iperf3 tests and those still show the 2.5gb bandwidth, so I retried file sharing. Samba refused to work for whatever reason on Debian, so I did an SCP transfer instead; after a few tests with a 6.3GB video file, I averaged around 500mbps (highs of around 800mbps and lows of around 270mbps).
SCP encrypts your traffic before sending it, so it might be a CPU bottleneck. You can try a different cipher or compression level, which can be set in your ~/.ssh/config file.
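For example, you can pin a faster cipher per-host in ~/.ssh/config; the host alias and IP here are made up, and aes128-gcm@openssh.com is just one commonly fast choice (list what your build supports with `ssh -Q cipher`):

```
# ~/.ssh/config (example host alias)
Host myserver
    HostName 192.168.1.10
    # AES-GCM is usually hardware-accelerated on x86 and tends to beat
    # the default chacha20-poly1305 on CPUs with AES-NI
    Ciphers aes128-gcm@openssh.com,aes256-gcm@openssh.com
    # Compression just burns CPU on an already-fast LAN link; keep it off
    Compression no
```

You can also test a cipher one-off without editing the config: `scp -c aes128-gcm@openssh.com bigfile.mkv myserver:~/`.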
Try switching to BBR for congestion control, and adjust the TCP buffer sizes. The defaults are fine for Gigabit but not really for higher speeds. Not near my computer right now so I can't grab a copy of my sysctl settings, but searching Google for "Linux TCP buffer size tuning" and "Linux enable bbr" should turn up some useful info.
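From memory, the drop-in file looks roughly like this; treat the exact values as a starting point, not gospel:

```
# /etc/sysctl.d/99-network-tuning.conf -- example values, adjust to taste
# Raise the maximum socket buffer sizes (16 MB here)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# min / default / max TCP buffer sizes in bytes
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 131072 16777216
# BBR congestion control; fq is the qdisc usually paired with it
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

Apply with `sudo sysctl --system` and verify with `sysctl net.ipv4.tcp_congestion_control`.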
If the devices run at different speeds (e.g. one system is 2.5Gbps but another is 1Gbps), try enabling flow control on the switch, if it's a managed one.
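You can check and toggle pause-frame support on the Linux side too; the interface name `enp1s0` below is just an example, yours will differ (see `ip link`):

```
# Show current flow control (pause frame) settings for the NIC
ethtool -a enp1s0

# Enable RX/TX pause frames, assuming both the NIC and the switch port support them
sudo ethtool -A enp1s0 rx on tx on
```

Both ends of the link have to agree for flow control to actually do anything, so enable it on the switch port as well.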
I mean, compared to what it should be, it is. Especially when I paid for 2.5gb infrastructure.
And it also affects how fast I can pull files from my server. Trying to get some shows downloaded to my laptop before a business trip? Guess I'd better plan for an hour or two of copying over LAN. Pulling a backup OS image for my devices? Going to be waiting a while.