I don’t know that understanding the bandwidth limitations of a data bus or network is a matter of “opinion”.
It’s not feasible to move 100 GB of data over a 56 kbps link in 30 seconds (to take an extreme example). That’s a physical limitation, as I’m sure you’d agree.
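For anyone who wants the arithmetic, here’s the back-of-the-envelope version in Python (using decimal gigabytes for simplicity):

```python
# Back-of-the-envelope check: 100 GB over a 56 kbps link.
GB = 1e9                                # decimal gigabytes, for simplicity
seconds = 100 * GB * 8 / 56_000         # total bits / bits per second
print(f"{seconds / 86_400:.0f} days")   # ~165 days, nowhere near 30 seconds
```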
Years ago, I was involved in the architecture of a backup system that worked over a microwave link between two buildings. We ran into a bandwidth issue that resulted from a lack of understanding of how the software (IBM’s ADSM) would use bandwidth, and how the multichannel microwave link would allocate additional bandwidth (IIRC, there were 24 1.5 Mbps channels available).
ADSM would determine what the connection speed was between the device being backed up and the server, and would throttle itself to 80% of the bandwidth available.
The microwave link wouldn’t bond another channel until 85% (as I recall) of the bandwidth on the current channel was used.
So the two thresholds deadlocked each other: ADSM never pushed utilization past 80%, which meant the link never hit the 85% mark needed to bond a second channel. As a result, restoring all the data in the event of a catastrophic failure would have taken about six weeks without tweaks to the configuration of the microwave link (IIRC, the ADSM configuration didn’t have an option to use more bandwidth).
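To make that deadlock concrete, here’s a sketch in Python. The usable T1 payload rate and the dataset size are my assumptions for illustration, not figures from the actual deployment:

```python
# Sketch of the threshold deadlock described above.
# Assumptions (mine, not from the deployment): one channel carries
# ~1.536 Mbps of usable payload, and the dataset size is illustrative only.

T1_BPS = 1_536_000          # assumed usable payload of one 1.5 Mbps channel
ADSM_THROTTLE = 0.80        # ADSM caps itself at 80% of detected bandwidth
BOND_THRESHOLD = 0.85       # link bonds another channel only above 85% use

# ADSM's utilization never exceeds 80%, so the 85% bonding threshold
# is never crossed and the link stays stuck at a single channel.
assert ADSM_THROTTLE < BOND_THRESHOLD

def effective_bps(channels: int) -> float:
    """Throughput ADSM actually achieves over the bonded channels."""
    return channels * T1_BPS * ADSM_THROTTLE

def restore_days(data_bytes: float, channels: int = 1) -> float:
    """Days needed to move data_bytes at the throttled rate."""
    return data_bytes * 8 / effective_bps(channels) / 86_400

# Illustrative only: ~500 GB at the stuck single-channel rate.
print(f"{restore_days(500e9):.0f} days")  # ~38 days, on the order of the six weeks cited
```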
No amount of wishful thinking or “different opinions” would have sped up that restore operation without configuration changes, and even with them, we couldn’t get the full restore below several days (which would’ve cost millions of dollars in downtime).
Similarly, installing a Windows 2000 Server domain controller over a 56K MMV VSAT satellite link would have taken multiple days on bandwidth alone, and in practice latency was the primary issue on top of that. Again, not a matter of opinion, but a fact based on the bandwidth and latency limitations of the technology at the time.
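And to show why latency, not just bandwidth, dominates a chatty protocol over satellite, here’s a sketch with assumed numbers: the ~600 ms round trip is typical for a geostationary hop, and the operation count is purely hypothetical:

```python
# Why latency dominates a chatty protocol over a geostationary VSAT link.
# Assumed numbers (mine, not measurements): ~600 ms round-trip time, and a
# purely hypothetical count of serialized request/reply operations.
RTT_S = 0.6                 # geostationary round trip, propagation + processing
SYNC_OPERATIONS = 250_000   # hypothetical count of serialized request/replies

# Each synchronous operation pays a full round trip before the next can start,
# so this time is spent waiting regardless of how much bandwidth is available.
hours = SYNC_OPERATIONS * RTT_S / 3600
print(f"{hours:.0f} hours")  # ~42 hours of pure round-trip waiting
```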
It’s important for the readers here to understand those limitations - and to understand that your specific results may not reflect their experience, because their hardware is different from yours. Setting the expectation that your experience is valid for everyone sets others up for failure.