I see a delay each time a download starts in YaST. That delay could be mitigated by starting several downloads at the same time: they would all start together, so you’d pay that pre-download delay only once instead of once per file. It would make things so much faster, especially for the very small pieces of 40 KB and under.
In fact I want to highlight the smaller downloads here. I think it would be great to make a rule for very small downloads and start 10 or even 40 of them at once; it would be far more time- and bandwidth-efficient!
Download after download, it’s not instantaneous as it goes from one extremely tiny, fast-downloading update to the next. This is much less noticeable on larger 20+ MB updates, but when you have 30 KB updates one after the next, you’ll notice a two or even three second delay before each one begins to download. If you could somehow pool these tiny updates and download them all at the same time, instead of one after the next, there wouldn’t be a small “per update” starting delay.
It doesn’t seem like much, but even a half-second delay before a small update downloads is a LOT. Multiply that by 400 and that’s over three minutes of wasted time that could have been spent downloading larger updates. Think about how much time this would save during a net install.
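The effect of pooling can be sketched with a small simulation. This is not how libzypp actually schedules downloads; it just models the fixed per-download startup delay (scaled down so it runs quickly) and shows how overlapping downloads hide it:

```python
from concurrent.futures import ThreadPoolExecutor
import time

SETUP_DELAY = 0.05  # scaled-down per-download startup delay (assumed);
NUM_UPDATES = 40    # the post's real numbers are 0.5 s x 400 updates

def download(update_id):
    # Stand-in for a real fetch: only the fixed startup cost is modeled.
    time.sleep(SETUP_DELAY)
    return update_id

# Sequential: every update pays the startup delay one after the next.
start = time.monotonic()
for i in range(NUM_UPDATES):
    download(i)
sequential = time.monotonic() - start

# Pooled: ten downloads in flight at once, so the delays overlap.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(download, range(NUM_UPDATES)))
pooled = time.monotonic() - start

print(f"sequential: {sequential:.2f}s, pooled: {pooled:.2f}s")
```

With 400 real updates at a half-second each, the same arithmetic gives 200 seconds sequentially versus a few seconds when 40 run at once.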
While we’re on that topic, why does the net install download and then immediately install every single package, one at a time? That seems like an inefficient use of resources.
I’d noticed these libzypp (zypper and YaST) behaviors a long time ago and thought a little about it then…
The pause you’re noticing for every file downloaded is the natural behavior of an HTTP transfer. Unlike some other protocols, an HTTP session is often only as long as that single file transfer, and every time an additional file is transferred a new session may have to be set up, complete with TCP handshaking and possibly TLS negotiation (HTTP/1.1 keep-alive can reuse a connection, but each request still carries some fixed overhead). It should also be noted that this fixed per-transfer cost matters most for tiny files, typically a few KB up to maybe 2–3 MB in size: the smaller the file, the larger the share of its total transfer time spent on setup rather than on the payload. Although almost all packages are small, there are a few, like kernel packages, which are relatively enormous; those take a long time to download simply because they are large, but for them the per-transfer overhead is a negligible fraction of the total.
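The relationship can be made concrete with a simple cost model. The setup cost and bandwidth figures below are assumptions, not measurements; the point is only the ratio between setup time and payload time:

```python
# Cost model for one HTTP transfer (all numbers are illustrative assumptions):
#   total_time = fixed setup cost (TCP/TLS handshake) + payload / bandwidth
SETUP = 0.5            # seconds per new connection (assumed)
BANDWIDTH = 2_000_000  # bytes per second (assumed ~2 MB/s link)

def transfer_time(size_bytes):
    return SETUP + size_bytes / BANDWIDTH

def overhead_share(size_bytes):
    # Fraction of the total transfer time spent on connection setup.
    return SETUP / transfer_time(size_bytes)

tiny = 30 * 1024         # a 30 KB update
huge = 20 * 1024 * 1024  # a 20 MB update

print(f"30 KB: {overhead_share(tiny):.0%} of time is setup")  # ~97%
print(f"20 MB: {overhead_share(huge):.0%} of time is setup")  # ~5%
```

Under these assumptions the 30 KB update spends almost all of its time on setup, while the 20 MB update amortizes the same setup cost to a rounding error, which matches what the original poster sees with tiny updates.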
So, maybe libzypp (which is the underlying transfer technology for both zypper and YaST) should consider FTP or some other protocol? I haven’t checked recently; is FTP already an option we can specify in the repo URI? This could make a significant difference by eliminating a large number of session handshakes…
Consider, though, that a major reason to favor HTTP transfers is how easily they can be configured around firewalls, proxies and other network obstacles. Any other protocol may require special knowledge and firewall filtering.
As for downloading everything at once before installing versus downloading and installing one package at a time, I’ve noticed a mixture of both behaviors, but I haven’t tried to figure out why they differ.
As for an initial installation, I haven’t investigated fully, but I have theorized (without any further inspection) that a likely reason LEAP installs extraordinarily quickly is that a compressed image is downloaded and then extracted for the install, instead of downloading individual packages. If this is really happening (or is being considered for future releases), it would greatly minimize unnecessary session handshakes, minimize the bytes downloaded, and would of course look like a complete download before installing.