WTF! Over 5000 updates to apply. I run updates daily, so it’s not cumulative.
It wouldn’t be so bad if I had a fast broadband link, but I’m trying to update a couple of laptops over SIM Wi-Fi and it’s taken a day for each. Not to mention I’ve had to run zypper dup from the CLI and watch the output, because the connection is flaky and I need to hit retry every now and again when I lose signal.
OK, so I’m sure it’s necessary, I’d just like to know why.
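For a flaky link like the one described above, one workaround is to wrap zypper in a retry loop so it keeps going unattended. A minimal sketch, assuming a 30-second pause suits your connection; `--download-in-advance` fetches all packages before installing anything, so an interrupted run loses less:

```shell
#!/bin/sh
# retry: re-run a command until it exits successfully, pausing between
# attempts so a dropped mobile connection has time to come back.
retry() {
  until "$@"; do
    echo "command failed, retrying in 30s: $*" >&2
    sleep 30
  done
}

# Example (as root): fetch everything first, then install in one go.
# retry zypper --non-interactive dup --download-in-advance
```

zypper already retries individual downloads, as a later post notes; the loop above only covers the case where zypper itself gives up and exits non-zero.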
Well, it’s a rolling release distro, so not sure why you’re surprised.
Best to stay across the mailing lists…
BTW:
erlangen:~ # journalctl -b -1 -u dup -g 'Start|Consumed|following'
Mar 23 00:00:00 erlangen systemd[1]: Starting Distribution Upgrade...
Mar 23 00:00:00 erlangen systemd[1]: Started Distribution Upgrade.
Mar 23 00:00:50 erlangen zypper[17330]: The following 3354 packages are going to be upgraded:
Mar 23 00:00:50 erlangen zypper[17330]: The following 8 patterns are going to be upgraded:
Mar 23 00:00:50 erlangen zypper[17330]: The following product is going to be upgraded:
Mar 23 00:00:50 erlangen zypper[17330]: The following 7 packages are going to be downgraded:
Mar 23 00:00:50 erlangen zypper[17330]: The following 11 NEW packages are going to be installed:
Mar 23 00:00:50 erlangen zypper[17330]: The following NEW pattern is going to be installed:
Mar 23 00:00:50 erlangen zypper[17330]: The following 2 packages are going to be REMOVED:
Mar 23 00:00:50 erlangen zypper[17330]: The following package requires a system reboot:
Mar 23 03:44:46 erlangen systemd[1]: dup.service: Consumed 7min 18.557s CPU time.
erlangen:~ #
Download was slow due to some 80 curl errors. However, zypper retried and finally succeeded without manual interaction. The upgrade itself was no big deal.
I switched manually to a local mirror in Germany and the download speed was as fast as ever. The main download servers were maybe a little stressed by the large number of Tumbleweed installations and updates…
The same experience on a remote host (output in German):
burgberg:~ # journalctl -b -3 -u dup -g 'Consumed|Start|folgende'
Mar 22 18:22:46 burgberg systemd[1]: Started Dist Upgrade Download.
Mar 22 18:23:42 burgberg zypper[1345]: Die folgenden 2423 Pakete werden aktualisiert:
Mar 22 18:23:42 burgberg zypper[1345]: Die folgenden 7 Schemata werden aktualisiert:
Mar 22 18:23:42 burgberg zypper[1345]: Das folgende Produkt wird aktualisiert:
Mar 22 18:23:42 burgberg zypper[1345]: Das folgende Paket wird durch eine ältere Version ausgetauscht:
Mar 22 18:23:42 burgberg zypper[1345]: Die folgenden 10 NEUEN Pakete werden installiert:
Mar 22 18:23:42 burgberg zypper[1345]: Das folgende NEUE Schema wird installiert:
Mar 22 18:23:42 burgberg zypper[1345]: Die folgenden 5 Pakete werden GELÖSCHT:
Mar 22 18:23:42 burgberg zypper[1345]: Das folgende Paket erfordert einen Systemneustart:
Mar 22 21:02:13 burgberg zypper[1345]: Es werden Programme ausgeführt, die immer noch die durch kürzliche Upgrades gelöschten oder aktualisierten Dateien oder Bibliotheken verwenden. Starten Sie die Programme neu, um die Aktualisierung>
Mar 22 21:02:13 burgberg systemd[1]: dup.service: Consumed 10min 40.220s CPU time.
burgberg:~ #
Host burgberg uses VDSL2 17a G.Vector with 71.27 Mbit/s download speed, while host erlangen uses FTTB with 27.5 Mbit/s. Nominal speed doesn’t matter here.
BTW: The user of burgberg was logged in for only half an hour and pressed power off when done. As zypper dup runs in a systemd service, shutdown was postponed for some two hours until dup.service terminated successfully. The user wasn’t even aware of what had happened.
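For reference, the dup.service shown in the logs above is not a stock openSUSE unit. A minimal sketch of what such a service plus a nightly timer could look like; the unit names, the midnight schedule, and the exact zypper flags are assumptions inferred from the log output, not the poster’s actual configuration:

```ini
# /etc/systemd/system/dup.service
[Unit]
Description=Distribution Upgrade
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Allow arbitrarily long upgrades; without this the default start
# timeout could abort a multi-hour run like the ones logged above.
TimeoutStartSec=infinity
ExecStart=/usr/bin/zypper --non-interactive dup

# /etc/systemd/system/dup.timer
[Unit]
Description=Nightly Distribution Upgrade

[Timer]
OnCalendar=*-*-* 00:00:00

[Install]
WantedBy=timers.target
```

A running oneshot job like this keeps the shutdown transaction queued behind it, which would explain the two-hour postponement described above.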
Yep. If in a hurry you can always try this. On the morning of March 23 I didn’t even notice that host erlangen had been successfully upgraded at midnight, despite dozens of download errors.
@fudokai: host erlangen is configured to keep downloaded packages:
erlangen:~ # du -hd1 /var/cache/zypp
65M /var/cache/zypp/solv
24G /var/cache/zypp/packages
80M /var/cache/zypp/raw
20K /var/cache/zypp/pubkeys
4.0K /var/cache/zypp/geoip.d
25G /var/cache/zypp
erlangen:~ #
You may export /var/cache/zypp and reuse it on other hosts.
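A sketch of that cache reuse, assuming the default cache path and using `laptop` as a placeholder host name; `zypper mr --keep-packages` is what makes zypper retain the downloaded RPMs in the first place:

```shell
#!/bin/sh
# Enable keeping downloaded packages for all repos (run once, as root):
#   zypper mr --all --keep-packages

# sync_cache: copy this host's package cache to another host so it can
# reuse the RPMs instead of re-downloading them over a slow link.
sync_cache() {
  rsync -a /var/cache/zypp/packages/ "$1":/var/cache/zypp/packages/
}

# Usage: sync_cache laptop   # then run 'zypper dup' on the other host
```

This only helps when both hosts track the same snapshot and repos; zypper will still download anything the cache doesn’t cover.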
I know it’s a rolling release, but I can’t remember seeing an update anywhere near this size before.
There were also the same complaints for the update to gcc12…
It’s always surprising that somebody gets surprised by an update of a rolling release distro…
And please take a seat and prepare yourself ahead of time: in some months the complete distribution will be rebuilt with gcc14 and the same number of packages will be updated!
I do not use Tumbleweed but Leap, and thus do not experience these large upgrades. I am always amused to read these threads again and again, because they reinforce my view that, at least for me, Leap is the better choice. Am I right in thinking that for some others it might also be the better choice?
I used “stable” distros for over two decades and finally got annoyed that they always lag behind actual technical development. That’s why I switched my main computer (and two other machines) to Tumbleweed. And there was no surprise or anything else, because if you have been using Linux for such a long time, you understand the background and the reasons why it is the way it is. And then you don’t lament over some 1000 packages that need an update, because for everybody with a basic understanding of programming it is clear that you need to recompile all packages when you switch to a new GCC version.
Users of Leap don’t see this, because these several thousand packages are hidden in the upgrade process when they switch to the next release version. So they don’t notice that there was a GCC update…
And I also don’t understand why some complain about 2000 or 3000 packages to update… starting a new forum thread and crying and lamenting takes more time than the upgrade process itself…
And because that upgrade is planned carefully, it is of course never a surprise that it involves virtually all packages.
@fudokai Aside from a normal-size update due to the gcc change/rebuild, it also put a strain on the mirroring infrastructure (which was having its own issues), hence the connection issues: a cascade effect…
Possibly because their Internet Service Provider (ISP) doesn’t offer enough bandwidth to support the rolling updates…
- Yes, there are locations on this planet where a rolling distribution isn’t feasible – because of network bandwidth limitations …
So the question is: why do admins who know that their infrastructure isn’t capable of supporting a rolling release install it anyway, and later start crying about it…
…it’s over my head…
Let me rephrase:
So the question is: why do users who know that their infrastructure isn’t capable of supporting a rolling release install it anyway, and later start crying about it…
…it’s over my head…
Rolling release is not a synonym for “rebuild the whole distribution every couple of weeks”. This comes as a surprise to every new Tumbleweed user.
Then the user didn’t do basic research on the disadvantages of a rolling release distribution (simply google “rolling release vs fixed release”). It is common to all RR distributions that the daily number of packages to upgrade varies between a handful and several thousand…
I need to quote myself again:
Every couple of weeks? If you are going to exaggerate, why not say every couple of days? Neither is true, but the latter is much scarier and better suits your apparent purpose.
Last year there were several full rebuilds in one month.
And if memory serves me correctly, that was because of an error. Surely you’re not saying that the proper course of action would have been not to correct it?