I’ve run zypper dup and all seemed to be fine until btrfsmaintenance 0.4 was being installed; it got stuck at 100% and has been sitting there for over 20 minutes now. I found a similar situation in a thread from March, but couldn’t find a solution there. There doesn’t seem to be much HDD activity. I am using the default repositories plus science, non-oss, and Packman. Has anyone had this problem? Any clue on what to do? Thanks in advance!
Current version is 0.4.1 with 0.4 available since January. When did you update? What was the previous version?
Any clue on what to do?
Well, you were in the best position to offer clues by examining what was running and which services were active at the time. Probably the installation coincided with a timer expiration that started one of the btrfsmaintenance services. You can examine the logs to check which services were started at that time.
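For example, a journalctl query bounded to the time window of the hang would show which units started then (the timestamps below are placeholders; substitute the actual window):

```shell
# Show journal entries for the btrfsmaintenance units in the window
# around the hang (replace the timestamps with your own).
journalctl --since "2018-10-20 00:30" --until "2018-10-20 01:00" -u 'btrfs*'

# List the btrfsmaintenance timers, when they last fired,
# and when they will fire next.
systemctl list-timers 'btrfs*'
```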
Thank you arvidjaar for your kind reply. You are correct: it was btrfsmaintenance 0.4.1 (I think I typed 0.4 because I was reading about a similar problem another user had with 0.4). Apparently there was a problem removing 0.4.1-1 and installing 0.4.1-2. After 8 hours (haha) I stopped zypper and tried again. This time everything went smoothly.
Thanks again.
Probably systemd hanging again. I have found that when “init 3” hangs on btrfs commands, it is actually systemd that is hanging.
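One way to check whether systemd itself is wedged (rather than the package scriptlet) is to look at its queued jobs; a hung restart of the btrfsmaintenance units would show up there:

```shell
# Jobs that systemd has queued but not completed;
# a stuck try-restart would appear in this list.
systemctl list-jobs

# Current state of the unit the %post scriptlet tries to restart.
systemctl status btrfsmaintenance-refresh.service
```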
I am in this situation right now on my home computer.
Had 4000+ packages to update, ran zypper dup, and now I’m stuck at applying package no. 2772 (btrfsmaintenance-0.4.2-2.2.noarch) at 100%.
It has been sitting there all night. I have 7 GB free in the root partition.
I see this (if it can be of any help):
polarisuse cris ~ ps -ef|grep -i zypper
root 22148 2846 0 00:34 pts/0 00:00:00 sudo zypper dup
root 22151 22148 0 00:34 pts/0 00:04:24 zypper dup
cris 32196 20570 0 09:25 pts/2 00:00:00 grep --color=auto -i zypper
polarisuse cris ~ ps -ef|grep 22151
root 18503 22151 0 01:13 pts/0 00:00:00 rpm --root / --dbpath /var/lib/rpm -U --percent --noglob --force --nodeps -- /var/cache/zypp/packages/repo-oss/noarch/btrfsmaintenance-0.4.2-2.2.noarch.rpm
root 22151 22148 0 00:34 pts/0 00:04:24 zypper dup
root 22213 22151 0 00:34 pts/0 00:00:00 /usr/bin/systemd-inhibit --what=sleep:shutdown:idle --who=zypp --mode=block --why=Zypp commit running. /usr/bin/cat
root 22215 22151 0 00:34 pts/0 00:00:00 /bin/bash /usr/lib/zypp/plugins/commit/btrfs-defrag-plugin.sh
root 22221 22151 0 00:34 pts/0 00:00:05 /usr/bin/python3 /usr/lib/zypp/plugins/commit/snapper.py
cris 32208 20570 0 09:25 pts/2 00:00:00 grep --color=auto 22151
Can someone help me get out of this? Or at least investigate the cause?
I could interrupt the update and re-run ‘zypper dup’, but that way I would risk losing all the post-installation scripts that have been postponed, wouldn’t I?
The other option would be interrupting the update, rolling back to the previous snapshot, and then re-applying the update.
Which one would you advise?
Thank you in advance
Cris
I encountered this and did the following to resolve it:
- Roll back to last snapshot before the update.
- Use yast software management and block btrfsmaintenance from updating.
- Do the update.
- Reboot.
- Use yast software management and allow btrfsmaintenance to update.
- Do an update.
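If you prefer the command line, roughly the same sequence can be done with snapper and zypper’s package locks; this is a sketch, and the snapshot number is a placeholder you would take from `snapper list`:

```shell
# Find the last pre-update snapshot, roll back to it, and reboot.
sudo snapper list
sudo snapper rollback <number>    # <number> from the list above; then reboot

# Lock btrfsmaintenance so the distribution upgrade skips it.
sudo zypper addlock btrfsmaintenance
sudo zypper dup

# After rebooting, remove the lock and update it on its own.
sudo zypper removelock btrfsmaintenance
sudo zypper up btrfsmaintenance
```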
Thank you doscott!
I will do as you describe and I’ll let you know about my experience.
Cris
Yes, I ran into that; I think many people did. It has been reported as Bug 1110259, and it has been around for a while.
Yes, I saw the same thing. From that output, “zypper” is running on terminal “pts/0”. So I then used:
ps -ft pts/0
to see what else was running there. I found a process running a script; I killed it (with “kill -KILL”), and the update resumed.
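Sketched out, that procedure looks like this (the PID is a placeholder; the target is the rpm-tmp scriptlet shell, not zypper or rpm themselves):

```shell
# List everything on the terminal zypper is using.
ps -ft pts/0

# Kill the hung %post scriptlet -- the /bin/sh /var/tmp/rpm-tmp.* process --
# so that rpm and zypper can continue. <PID> is taken from the ps output.
kill -KILL <PID>
```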
I have added “btrfsmaintenance” to a small list of packages that I update separately. So, before running “zypper dup”, I now run:
zypper up btrfsmaintenance
The idea is that it is easier to deal with a problem in an isolated package update than in the middle of a large system update.
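In script form, that routine is just two commands:

```shell
# Update the troublesome package on its own first...
sudo zypper up btrfsmaintenance
# ...then run the full distribution upgrade.
sudo zypper dup
```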
I see this:
polarisuse cris ~ ps -ft pts/0
UID PID PPID C STIME TTY TIME CMD
cris 2846 2596 0 giu04 pts/0 00:00:00 /bin/bash
root 18503 22151 0 01:13 pts/0 00:00:00 rpm --root / --dbpath /var/lib/rpm -U --percent --noglob --force --nodeps -- /var/cache/zypp/packages/repo-oss/noarch/btrfsmaintenance-0.4.2-
root 18544 18503 0 01:13 pts/0 00:00:00 /bin/sh /var/tmp/rpm-tmp.vC6bEA 1
root 18587 18544 0 01:13 pts/0 00:00:00 /usr/bin/systemctl try-restart btrfsmaintenance-refresh.service btrfsmaintenance-refresh.path btrfs-balance.service btrfs-balance.timer btrfs
root 22148 2846 0 00:34 pts/0 00:00:00 sudo zypper dup
root 22151 22148 0 00:34 pts/0 00:04:25 zypper dup
root 22213 22151 0 00:34 pts/0 00:00:00 /usr/bin/systemd-inhibit --what=sleep:shutdown:idle --who=zypp --mode=block --why=Zypp commit running. /usr/bin/cat
root 22214 22213 0 00:34 pts/0 00:00:00 /usr/bin/cat
root 22215 22151 0 00:34 pts/0 00:00:00 /bin/bash /usr/lib/zypp/plugins/commit/btrfs-defrag-plugin.sh
root 22221 22151 0 00:34 pts/0 00:00:05 /usr/bin/python3 /usr/lib/zypp/plugins/commit/snapper.py
So I killed PID 18544 and the update resumed (I saved the script to examine it later).
Thank you!!
Cris
Yes, that’s the process that I would have killed. The chances are that everything is now in good shape on your system.