Tumbleweed only starts in emergency mode.

My Tumbleweed installation only starts up in emergency mode after the splash screen?

Hi

Welcome to the forum here!

You are not giving much information.
I would compare it to somebody writing “I can’t walk anymore” after breaking his leg :wink:

If you search the internet for “emergency mode”, you will find that it is quite often linked to failing hard disks or unexpected power cuts.

You may need to reinstall Tumbleweed, but if your hard disk (or SSD) is failing, you should replace it first.

So, please give a bit more information.

(1) What hardware do you have?

(2) Are there other operating systems on your PC / Laptop?

(3) How did you install - from DVD or USB stick? And did you verify the checksum of the Tumbleweed installer image that you used? (See the example below this list.)

(4) Are you using an LVM or encryption, or not?

etc.
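In case you have not done the checksum verification: a minimal sketch, assuming the usual DVD image name (adjust the filename to whatever you downloaded, and compare against the .sha256 file published next to the image on download.opensuse.org):

# print the hash of the downloaded image (filename is only an example)
sha256sum openSUSE-Tumbleweed-DVD-x86_64-Current.iso
# compare it with the contents of the matching .sha256 file
cat openSUSE-Tumbleweed-DVD-x86_64-Current.iso.sha256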

I’m having this issue on a long-running, stable Tumbleweed installation after upgrading today via zypper dup.

From journalctl in emergency mode:

systemd: dev-system-home.device: Job dev-system-home.device.start timed out

The end of the log has a “Dependency failed for home” message just before the emergency console appears (likely due to the above).
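For reference, the excerpts above come from the standard systemd tools that are available in the emergency shell (generic commands, nothing specific to this problem):

# full journal for the current boot, with explanatory text where available
journalctl -xb
# status of the device unit that timed out
systemctl status dev-system-home.device
# jobs that are still queued or waiting
systemctl list-jobs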

Earlier there is another error; I don’t know if it’s related to the failure to mount home, but I’m including it in case it’s not a red herring.

systemd-udev: Specified group 'plugdev' unknown

I can’t use snapper to roll back, because I get the following error from the emergency console:

snapper list
Failure (dbus fatal exception)

Any suggestions or advice to get the system back up and running are appreciated. Thanks.

Sorry, no, it is a long-running TW install with a new SSD; both the SSD and the installation are about 9 months old.
I can’t find any errors.
I did find in another thread that the graphics card might have something to do with it; I have a Radeon X850.

I don’t think it’s graphics-card related. It’s today’s snapshot. I rolled back to the previous snapshot (the one prior to today’s) and don’t have the issue. I upgraded again and got the same emergency boot issue.

1 - I have both LVM and encryption.

There are LVM upgrades in today’s snapshot.

Looking at the logs, the difference between the successfully booting system and the emergency system occurs right after “Subject: A start job for unit apparmor.service has finished successfully” (both the non-booting and the booting system apply apparmor.service successfully). Immediately after apparmor.service they diverge:

– for the rolled-back working system, without today’s LVM changes, I have:
localhost lvm[1029]: 3 logical volume(s) in volume group “system” now active

– for today’s snapshot with the LVM changes, I have:
systemd[1]: dev-system-home.device: Job dev-system-home.device/start timed out.

The LVM volume group never becomes active on the non-booting system.

It’s good that you managed to roll back to the previous snapshot!

Could you report back when a later update works again and the problem no longer occurs?

Probably a language problem: I understood it to mean that you were talking about a fresh or new installation of Tumbleweed, until you posted otherwise later.

Has the (temporary) solution found by puffy been of any help to you?

Ideally, having the grub2-snapper-plugin (snapshot entries on the boot screen) means you could roll back from the boot menu and be on your way. I did not have that and was stuck on the dbus failure in emergency mode.
Someone kindly pointed me to the --no-dbus option. You can issue that from emergency mode.

Get a list of snapshots and note the number of the snapshot prior to the problem:

snapper --no-dbus list

Roll back so you can boot normally:

snapper --no-dbus rollback <the number of the last stable snapshot>
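Putting it together (the snapshot number 42 below is only a placeholder; use the number of your last good snapshot), followed by a reboot so the rolled-back snapshot is actually used:

snapper --no-dbus list
snapper --no-dbus rollback 42
reboot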

I want to point out that the udev “Specified group 'plugdev' unknown” error also occurs in the working boot, so it is unlikely to be part of the problem with the LVM group not becoming active and systemd timing out waiting for home.

I will try later snapshots and report back when I’m able to boot the latest snapshot.

I’m having the same problem after every subsequent snapshot: LVM failing to become active and systemd timing out mounting home (resulting in the emergency prompt).

I locked the following packages:

liblvm2app2_2 liblvm2cmd2_02 lvm2-clvm lvm2-cmirrord lvm2
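For anyone wanting to do the same, this is roughly how the locks can be set with zypper and removed again once a fixed lvm2 arrives (adjust the package list to whatever zypper wants to upgrade on your system):

# hold the lvm2 packages at their current versions
zypper addlock liblvm2app2_2 liblvm2cmd2_02 lvm2-clvm lvm2-cmirrord lvm2
zypper dup
# later, when a fixed lvm2 is released
zypper removelock liblvm2app2_2 liblvm2cmd2_02 lvm2-clvm lvm2-cmirrord lvm2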

After locking the above packages, I was able to boot the latest snapshot (2019-09-30) normally; without locking those packages, upgrading still breaks the boot.
I haven’t seen any other posts about LVM problems after those package upgrades, and there are no LVM problems with the older packages.

I’m using an encrypted LVM. But I am not having problems.

I did update today on two systems. And they both booted normally after the update.

There is an open bug report for LVM problems with Tumbleweed, but it seems to be associated with an LVM that spans multiple disks. I don’t have that. I’m just using a single partition, which is encrypted and then used for an LVM containing home, root, and swap volumes.
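For comparison, that layout is easy to check with the standard block-device and LVM tools (the exact output will of course differ per system):

# partitions, the crypt mapper device, and the logical volumes stacked on it
lsblk
# the single physical volume, the volume group, and the home/root/swap volumes
pvs
vgs
lvs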

I’m glad few others are experiencing the issue, and I wish I could provide more information about why the LVM isn’t becoming active; it simply never becomes active after the monitoring phase, and systemd times out after 90 seconds. If there is an open Bugzilla ticket on the most recent packages, would you kindly link it? I couldn’t find it.
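If anyone wants to poke at this from the emergency shell, these are the generic LVM commands I would try to see whether the volume group can be activated by hand (just a diagnostic sketch; “system” is the volume group name from the logs above):

# try to activate the volume group manually
vgchange -ay system
# check whether the physical volume, volume group and logical volumes now show up
pvscan
vgs
lvs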

Snapper really takes the sting out, providing a safety net for those like me who have a tendency to fall flat or crash and burn.

This is the one that I noticed. I don’t know whether it is the same problem that you are having:

Bug 1152378 - lvm volume fails to activate after upgrade to lvm

nrickert, I appreciate you linking that bug report. I too have a message in the journalctl output about a part size mismatch in the pvscan; it corresponds to the home partition that isn’t activated. The package affected is the one that causes the emergency boot when upgraded. The reporter’s issue looks identical to the one I saw. It looks like it has already been patched too. I’ll watch that ticket and upgrade to the patched package when it becomes generally available.

Thanks for supporting the community; I’ve seen several informative posts of yours going back years.

with high regards.

Yes, that does sound similar to what I saw in the bug report.

I’ll note that I just check the opensuse-bugs mailing list archive every day and look at reports that might affect me. Since I’m using an encrypted LVM, I did look at that one, which is why I knew that there was an open bug report.

Snapshot 2019-10-11 contained fixes to the lvm2 package and libraries that fixed my emergency boot issue. I unlocked the packages listed earlier in the thread; zypper dup then updated and removed some of the previously locked packages. The system boots normally.

The ticket nrickert kindly linked still shows as “new”; however, the fixes were rolled into 2019-10-11.