Annoying systemd problem when partitions change format or UUID

Developers seem to just love systemd, the init system that is slowly but steadily taking over from SysV init. As a (selfish) end-user, my feeling about init systems is that, the better they are, the less they get in the end user’s way.

I purposely partitioned my hard drive in such a way that there would be slots where I could install new OS’s, in order to try them out and play with them. This affects distros that use systemd in two ways.

First, if the filesystem no longer matches what’s in /etc/fstab, say, because I just installed a new OS on an existing partition, systemd sits there and prints an error message to VT1 that it can’t find filesystem X on partition Y, seemingly forever. Will it ever stop and allow the end user to fix the problem? Probably, but who knows how long it’ll take. I haven’t had the patience to time it yet.

My installed openSUSE doesn’t have another related problem I’ve encountered with systemd, since it references partitions in /etc/fstab by disk ID rather than by UUID. But I’ve also seen Fedora’s systemd spew endless error messages after an install changed a partition’s UUID, because /etc/fstab still had the old UUID in it.

Is there at least a way to change the amount of time it takes systemd to time out and let you go to a login prompt?

(The difference between systemd and Canonical’s upstart is that upstart fails gracefully in this situation, and systemd does not. Upstart simply prints an error message to the screen, telling you it can’t mount a partition, and allows you to skip mounting and continue booting or log into a console to fix the problem. And it doesn’t take forever to do so.)

It is not quite clear to me what you are experiencing.

You say you have partitions not used on an openSUSE system, say version A, so that you can install another version, say B, on one of them. To me that means that those partitions are thus not in the fstab of system A. And when you then install B, I assume that you tell the installer to leave the partitions belonging to system A alone, neither formatting nor mounting them.

I do not see how these completely separate systems can have problems with changed file systems on partitions that they should not use.

On the other hand, if you have a partition and you change the file system type on it, you of course have to change the entry (e.g. from ext4 to btrfs) in the fstabs of all the systems that have an entry for it.
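As a sketch only (the device name and mount point below are made up), it is the third field of the matching entry that has to be changed in every fstab that still refers to the partition:

# before reformatting: the partition held ext4
/dev/disk/by-id/ata-SAMPLE_DISK-part7  /mnt/data  ext4   defaults  0 2
# after reformatting it as btrfs
/dev/disk/by-id/ata-SAMPLE_DISK-part7  /mnt/data  btrfs  defaults  0 0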

And you also mention changing the UUID of a partition. That normally is not a problem in openSUSE because it uses by-id by default. But when you changed the default for reasons known to you, then there are consequences of course.
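To illustrate the difference (identifiers made up), a by-id entry keeps pointing at the same physical partition after a reinstall, while a UUID entry goes stale as soon as the filesystem is recreated with a new UUID:

# by-id: tied to the disk and partition number, survives a mkfs
/dev/disk/by-id/ata-SAMPLE_DISK-part9       /mnt/test  ext4  defaults  0 2
# by UUID: stale the moment the filesystem (and with it the UUID) is recreated
UUID=1234abcd-0000-4000-8000-000000000000   /mnt/test  ext4  defaults  0 2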

And I fail to see the connection with systemd. But again, maybe I do not understand you at all. In that case, let this post be a hint to explain in a bit more detail. Preferably with excerpts from your fstab (between CODE tags) to illustrate your case.

I don’t see how systemd has anything to do with the reported observation either???

Of course, if you change partitions or UUIDs or labels and do not update
fstab (and grub) accordingly, you get problems on boot. This is not new with
systemd.

On 2014-01-25 18:36, hcvv wrote:

> And I fail to see the connection with systemd. But again, maybe I do not
> understand you at all. In that case, let this post be a hint to explain
> a bit more detailed. Probably with excerpts from your fstab (between
> CODE tags) to illustrate your case.

The problem with systemd is that it often does not fail gracefully when
there is a problem mounting the partitions it thinks it has to mount. System V
was much better in this respect.

Nothing we can do about that, I’m afraid.


Cheers / Saludos,

Carlos E. R.
(from 12.3 x86_64 “Dartmouth” at Telcontar)

I keep all of my partitions mounted in each of the different operating systems I run. I have to edit /etc/fstab in each one when something changes. The problem is getting to a login prompt in the first place.

To be specific (although specificity isn’t really needed), yesterday I installed a Win8.1 demo on /dev/sda9. /dev/sda9 used to have a Linux distro on it that used an ext4 filesystem. Now it has an NTFS filesystem (and a different UUID, although openSUSE uses “by-id”).

I booted into openSUSE (which, to be specific, although specificity isn’t really needed, is on /dev/sda6) to change the fstab entry from ext4 to ntfs-3g. However, instead of completing the boot process, systemd (one of its processes, actually) poured a bunch of error messages onto the terminal saying that it couldn’t find an ext4 filesystem on /dev/sda9. And wouldn’t stop.
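The change I needed to make was just the filesystem type in that one entry; roughly like this (the by-id name is a placeholder and the options are approximate):

# old entry, from when sda9 held a Linux distro
/dev/disk/by-id/ata-SAMPLE_DISK-part9  /mnt/test  ext4     defaults  0 2
# what it needs to be now that sda9 holds Windows
/dev/disk/by-id/ata-SAMPLE_DISK-part9  /mnt/test  ntfs-3g  defaults  0 0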

This is related to systemd because it was only after the change to systemd that this sort of thing started happening. With SysV init, it would have just failed immediately with an error message. With upstart (the init system used by *buntu derivatives), it fails immediately with an error message. Only with systemd as the init system does the error go on, and on, and on, and on…

So, does that make sense?

I hit ctrl+alt+del to reboot into Kubuntu and fixed openSUSE’s fstab from there.

Probably this bug:
https://bugzilla.novell.com/show_bug.cgi?id=832220

If you use the nofail option, the mount should not hang if there is a problem with the partition because of changes made to it from a foreign system.
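As a rough sketch (placeholder device name; add x-systemd.device-timeout only if your systemd version supports it, to shorten the wait for a missing device):

# nofail lets the boot continue even if this mount fails;
# x-systemd.device-timeout caps how long systemd waits for the device
/dev/disk/by-id/ata-SAMPLE_DISK-part9  /mnt/test  ntfs-3g  defaults,nofail,x-systemd.device-timeout=10s  0 0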

Well, looking at the output more carefully (since the problem is easily re-creatable by changing the filesystem for /dev/sda9 in fstab to the wrong one), it seems that the issue is really that systemd is trying to put me into “emergency mode” and failing to do so.

The error message with mounting that it keeps repeating, ad infinitum, is

EXT4-fs (sda9): VFS: Can’t find ext4 filesystem

(because, of course, the filesystem on sda9 is now NTFS, not ext4). (An inspection of the journal tells me, “Failed to mount /mnt/test. Unit mnt-test.mount entered failed state. mount: wrong fs type…”, and so on. “/mnt/test” is the mountpoint for /dev/sda9.)
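(In case it helps anyone reproducing this: the failed unit’s own messages can be pulled out of the journal, assuming your journalctl supports filtering by unit, with something like the following.)

# show only the messages for the mount unit derived from the /mnt/test entry
journalctl -u mnt-test.mount
# or check the unit's current state
systemctl status mnt-test.mount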

I see messages interspersed with those that begin, “Welcome to emergency mode!..” but the system doesn’t actually stop and give me an emergency mode login prompt; it just continues on displaying the error message.

Perhaps systemd is mis-configured.

As I already wrote, it’s this bug: https://bugzilla.novell.com/show_bug.cgi?id=832220

And apparently they fixed it only recently (i.e. yesterday), because they were not able to reproduce it before.
IIUIC, the syslog services (rsyslog.service, syslog-ng.service, and syslogd.service) cause this when active.

I am not being critical here, this is a genuine question:

… But, why would you keep All of the partitions mounted in All of the operating systems at all times? Why not just mount those you need, and temporarily mount one of the other ones when you need it for only the amount of time that you need it?
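For instance (names made up), a noauto entry stays in fstab but is only mounted when you explicitly ask for it:

# noauto: never mounted at boot, only on request
/dev/disk/by-id/ata-SAMPLE_DISK-part9  /mnt/test  ntfs-3g  noauto,nofail  0 0
# then, only for as long as you need it:
mount /mnt/test
umount /mnt/test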

It would seem to me that doing what you are doing is only asking for trouble, including security problems.

-fb

As one could already guess from my earlier post, I am fully with you. I, e.g., have three partitions where I use one for /home and one for /. I use the other one for installing a new version of openSUSE. This allows me to dual boot between them when needed. And I mount the other system’s / somewhere read-only. Thus I have easy access to old configuration files (e.g. in /etc). After some time, when everything is stable, I remove that entry from the fstab. The partition is then free for new usage. In other words, I am a fan of using intelligence and care in mounting partitions. And of course especially when it is about non-Linux file system types.
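As a sketch (device name and filesystem type made up), such a temporary read-only entry could look like:

# the other system's root, mounted read-only for easy access to its old /etc
/dev/disk/by-id/ata-SAMPLE_DISK-part5  /mnt/other-root  ext4  ro  0 0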

Apart from this, there is a bug (I assume it is the same one that bothers some NFS users) and I am glad that it will be repaired in the near future. But IMHO the bug may point to the fact that someone is using practices that are not the best. Always nice to learn something from such a bug. :wink:

On 2014-01-26 08:26, Fraser Bell wrote:
>
> eco2geek;2619289 Wrote:
>> I keep all of my partitions mounted in each of the different operating
>> systems I run.
>>
>
> I am not being critical here, this is a genuine question:
>
> … But, why would you keep All of the partitions mounted in All of the
> operating systems at all times? Why not just mount those you need, and
> temporarily mount one of the other ones when you need it for only the
> amount of time that you need it?
>
> It would seem to me that doing what you are, is only asking for trouble,
> including security problems.

No, no. Having all those partitions mounted doesn’t really affect the
problem. It is a circumstance that people who do testing on multiple
systems notice, but it is not their fault at all.

The thing is, you have several Linuxes installed on the same computer.
All of them have fstab entries for all the partitions (you need access to them).
It doesn’t matter if they are not mounted automatically (because fsck
fails first).

So, if you make a change to the partitioning in one of the systems, the
rest are affected: the fstabs of all of them have to be modified.
You may forget to change some of them correctly.

Previously, when we had System V, it coped gracefully with this
situation: boot aborted and you were dumped to emergency mode.

Systemd fails: it does not reach emergency mode. It loops. This is a
bug. It may be corrected this time, but it is not the first time that
it has happened. I have old bugzillas on it.

Notice that this particular issue can happen fortuitously to anybody,
with a single system installed: you just need fsck to fail during boot.


Cheers / Saludos,

Carlos E. R.
(from 12.3 x86_64 “Dartmouth” at Telcontar)