Boot time more than 2 minutes...


I have a Dell machine with 12× Intel Xeon X5670 cores @ 2.93 GHz and 24 GB RAM, running TW 64-bit KDE for about 1 year (currently updated to 20171206).

IPv6 is disabled in NetworkManager (set to “Ignored” on the respective tab of the widget) as well as in the YaST “Network Settings” (“Enable IPv6” not checked).

Booting takes more than 2 minutes and dmesg shows afterwards:

[   25.820150] floppy0: no floppy controllers found
[   25.820162] work still pending
[  112.613145] ip6_tables: (C) 2000-2006 Netfilter Core Team
[  112.711687] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
[  112.721935] ip_tables: (C) 2000-2006 Netfilter Core Team
[  113.531287] Netfilter messages via NETLINK v0.30.
[  113.535704] IPv6: ADDRCONF(NETDEV_UP): enp6s0: link is not ready
[  113.603392] IPv6: ADDRCONF(NETDEV_UP): enp6s0: link is not ready
[  115.197420] tg3 0000:06:00.0 enp6s0: Link is up at 100 Mbps, full duplex
[  115.197425] tg3 0000:06:00.0 enp6s0: Flow control is off for TX and off for RX
[  115.197442] IPv6: ADDRCONF(NETDEV_CHANGE): enp6s0: link becomes ready
[  115.242046] NET: Registered protocol family 17
[  138.748303] nf_conntrack: default automatic helper assignment has been turned off for security reasons and CT-based firewall rule not found. Use the iptables CT target to attach helpers instead.

and I get:

systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character. @1min 35.352s
└─ @1min 35.352s
  └─cron.service @1min 32.581s
    └─postfix.service @1min 31.469s +1.110s
      └─ @1min 31.468s
        └─ntpd.service @1min 31.411s +56ms
          └─ @1min 31.410s
            └─NetworkManager.service @1min 31.314s +95ms
              └─SuSEfirewall2_init.service @1min 30.312s +1.000s
                └─ @1min 30.309s
                  └─ @1min 30.309s
                    └─dev-disk-by\x2duuid-f67e0a50\x2d67a3\x2d4b8a\x2db85e\x2da7098ce09e71.swap @714ms +9ms
                      └─dev-disk-by\x2duuid-f67e0a50\x2d67a3\x2d4b8a\x2db85e\x2da7098ce09e71.device @710ms

but also:

 systemd-analyze blame | cat
          5.035s purge-kernels.service
          2.198s logrotate.service
          1.110s postfix.service
          1.000s SuSEfirewall2_init.service
           645ms apparmor.service
           624ms display-manager.service
           594ms initrd-switch-root.service
           535ms dev-sda2.device
           409ms SuSEfirewall2.service
           391ms systemd-journal-flush.service
           330ms ModemManager.service
           316ms systemd-fsck@dev-disk-by\x2duuid-d7ec36cf\x2d9f10\x2d4109\x2dbfa7\x2d8fbfdd52e11e.service
           293ms systemd-hwdb-update.service
           284ms upower.service
           256ms alsa-restore.service
           256ms rc-local.service
           254ms issue-generator.service
           253ms mcelog.service
           252ms nscd.service
           230ms systemd-fsck@dev-disk-by\x2duuid-44d225ce\x2d52bb\x2d44b6\x2db3be\x2d0de97d8c7a72.service
           227ms systemd-udevd.service
           118ms udisks2.service
           117ms systemd-vconsole-setup.service
            95ms NetworkManager.service
            86ms home-usser-Data2raid.mount
            79ms home-usser-Data1raid.mount
            73ms systemd-tmpfiles-setup-dev.service
            71ms systemd-udev-trigger.service
            56ms ntpd.service
            55ms initrd-parse-etc.service
            50ms polkit.service
            50ms systemd-sysusers.service
            44ms dracut-cmdline.service
            37ms systemd-fsck@dev-disk-by\x2duuid-a3b94f48\x2d654b\x2d4081\x2d9e74\x2d808173c54162.service
            36ms systemd-tmpfiles-clean.service
            34ms iscsi.service
            33ms user@1000.service
            25ms systemd-fsck-root.service
            24ms auditd.service
            19ms systemd-logind.service
            18ms systemd-sysctl.service
            18ms home.mount
            17ms systemd-remount-fs.service
            15ms plymouth-switch-root.service
            14ms plymouth-read-write.service
            14ms systemd-journald.service
            13ms plymouth-start.service
            13ms systemd-tmpfiles-setup.service
            11ms sys-kernel-debug.mount
            10ms dracut-pre-trigger.service
            10ms dev-hugepages.mount
            10ms kmod-static-nodes.service
             9ms sysroot.mount
             9ms dev-disk-by\x2duuid-f67e0a50\x2d67a3\x2d4b8a\x2db85e\x2da7098ce09e71.swap
             9ms systemd-journal-catalog-update.service
             8ms initrd-cleanup.service
             6ms dev-mqueue.mount
             6ms systemd-modules-load.service
             6ms mdmonitor.service
             5ms rtkit-daemon.service
             4ms systemd-update-utmp.service
             4ms systemd-update-utmp-runlevel.service
             4ms systemd-random-seed.service
             3ms systemd-update-done.service
             3ms systemd-user-sessions.service
             3ms sys-fs-fuse-connections.mount
             2ms initrd-udevadm-cleanup-db.service
             2ms dracut-shutdown.service

Does anybody have an idea why the machine sits there and does nothing from 25 s to 112 s after starting?

Many thanks in advance…

PS: The OS is installed on an Intel SSD

Model Family: Intel 53x and Pro 2500 Series SSDs
Device Model: INTEL SSDSC2BW120H6

with 4388 h powered on and 179 boot cycles (no errors reported in SMART)…

Did you have a look at the console messages during boot?

It waits 90 seconds for this disk to appear (and I guess the disk is not present). Check your /etc/fstab and the kernel command line.
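One quick way to find such entries (a sketch; `blkid` may need root to see every device) is to cross-check the UUIDs referenced in fstab against the ones actually present:

```shell
# List every UUID referenced in /etc/fstab that no attached block device
# actually provides; each such entry stalls boot until systemd's default
# 90 s device timeout expires.
grep -o 'UUID=[0-9a-f-]*' /etc/fstab | cut -d= -f2 | sort -u > fstab.uuids
blkid -s UUID -o value | sort -u > present.uuids
comm -23 fstab.uuids present.uuids   # UUIDs in fstab with no matching device
```

Any UUID this prints belongs to an fstab entry whose device never shows up, which matches the 90-second gap in the dmesg output above.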

…the only thing on the command line is:

kvm: disabled by bios

In fstab I find:

UUID=f67e0a50-67a3-4b8a-b85e-a7098ce09e71 swap                 swap       defaults              0 0
UUID=74c7ffa8-ff7f-48a5-b10c-f06084e301bd /                    ext4       acl,user_xattr        1 1
UUID=a3b94f48-654b-4081-9e74-808173c54162 /home                ext4       acl,user_xattr        1 2

UUID=44d225ce-52bb-44b6-b3be-0de97d8c7a72 /home/usser/Data2raid/ ext4       user,acl              1 2
UUID=d7ec36cf-9f10-4109-bfa7-8fbfdd52e11e /home/usser/Data1raid/ ext4       user,acl              1 2
UUID=2b147284-6b3a-4eea-ae98-c9df8b1a49e0 swap                 swap       nofail                0 0

…but I have not messed around with that; the Dataraids were added via YaST.

Why is swap listed twice? Which one should I delete? I don’t need any swap with 24 GB RAM, I guess?
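Whether swap is even being touched can be checked before deciding; a quick look at /proc/meminfo:

```shell
# If SwapFree stays equal to SwapTotal under normal load, the machine
# never dips into swap and the partition is doing nothing for you.
grep -E '^Swap(Total|Free):' /proc/meminfo
```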

Opinions vary, but I wouldn’t use swap with 24 GB of RAM. And I’d comment out both swap entries in fstab (put a # at the beginning of each entry), reboot, and see what happens.
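A minimal sketch of that edit, demonstrated on copies of the two swap lines (the same `sed` expression, run as root on /etc/fstab after backing it up, comments them in place):

```shell
# Build a sample containing the two swap entries, then comment out every
# line with "swap" in a whitespace-delimited column. For the real fix,
# run the same sed on /etc/fstab after "cp /etc/fstab /etc/fstab.bak".
cat > fstab.sample <<'EOF'
UUID=f67e0a50-67a3-4b8a-b85e-a7098ce09e71 swap swap defaults 0 0
UUID=2b147284-6b3a-4eea-ae98-c9df8b1a49e0 swap swap nofail   0 0
EOF
sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' fstab.sample
cat fstab.sample
# After editing the real file, "swapoff -a" stops using swap immediately.
```

If a device is only sometimes absent but should be used when present, the alternative to deleting its entry is keeping it with `nofail` plus a short `x-systemd.device-timeout=` (e.g. `5s`), so a missing device no longer blocks boot for the full 90 seconds.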

Will try that!

If it succeeds, can I delete the swap partition from the SSD? I’ve heard it’s an advantage to have some unpartitioned space on an SSD (a reserve for wear leveling). Is that correct?

PS: While editing fstab I found a file “” from April (I swear I haven’t created that!). In there I find only ONE swap (the first one). Any chance TW created this on its own?

…back after reboot: :slight_smile:

Much better now:

systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character. @3.845s
└─ @3.845s
  └─cron.service @3.845s
    └─postfix.service @2.726s +1.118s
      └─ @2.725s
        └─ntpd.service @2.666s +58ms
          └─ @2.664s
            └─NetworkManager.service @2.572s +91ms
              └─SuSEfirewall2_init.service @1.629s +942ms
                └─ @1.628s
                  └─systemd-update-utmp.service @1.623s +4ms
                    └─auditd.service @1.598s +24ms
                      └─systemd-tmpfiles-setup.service @1.579s +18ms
                        └─ @1.577s
                          └─home-usser-Data2raid.mount @1.475s +101ms
                            └─systemd-fsck@dev-disk-by\x2duuid-44d225ce\x2d52bb\x2d44b6\x2db3be\x2d0de97d8c7a72.service @993ms +480ms
                              └─dev-disk-by\x2duuid-44d225ce\x2d52bb\x2d44b6\x2db3be\x2d0de97d8c7a72.device @992ms

…although 22 sec until the “kvm: disabled by bios” message appears on the console is still quite a lot, I guess…

I have an idea how this second swap partition might have ended in fstab:

I think I remember now that it was in April that I plugged in another TW 64 KDE install (on another SSD) via the eSATA port of the machine, updated it, and rebooted.

Might this result in the addition of swap to the fstab of the SSD directly attached to an internal SATA port?

No. That would be a bug of epic proportions, but I’ve never seen anything like it before, so I don’t think it is a bug. My bet is that the first swap space was already on the disk when you installed and created a second one. Can’t think of any other way.

What I can definitely exclude is that I created a swap. I never created the one described above, and I don’t need any swap on this machine, especially not on an SSD that is not present.

I have a triple boot (42.2, 42.3 and TW, all 64-bit KDE) on another machine. Whenever I update TW (absolutely reproducible) and reboot, I end up at the 42.3 login. If I enter YaST, Tumbleweed is first in the boot options; I confirm this and the next boot goes to TW. If I don’t go to YaST, the next boot again goes to 42.3.

So there is some kind of crisscross when TW is updated in the presence of other OSes on the same computer…

However, for this alternative SSD with TW I get:

lsblk -no NAME,UUID
├─sda1 e1029cef-e439-45a1-bf27-94f9148e11c7
├─sda2 4423e9d1-aea5-4dcf-9818-e4928d79cdb5
└─sda3 683f5077-c1d1-4931-bf20-89b480234f9c

…where sda1 is the swap of this install. Are these UUIDs stable?

I have no idea what changed this fstab. Nobody except me has the root password for this machine. I never wrote it down.

Noticing a couple of things:

  • Apparently TW does not control GRUB on its own, so updating 42.3 could lead to this misbehaviour
  • Did you upgrade TW on the extra SSD?

Yes, but I upgraded TW, not 42.3… :frowning:

In the case of the triple boot, all OSes are on the same HDD. In the case of the swap popping up in fstab: yes, I updated TW on the extra SSD plugged into the machine via the eSATA port…