I noticed that my computer seems to boot slower than it used to

I noticed that my computer seems to boot slower than it used to. What steps can I take to mitigate this?
I noticed this behavior when I made a fresh Leap install on a slower/older machine with a chipset two versions prior to my main computer's, but this older machine seems to boot quicker and use less CPU resources. Is this just a Tumbleweed vs. Leap thing?

Also, what is “baloo”?

If you’ve added services that need to start at boot time, that can increase the boot time, depending on what those services are.
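If you want to see what's set to start, something like this should list it (standard systemctl usage - the exact set of units will of course be specific to your install):

# list unit files enabled to start at boot
systemctl list-unit-files --state=enabled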

As we can’t look at your system, we need additional information. “Slower” is a non-quantitative measure, so it’s hard to judge how serious the issue (if there is one) may be. If you can provide information about what “slower” means to you (for example, did it used to boot in 30 seconds, but now it takes 5 minutes?), that can help.

More specifically, there is a tool that can provide boot time data - systemd-analyze - specifically with the blame and critical-chain parameters (run separately; you can't use them both at the same time).

That’ll give you some idea as to what’s taking time during startup. blame gives you info about each unit that starts. critical-chain tells you the times and execution order of things that are necessary to get the system running (other tasks may start in the background and may not actually be holding up the startup).

systemd-analyze

                  my computer  other computer
kernel            850ms        900ms
initrd            4.212s       3.226s
userspace         8.858s       6.763s
graphical.target  8.858s       7.754s

I was unable to use the parameters you provided:

usr_40476@localhost:~> systemd-analyze --blame
systemd-analyze: unrecognized option '--blame'
usr_40476@localhost:~> 
usr_40476@localhost:~> systemd-analyze --critical-chain
systemd-analyze: unrecognized option '--critical-chain'
usr_40476@localhost:~> 

Would putting the system in UEFI mode help boot times? I was feeling lazy at the time of installation on my Tumbleweed machine and skipped that; should that be a step I take?

INFO: here is the table with the correct info

           my computer  other computer
kernel     850ms        900ms
initrd     4.212s       3.226s
userspace  8.858s       6.763s
total      13.921s      10.890s

The parameters don't use -- in front of them - blame and critical-chain are subcommands, not options.

Just systemd-analyze blame or systemd-analyze critical-chain

man systemd-analyze will give you more information.
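If a picture helps, systemd-analyze can also render the whole boot as a timeline (the plot verb is standard; boot.svg is just an example file name):

# write an SVG timeline of unit start times, viewable in a browser
systemd-analyze plot > boot.svg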

FYI, 8 seconds isn't terrible. Mine takes ~25 seconds to boot (I have several services that take 4-7 seconds to start up, but they start in parallel). The critical path shows this for my system:

[jhenderson@TheEarth ~]$ systemd-analyze critical-chain
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @25.871s
└─multi-user.target @25.871s
  └─getty.target @25.871s
    └─getty@tty1.service @25.870s
      └─plymouth-quit-wait.service @4.848s +20.921s
        └─systemd-user-sessions.service @4.757s +43ms
          └─network.target @4.719s
            └─NetworkManager.service @3.974s +744ms
              └─network-pre.target @3.953s
                └─wpa_supplicant.service @10.068s +30ms
                  └─dbus.service @2.909s +26ms
                    └─basic.target @2.851s
                      └─sockets.target @2.851s
                        └─pcscd.socket @2.851s
                          └─sysinit.target @2.848s
                            └─auditd.service @2.791s +56ms
                              └─systemd-tmpfiles-setup.service @2.697s +89ms
                                └─systemd-journal-flush.service @1.832s +804ms
                                  └─var.mount @1.813s +16ms
                                    └─dev-nvme0n1p4.device @584542y 2w 2d 20h 1min 46.593s +4.749s

In this output, you can see that plymouth-quit-wait.service takes ~21 seconds to run, so if I were going to troubleshoot this, I'd start by looking at that service, because it's in the critical path.
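If I were digging into it, I'd probably start with the journal for that unit - something along these lines (standard journalctl/systemctl usage; the unit name is taken from the output above):

# show this boot's log entries for the slow unit
journalctl -b -u plymouth-quit-wait.service
# show its current state and recent log tail
systemctl status plymouth-quit-wait.service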

The other notable service that systemd-analyze blame shows on my system is backup-rpmdb.service - that's not in the critical path, but it's taking 16 seconds to run:

20.921s plymouth-quit-wait.service
16.036s backup-rpmdb.service
 7.213s wazuh-agent.service
 6.644s docker.service
 5.529s NetworkManager-wait-online.service
 4.765s dev-ttyS1.device
 4.765s sys-devices-platform-serial8250-tty-ttyS1.device
 4.763s dev-ttyS0.device
 4.763s sys-devices-platform-serial8250-tty-ttyS0.device
 4.762s dev-ttyS10.device
 4.762s sys-devices-platform-serial8250-tty-ttyS10.device
 4.759s dev-ttyS11.device
 4.759s sys-devices-platform-serial8250-tty-ttyS11.device
 4.758s dev-ttyS13.device
 4.758s sys-devices-platform-serial8250-tty-ttyS13.device
 4.757s sys-devices-platform-serial8250-tty-ttyS14.device
 4.757s dev-ttyS14.device
 4.756s sys-devices-platform-serial8250-tty-ttyS16.device
 4.756s dev-ttyS16.device
 4.756s dev-ttyS12.device
[...]

(There are multiple pages of output, so this is just the start.)
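As an aside, critical-chain can also be pointed at a single unit to see what it waited on - handy for confirming that something like backup-rpmdb.service really is off the critical path (standard systemd-analyze usage):

# show the blocking chain for one specific unit
systemd-analyze critical-chain backup-rpmdb.service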

This is what my system shows - yours will likely show something different. But again, 8 seconds is pretty good, and not something I'd be terribly concerned about.


My concern sprouted from the older/inferior-chipset machine being quicker to boot up and having lower idle usage, in case you were wondering. Anyway, here is the command output you asked for:

usr_40476@localhost:~> systemd-analyze blame
15.974s backup-rpmdb.service
 5.182s dev-ttyS0.device
 5.182s sys-devices-platform-serial8250-tty-ttyS0.device
 5.177s dev-ttyS12.device
 5.177s sys-devices-platform-serial8250-tty-ttyS12.device
 5.174s sys-devices-platform-serial8250-tty-ttyS10.device
 5.174s dev-ttyS10.device
 5.158s sys-devices-platform-serial8250-tty-ttyS11.device
 5.158s dev-ttyS11.device
 5.153s dev-ttyS4.device
 5.153s sys-devices-pci0000:00-0000:00:16.3-tty-ttyS4.device
 5.152s sys-devices-platform-serial8250-tty-ttyS13.device
 5.152s dev-ttyS13.device
 5.152s dev-ttyS1.device
 5.152s sys-devices-platform-serial8250-tty-ttyS1.device
 5.151s dev-ttyS17.device
 5.151s sys-devices-platform-serial8250-tty-ttyS17.device
 5.150s dev-ttyS19.device
 5.150s sys-devices-platform-serial8250-tty-ttyS19.device
 5.150s dev-ttyS14.device
 5.150s sys-devices-platform-serial8250-tty-ttyS14.device
 5.143s dev-ttyS18.device
 5.143s sys-devices-platform-serial8250-tty-ttyS18.device
 5.136s dev-ttyS15.device
 5.136s sys-devices-platform-serial8250-tty-ttyS15.device
 5.135s dev-ttyS16.device
 5.135s sys-devices-platform-serial8250-tty-ttyS16.device
 5.134s dev-ttyS2.device
 5.134s sys-devices-platform-serial8250-tty-ttyS2.device
 5.132s dev-ttyS22.device
 5.132s sys-devices-platform-serial8250-tty-ttyS22.device
 5.129s sys-devices-platform-serial8250-tty-ttyS31.device
 5.129s dev-ttyS31.device
 5.128s sys-devices-platform-serial8250-tty-ttyS9.device
 5.128s dev-ttyS9.device
 5.128s dev-ttyS21.device
 5.128s sys-devices-platform-serial8250-tty-ttyS21.device
 5.127s dev-ttyS23.device
 5.127s sys-devices-platform-serial8250-tty-ttyS23.device
 5.125s dev-ttyS20.device
 5.125s sys-devices-platform-serial8250-tty-ttyS20.device
 5.125s sys-devices-platform-serial8250-tty-ttyS7.device
 5.125s dev-ttyS7.device
 5.125s sys-devices-platform-serial8250-tty-ttyS8.device
 5.125s dev-ttyS8.device
 5.125s sys-devices-platform-serial8250-tty-ttyS25.device
 5.125s dev-ttyS25.device
 5.124s dev-ttyS5.device
 5.124s sys-devices-platform-serial8250-tty-ttyS5.device
 5.119s sys-devices-platform-serial8250-tty-ttyS6.device
 5.119s dev-ttyS6.device
 5.118s dev-ttyS3.device
 5.118s sys-devices-platform-serial8250-tty-ttyS3.device
 5.117s dev-ttyS24.device
 5.117s sys-devices-platform-serial8250-tty-ttyS24.device
 5.116s sys-devices-platform-serial8250-tty-ttyS29.device
 5.115s dev-ttyS29.device
 5.114s dev-ttyS27.device
 5.114s sys-devices-platform-serial8250-tty-ttyS27.device
 5.109s sys-devices-platform-serial8250-tty-ttyS26.device
 5.109s dev-ttyS26.device
 5.108s sys-devices-platform-serial8250-tty-ttyS30.device
 5.108s sys-devices-platform-serial8250-tty-ttyS28.device
[...]
usr_40476@localhost:~> systemd-analyze critical-chain 
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.

graphical.target @8.858s
└─multi-user.target @8.857s
  └─cron.service @8.857s
    └─postfix.service @7.472s +1.361s
      └─network.target @7.405s
        └─NetworkManager.service @6.529s +872ms
          └─network-pre.target @6.505s
            └─wpa_supplicant.service @7.517s +222ms
              └─dbus.service @4.795s +156ms
                └─basic.target @4.767s
                  └─sockets.target @4.766s
                    └─snapd.socket @4.740s +24ms
                      └─sysinit.target @4.722s
                        └─systemd-backlight@leds:g15::kbd_backlight.service @5.805s +54ms
                          └─system-systemd\x2dbacklight.slice @2.668s
                            └─system.slice
                              └─-.slice

@40476 Transient maintenance services running, like the above…

I am confused - what does that mean?

@40476 System maintenance tasks (man-db rebuild, rpm database rebuild, disk checks, etc.) all happen at different times - after X number of boots, after updating packages, and so on. Hence the difference in boot times.
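If you're curious which of those maintenance jobs are scheduled on your system, the timer list should show them (standard systemctl usage; the exact timer names vary by release):

# list all scheduled timers, including inactive ones
systemctl list-timers --all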

I think I have one more question: will enabling UEFI help boot times at all? I neglected to set it up, since I was feeling lazy at the time of installation.

@40476 Maybe - hard to say, as I think it varies with hardware… be happy the system boots :wink:

At least I used to be able to go make a coffee while the system came up; now I can only turn my back…


The systems likely run different services. The older system probably runs fewer services.

I don’t really see anything to be concerned by here. You’re running postfix, which you may not need (depends on what you’re using the system for), and that’s adding ~1.3 seconds to the boot time.
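If you check and decide nothing on the machine needs local mail delivery, disabling it is a one-liner (standard systemctl usage - just make sure nothing local depends on it first):

# stop postfix now and keep it from starting at boot
sudo systemctl disable --now postfix.service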

But again, 8 seconds isn’t bad at all, and I don’t really see anything to be concerned by. In normal use, rebooting a system is not a frequent occurrence. I reboot my system maybe a couple of times a week when I do updates, if it’s needed.

The effect, if any, would be tiny.

The boot method you use is involved in loading the kernel and initrd, but most of your boot time is after the kernel has been loaded.
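You can see how the time splits across those phases with the default verb (on UEFI systems with a cooperating boot loader, firmware and loader times may be reported as well):

# print the time spent in each boot phase
systemd-analyze time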

For me, the largest recent change is that using an SSD speeds things up, and using encryption slows things down.


Unlikely. That's just what's used to get the system started. You might also reduce the GRUB timeout if that 8-second wait before the boot process starts is bothering you.
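If you do want to trim it, on openSUSE that's typically a matter of editing GRUB_TIMEOUT and regenerating the config (assuming the default grub2 setup - back up the file first):

# set the menu wait to 2 seconds in /etc/default/grub
sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=2/' /etc/default/grub
# regenerate the grub configuration
sudo grub2-mkconfig -o /boot/grub2/grub.cfg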

But IMO, focusing on optimizing an 8-second startup is probably not a good use of time, apart from learning how to diagnose actual issues with system startup. :slight_smile:


Thank you all for this information; it has helped greatly, and I have also learned some “cool” new commands :confetti_ball:. Also (separate thing), I am trying to impress somebody and want to know some cool programs to put on their new computer - what programs would you recommend?

I'm not even getting that much time, Malcolm, and I am running a 9-year-old desktop PC, an Asus.

@Fraser_Bell It's all very subjective: some systems only show initrd and userspace; some show kernel, initrd, and userspace; and some show firmware, loader, kernel, initrd, and userspace.


How long ago was that? Kernels and initrds are bigger than they used to be:

12.1: 
> ls -hlrtgG /disks/s121/boot/vmlinuz-3*
-rw-r--r-- 1 4.8M Feb 27  2013 /disks/s121/boot/vmlinuz-3.1.10-1.19-desktop
-rw-r--r-- 1 4.8M Jun  7  2013 /disks/s121/boot/vmlinuz-3.1.10-1.29-desktop
13.1:
> ls -hlrtgG /disks/s131/boot/vmlinuz-3*
-rw-r--r-- 1 5.2M Jan 19  2016 /disks/s131/boot/vmlinuz-3.12.51-2-desktop
-rw-r--r-- 1 5.2M Jul  1  2016 /disks/s131/boot/vmlinuz-3.12.59-47-desktop
-rw-r--r-- 1 5.2M Oct 20  2016 /disks/s131/boot/vmlinuz-3.12.62-55-desktop
-rw-r--r-- 1 5.2M Dec 12  2016 /disks/s131/boot/vmlinuz-3.12.67-64-desktop
15.0:
> ls -hlrtgG /disks/s150/boot/vmlinuz-4*
-rw-r--r-- 1 6.8M May 14  2019 /disks/s150/boot/vmlinuz-4.12.14-lp150.12.61-default
-rw-r--r-- 1 6.8M Aug 11  2019 /disks/s150/boot/vmlinuz-4.12.14-lp150.12.70-default
-rw-r--r-- 1 6.8M Oct  9  2019 /disks/s150/boot/vmlinuz-4.12.14-lp150.12.76-default
-rw-r--r-- 1 6.8M Nov 12  2019 /disks/s150/boot/vmlinuz-4.12.14-lp150.12.82-default
15.5:
> ls -hlrtgG /boot/vmlinuz-5*
-rw-r--r-- 1  12M May 17  2023 /boot/vmlinuz-5.14.21-150500.53-default
-rw-r--r-- 1  12M Oct  6 03:34 /boot/vmlinuz-5.14.21-150500.55.31-default
-rw-r--r-- 1  12M Jan 16 05:05 /boot/vmlinuz-5.14.21-150500.55.44-default
-rw-r--r-- 1  12M Feb 12 04:31 /boot/vmlinuz-5.14.21-150500.55.49-default

All of the above is from one Pentium 4 630 HyperThreading PC with 2G RAM, freshly gathered today, same as below.

> inxi -CMS
System:
  Host: gx62b Kernel: 3.12.67-64-desktop arch: x86_64 bits: 64
  Desktop: KDE Plasma v: 4.11.5 Distro: openSUSE 13.1 (Bottle)
Machine:
  Type: Desktop System: Dell product: OptiPlex GX620 v: N/A serial: DQ7Q891
  Mobo: Dell model: 0F8101 serial: ..CN698615CL0E65. BIOS: Dell v: A11
    date: 11/30/2006
CPU:
  Info: single core model: Intel Pentium 4 bits: 64 type: MT cache: L2: 2 MiB
  Speed (MHz): avg: 3192 min/max: N/A cores: 1: 3192 2: 3192
> systemd-analyze
Startup finished in 11.160s (kernel) + 40.885s (userspace) = 52.045s

It's hard to compare using only systemd-analyze, because it obviously outputs differently now from how it did with the version from 13.1 in 2016, when the 3.12.67 kernel was installed here. Systemd has had a lot of time meanwhile to become more efficient at initialization. Measuring visually with a stopwatch, 13.1 brought up the login screen in 70 seconds; 15.5 took 43 seconds to do what's ostensibly the same thing. All / filesystems on the PC's SSD are 6,400MB EXT3.

> inxi -CMS
System:
  Host: gx62b Kernel: 5.14.21-150500.55.49-default arch: x86_64 bits: 64
  Desktop: TDE (Trinity) v: R14.1.1 Distro: openSUSE Leap 15.5
Machine:
  Type: Desktop System: Dell product: OptiPlex GX620 v: N/A serial: DQ7Q891
  Mobo: Dell model: 0F8101 serial: ..CN698615CL0E65. BIOS: Dell v: A11
    date: 11/30/2006
CPU:
  Info: single core model: Intel Pentium 4 bits: 64 type: MT cache: L2: 2 MiB
  Speed (MHz): avg: 3192 min/max: N/A cores: 1: 3192 2: 3192
> systemd-analyze
Startup finished in 3.027s (kernel) + 12.979s (initrd) + 14.039s (userspace) = 30.045s
graphical.target reached after 13.995s in userspace
>

Looking for more comparability from systemd-analyze, I booted 15.0 too. It took 68 seconds on the clock to get to the greeter:

> systemd-analyze
Startup finished in 1.931s (kernel) + 11.660s (initrd) + 1min 26.099s (userspace) = 1min 39.692s
> inxi -CMS
System:
  Host: gx62b Kernel: 4.12.14-lp150.12.82-default arch: x86_64 bits: 64
  Desktop: TDE (Trinity) v: R14.0.6 Distro: openSUSE Leap 15.0
Machine:
  Type: Desktop System: Dell product: OptiPlex GX620 v: N/A serial: DQ7Q891
  Mobo: Dell model: 0F8101 serial: ..CN698615CL0E65. BIOS: Dell v: A11
    date: 11/30/2006
CPU:
  Info: single core model: Intel Pentium 4 bits: 64 type: MT cache: L2: 2 MiB
  Speed (MHz): avg: 3192 min/max: N/A cores: 1: 3192 2: 3192
>

12.1 here has no working systemd-analyze. Boot time to greeter on the stopwatch was 55 seconds. TW20240310 booted its 14M 6.6.21 kernel to the greeter in 46 seconds, and:

> systemd-analyze
Startup finished in 3.488s (kernel) + 11.561s (initrd) + 16.431s (userspace) = 31.481s
graphical.target reached after 16.428s in userspace.
>

1m30s on my laptop :older_man:. I don’t think you need to worry about 13s.

Startup finished in 10.996s (firmware) + 27.463s (loader) + 739ms (kernel) + 6.681s (initrd) + 37.715s (userspace) = 1min 23.596s 
graphical.target reached after 37.715s in userspace.

I do kexec reboots to avoid firmware and loader wait times. The loader (GRUB) time could be lower with quick reflexes.
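For reference, a kexec reboot looks roughly like this (assuming the default openSUSE /boot/vmlinuz and /boot/initrd symlinks; details vary per setup):

# load the current kernel/initrd, reusing the running kernel's command line
sudo kexec -l /boot/vmlinuz --initrd=/boot/initrd --reuse-cmdline
# reboot straight into it, skipping firmware and boot loader
sudo systemctl kexec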

This is the file indexer on KDE. It's only useful if you search for files or want to sort a folder by image width, for example. It can take some resources to run, since it needs to read your disk frequently.
You can set which folders are indexed, or disable Baloo entirely, in System Settings -> Search -> File Search.
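There's also a command-line tool if you prefer the terminal (balooctl ships with Baloo; on very recent Plasma releases it may be named balooctl6):

# check whether the indexer is running and what it's doing
balooctl status
# suspend indexing temporarily, or turn it off entirely
balooctl suspend
balooctl disable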