Postfix slow start - should I disable it?

After upgrading from Leap 15.1 to 15.2 on a Ryzen 3 2200G:


~> systemd-analyze blame
         13.188s postfix.service
          4.995s wicked.service
          2.723s display-manager.service
          2.206s plymouth-quit-wait.service
          2.174s systemd-udev-settle.service
          1.301s btrfsmaintenance-refresh.service
          1.236s dracut-initqueue.service
          ....

In my 15.1 box, a Ryzen 5 2600:

          5.073s nmb.service
          3.169s dracut-initqueue.service
          3.015s btrfsmaintenance-refresh.service
          1.529s systemd-udev-settle.service
          1.169s display-manager.service
          1.025s wicked.service
           602ms vboxdrv.service
           592ms jexec.service
           545ms postfix.service
           435ms plymouth-quit-wait.service

I’m curious about the big difference between the services, and whether I can disable/uninstall postfix. Both are Plasma desktops, not remotely managed.

Thanks,

Hi
Are you only using IPv4? If so, edit /etc/postfix/main.cf and change inet_protocols = all to inet_protocols = ipv4 (it's way down near the bottom of the file, around line 706) and all should be good. For wicked, edit the config /etc/sysconfig/network/config and change WAIT_FOR_INTERFACES="30" to a 1.
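A minimal sketch of those two one-line edits, rehearsed on throwaway copies first (the paths and values are from the advice above; once the output looks right, apply the same sed expressions to the real files as root, after backing them up):

```shell
# Sketch only: rehearse the edits on temporary copies before touching
# /etc/postfix/main.cf and /etc/sysconfig/network/config.
tmpdir=$(mktemp -d)

# Stand-ins for the relevant line of each real config file.
printf 'inet_protocols = all\n' > "$tmpdir/main.cf"
printf 'WAIT_FOR_INTERFACES="30"\n' > "$tmpdir/config"

# Postfix: listen on IPv4 only instead of "all" (IPv4 + IPv6).
sed -i 's/^inet_protocols = all/inet_protocols = ipv4/' "$tmpdir/main.cf"

# wicked: wait at most 1 second for mandatory interfaces instead of 30.
sed -i 's/^WAIT_FOR_INTERFACES="30"/WAIT_FOR_INTERFACES="1"/' "$tmpdir/config"

cat "$tmpdir/main.cf" "$tmpdir/config"
```

After editing the real files, `systemctl restart postfix` (and a reboot to see the wicked change) makes the new settings effective.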

Malcolm, are you a wizard? Or do you have a backdoor to my PCs?

You nailed it again. The 15.1 (fast) box has inet_protocols = ipv4 and WAIT_FOR_INTERFACES=1,
and the new 15.2 (slow) box had the default values, all and "30".

After changing them:
:~> systemd-analyze blame
          2.601s display-manager.service
          2.140s systemd-udev-settle.service
          2.086s plymouth-quit-wait.service
          1.282s btrfsmaintenance-refresh.service
          1.276s dracut-initqueue.service
          1.022s wicked.service
           496ms postfix.service
           467ms udisks2.service

Simply wonderful, you’re my hero!

Selected YaST2 lan > Global Options > Network Setup Method > Network Services Disabled, and enabled:

3400G:~ # systemctl list-unit-files systemd-networkd.service systemd-resolved.service
UNIT FILE                STATE   VENDOR PRESET
systemd-networkd.service enabled disabled
systemd-resolved.service enabled disabled

2 unit files listed.
3400G:~ #
3400G:~ # systemd-analyze
Startup finished in 9.269s (firmware) + 3.533s (loader) + 2.054s (kernel) + 2.138s (initrd) + 1.682s (userspace) = 18.677s
graphical.target reached after 1.670s in userspace
3400G:~ #
3400G:~ # systemd-analyze critical-chain network.target

network.target @606ms
└─systemd-resolved.service @431ms +175ms
  └─systemd-networkd.service @374ms +54ms
    └─systemd-udevd.service @289ms +83ms
      └─systemd-tmpfiles-setup-dev.service @268ms +11ms
        └─kmod-static-nodes.service @257ms +5ms
          └─systemd-journald.socket
            └─-.mount
              └─system.slice
                └─-.slice
3400G:~ #
3400G:~ # systemd-analyze critical-chain postfix.service

postfix.service +720ms
└─time-sync.target @695ms
  └─chronyd.service @608ms +86ms
    └─nss-lookup.target @607ms
      └─systemd-resolved.service @431ms +175ms
        └─systemd-networkd.service @374ms +54ms
          └─systemd-udevd.service @289ms +83ms
            └─systemd-tmpfiles-setup-dev.service @268ms +11ms
              └─kmod-static-nodes.service @257ms +5ms
                └─systemd-journald.socket
                  └─-.mount
                    └─system.slice
                      └─-.slice
3400G:~ #

By default, both IPv4 and IPv6 are enabled on Ethernet and WLAN.

3400G:~ # networkctl
IDX LINK   TYPE     OPERATIONAL SETUP
  1 lo     loopback carrier     unmanaged
  2 eno1   ether    routable    configured
  3 wlp7s0 wlan     routable    configured

3 links listed.
3400G:~ #
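For an IPv4-only setup under systemd-networkd, a minimal .network unit does the trick. This is a sketch only: the file name 20-wired.network is an arbitrary example, and eno1 is the wired interface from the networkctl output above.

```
# /etc/systemd/network/20-wired.network  (example file name)
[Match]
Name=eno1

[Network]
DHCP=ipv4                  # request an IPv4 lease only
LinkLocalAddressing=ipv4   # no IPv6 link-local address either
```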

One of the first things I do after a new install is disable IPv6 in the YaST network config, but apparently that doesn’t propagate to the postfix config.

Curiosity: do the quotation marks make any difference? I changed "30" to 1, but should it be "1"?

Thanks.

Hi
AFAIK "1"
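Since /etc/sysconfig files are read shell-style, both forms are most likely understood, but staying with the quoted form matches the file's own convention (as in the shipped default shown further down this thread):

```
# /etc/sysconfig/network/config
WAIT_FOR_INTERFACES="1"
```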

Hello,

The same question here.
Is it a problem if I disable/uninstall/mask postfix?

It depends. Linux applications use a mail transfer agent for notifications. If you don’t mind losing those, delete postfix. In the end, deleting everything could solve all your problems.

My machines use postfix for sending and fetchmail for retrieving mail: “yast2 mail”. They are easy to configure and reliable in operation.

Same here, I disabled IPv6 as well, and went from wicked/DHCP over NetworkManager (first DHCP, then static IPv4 only) finally to systemd-networkd (minimal IPv4 only). Startup times between those services differ by orders of magnitude (1 s vs. ca. 200 ms vs. 25 ms). For me, systemd-networkd is the clear winner here.

Similarly with postfix: I tested it, then exim, and found that exim had quicker startups. After a few years without emails, I decided to disable/mask/uninstall exim. Similar journeys: ntpd→chronyd, also SDDM→XDM→KDM.

I blame systemd for rekindling my interest in the boot process. Having wrestled with classical init scripts for over 20 years, I grew accustomed to the sad fact that on every distro I had to work with, professionally and at home, the startup scripts were different; each distro had its own quirks, bugs and packages for them. Systemd changed that: if a bug is found, all systemd-based distros benefit from the fix. And I benefit from learning about all that, meanwhile improving boot times from almost 20 s down to about 3 s (ok, the purchase of a faster SSD happened somewhere between those measurements as well). :wink:
After 6 years of minimizing startup times as a hobby using mostly systemd/udev/dracut, I have rediscovered another component worthy of close scrutiny: the kernel itself.

See, much in openSUSE is derived from the server-centric SUSE SLES main line. Despite SUSE also offering SLED as a desktop solution, both the SLES and SLED kernels seem to include everything a Linux server could possibly need: support for NUMA, LVM, btrfs, XFS, hugepages for massive DBs (hello, SAP HANA) and for virtualization setups, quotas and similar resource-management instrumentation. On the other hand, since openSUSE is regularly used on mobile hardware, Wi-Fi, Bluetooth and IPv6 support are mandatory.

I invested a rainy weekend this summer to customize a kernel exactly for my needs: It was easy to dispose of all the components listed in the previous paragraph. Now when I restart my system, boot times are regularly under one second from Grub2 to KDE+networking. Cold boots are about 1.1 seconds. I’ve been using one primary ext4 partition spanning a whole MBR-partitioned SSD; no EFI boot, no swap, no initial ramdisks.

Not having all that functionality lets the kernel just fly through its initialization, then mount the boot device and invoke systemd. On the other hand, the kernel must now handle whatever the initrd-based dracut and systemd previously managed: in my case, patching the microcode (CONFIG_EXTRA_FIRMWARE="intel-ucode/06-3c-03" for my Haswell Core i5 CPU) and providing the filesystem driver for the boot device (CONFIG_EXT4_FS=y).
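The two options mentioned, as they would appear in the kernel's .config. This is a sketch: the firmware path is the Haswell-specific one from the text, and CONFIG_EXTRA_FIRMWARE_DIR is the usual companion option pointing at the firmware tree.

```
# .config fragment: built-in (=y) pieces that replace the initrd's job
CONFIG_EXT4_FS=y                               # root-fs driver compiled in
CONFIG_EXTRA_FIRMWARE="intel-ucode/06-3c-03"   # early microcode for this CPU
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"      # where to find the blob at build time
```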

A few days ago, I compiled the 5.9.3 kernel and the newest Nvidia blobs (455.38) in one go, and for the first time it went flawlessly. It’s gratifying to still be learning new things:

  • Backup frequently so messing up is less painful.
  • Optimizing the kernel for size or for performance only seems to result in different binary sizes. Personally, I didn’t notice any
    differences in boot times or runtime performance of my usual workloads, including compiling a new kernel (which takes about 4 minutes).
  • Despite the ongoing controversy between the kernel developers and Nvidia over EXPORT_SYMBOL_GPL, the newest 455.38 driver works perfectly
    with 5.9-mainline kernels so far, provided you don't need CUDA (an obscure command-line option in Nvidia's install script called
    --no-unified-memory does the trick).
  • It’s invaluable to consistently log results, document changes, measure times and resources, compare findings, and to automate as much of the process as reasonable; but without creating too much extra load to the system under test.
  • Learning and exploiting quirks is fun, but riddle me this: having udev_log set to "debug" in /etc/udev/udev.conf produces almost 4000 lines of dmesg/journalctl messages during each boot, yet it lets my system boot about 150 ms faster than udev_log="err" (which leads to only about 700 lines of boot messages total!). Why?
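For anyone who wants to reproduce that measurement, the setting lives in /etc/udev/udev.conf (a sketch; the shipped file carries the option commented out):

```
# /etc/udev/udev.conf
udev_log="debug"    # try "err" vs. "debug" and compare systemd-analyze times
```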

All that joyous tinkering because 6 years ago I was wondering »why does my new rig boot so slowly«?

I stayed away from low-level tinkering (except for compress="cat" in /etc/dracut.conf.d/01-dist.conf) and used systemctl only for optimization:
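For reference, a sketch of that dracut drop-in:

```
# /etc/dracut.conf.d/01-dist.conf
# "cat" means no compression at all: a larger initramfs, but it skips the
# decompression step on every boot, which pays off on a fast SSD.
compress="cat"
```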

Services running:

  apache2.service                    loaded active running The Apache Webserver                                     
  apparmor.service                   loaded active exited  Load AppArmor profiles                                   
  chronyd.service                    loaded active running NTP client/server                                        
  cups-browsed.service               loaded active running Make remote CUPS printers available locally              
  cups.service                       loaded active running CUPS Scheduler                                           
  dbus.service                       loaded active running D-Bus System Message Bus                                 
  display-manager.service            loaded active running X Display Manager                                        
  dracut-shutdown.service            loaded active exited  Restore /run/initramfs on shutdown                       
  fetchmail.service                  loaded active running A remote-mail retrieval utility                          
  getty@tty1.service                 loaded active running Getty on tty1                                            
  haveged.service                    loaded active running Entropy Daemon based on the HAVEGE algorithm             
  hd-idle.service                    loaded active running hd-idle disk spindown service                            
  irqbalance.service                 loaded active running irqbalance daemon                                        
  kbdsettings.service                loaded active exited  Apply settings from /etc/sysconfig/keyboard              
  kmod-static-nodes.service          loaded active exited  Create list of static device nodes for the current kernel
  lm_sensors.service                 loaded active exited  Initialize hardware monitoring sensors                   
  mcelog.service                     loaded active running Machine Check Exception Logging Daemon                   
  minidlna.service                   loaded active running MiniDLNA is a DLNA/UPnP-AV server software               
  polkit.service                     loaded active running Authorization Manager                                    
  postfix.service                    loaded active running Postfix Mail Transport Agent                             
  rtkit-daemon.service               loaded active running RealtimeKit Scheduling Policy Service                    
  systemd-journal-flush.service      loaded active exited  Flush Journal to Persistent Storage                      
  systemd-journald.service           loaded active running Journal Service                                          
  systemd-logind.service             loaded active running User Login Management                                    
  systemd-modules-load.service       loaded active exited  Load Kernel Modules                                      
  systemd-networkd.service           loaded active running Network Service                                          
  systemd-random-seed.service        loaded active exited  Load/Save Random Seed                                    
  systemd-remount-fs.service         loaded active exited  Remount Root and Kernel File Systems                     
  systemd-resolved.service           loaded active running Network Name Resolution                                  
  systemd-sysctl.service             loaded active exited  Apply Kernel Variables                                   
  systemd-tmpfiles-setup-dev.service loaded active exited  Create Static Device Nodes in /dev                       
  systemd-tmpfiles-setup.service     loaded active exited  Create Volatile Files and Directories                    
  systemd-udev-trigger.service       loaded active exited  Coldplug All udev Devices                                
  systemd-udevd.service              loaded active running Rule-based Manager for Device Events and Files           
  systemd-update-utmp.service        loaded active exited  Update UTMP about System Boot/Shutdown                   
  systemd-user-sessions.service      loaded active exited  Permit User Sessions                                     
  udisks2.service                    loaded active running Disk Manager                                             
  upower.service                     loaded active running Daemon for power management                              
  user-runtime-dir@1000.service      loaded active exited  User Runtime Directory /run/user/1000                    
  user@1000.service                  loaded active running User Manager for UID 1000

Boot times:

    5.375580] systemd[1]: Startup finished in 786ms (kernel) + 2.614s (initrd) + 1.973s (userspace) = 5.375s.
    5.292787] systemd[1]: Startup finished in 785ms (kernel) + 2.492s (initrd) + 2.015s (userspace) = 5.292s.
    5.414146] systemd[1]: Startup finished in 776ms (kernel) + 2.700s (initrd) + 1.937s (userspace) = 5.414s.
    5.408588] systemd[1]: Startup finished in 762ms (kernel) + 2.660s (initrd) + 1.985s (userspace) = 5.408s.
    5.309166] systemd[1]: Startup finished in 822ms (kernel) + 2.525s (initrd) + 1.961s (userspace) = 5.309s.
    5.355593] systemd[1]: Startup finished in 759ms (kernel) + 2.574s (initrd) + 2.021s (userspace) = 5.355s.
    5.333888] systemd[1]: Startup finished in 785ms (kernel) + 2.605s (initrd) + 1.943s (userspace) = 5.333s.
    5.160569] systemd[1]: Startup finished in 782ms (kernel) + 2.556s (initrd) + 1.821s (userspace) = 5.160s.

SSDs are really fast. The system partition uses btrfs:

    3.720311] systemd[1]: Reached target Local File Systems (Pre).
    3.721359] systemd[1]: Mounting /.snapshots...
    3.722381] systemd[1]: Mounting /boot/efi...
    3.723437] systemd[1]: Mounting /boot/grub2/i386-pc...
    3.724513] systemd[1]: Mounting /boot/grub2/x86_64-efi...
    3.725683] systemd[1]: Mounting /home...
    3.727257] systemd[1]: Mounting /home-SSD...
    3.728501] systemd[1]: Mounting /opt...
    3.729613] systemd[1]: Mounting /root...
    3.730813] systemd[1]: Mounting /srv...
    3.731798] systemd[1]: Mounting /usr/local...
    3.732835] systemd[1]: Mounting /var...
    3.740254] systemd[1]: Mounted /.snapshots.
    3.741006] systemd[1]: Mounted /boot/efi.
    3.741710] systemd[1]: Mounted /boot/grub2/i386-pc.
    3.742535] systemd[1]: Mounted /boot/grub2/x86_64-efi.
    3.743211] systemd[1]: Mounted /home.
    3.743884] systemd[1]: Mounted /opt.
    3.747261] systemd[1]: Mounted /home-SSD.
    3.747945] systemd[1]: Mounted /root.
    3.748594] systemd[1]: Mounted /srv.
    3.749220] systemd[1]: Mounted /usr/local.
    3.749817] systemd[1]: Mounted /var.
    3.750694] systemd[1]: Condition check resulted in Lock Directory being skipped.
    3.750734] systemd[1]: Condition check resulted in Runtime Directory being skipped.
    3.750765] systemd[1]: Reached target Local File Systems.

Absolutely, the »compress="cat"« trick with dracut really works well with SSDs.

For the server infrastructure I’ve been responsible for (for some of which my boss paid for SLES SLAs, 24/7 support etc.), I stay away from any tinkering that would invalidate a service agreement.
I consider your system more of a server too (Apache running, many, many mounts, probably rare reboots, right?), and as such your boot times are fantastic, Karl.

I hope my next self-built rig will be some AMD Zen 3 system for which I plan to commit to UEFI and btrfs. Until then, I’m just glad I can avoid the complexity.
For my current home rig (which only runs when I am at home, and thanks to the pandemic that has increased) though, all bets are off. :slight_smile:
My notes for the next kernel compile, among other things, include:

  • CONFIG_MODULE_UNLOAD — makes kernel faster and simpler (according to the Kconfig help texts)
  • CONFIG_MTRR_SANITIZER — didn’t clean up my boot messages; dump the sanitizer
  • SPARSE_IRQ — all distros I know have this enabled, yet: »If you don’t know what to do here, say N.« (Kconfig again)

Here with a fresh Leap 15.2 install – /etc/sysconfig/network/config –


## Type:        integer
## Default:     30
#
# Some interfaces need some time to come up or come asynchronously via hotplug.
# WAIT_FOR_INTERFACES is a global wait for all mandatory interfaces in
# seconds. If empty no wait occurs.
#
WAIT_FOR_INTERFACES="30"

Forgot to ask (and, more on-topic for this thread): On your system, does postfix ever provide you with any system mail?

I remember times when root was inundated with daily messages from the locally running firewall (intrusion/[D]DOS attempts, dropped packets), httpd (404 access, resource unavailable etc.), failed login attempts by so-and-so, warnings about almost-full filesystems and other things I can’t remember anymore.

Is this still a thing? Even if so, does it really necessitate running postfix all the time?
(The mails still get queued in /var/spool/mail anyways — which, in my case, has been empty for years.)

The only thing I have on some systems which regularly sends mail to root is “Rootkit Hunter” – the rkhunter package.

The following occasionally send mail to root if something is broken:

  • Cron job scripts;
  • sudo;
  • mdadm.
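mdadm is a good example of why a local MTA is still handy: its monitor mails whichever address is configured in mdadm.conf. A sketch:

```
# /etc/mdadm.conf
# mdadm --monitor sends degraded-array / failed-disk alerts here
MAILADDR root
```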

Host erlangen has an impressive list of devices: http://www.mistelberger.net/erlangen-boot.svg Power draw is 25 W when only the servers are running. It resumes nicely from suspend to RAM. I reboot the machine when upgrading Tumbleweed requires it:

erlangen:~ # journalctl -b -u systemd-suspend.service -o short-monotonic  
-- Logs begin at Sun 2020-11-08 20:48:18 CET, end at Mon 2020-11-09 22:02:48 CET. -- 
 3982.731219] erlangen systemd[1]: Starting Suspend... 
 3982.747576] erlangen systemd-sleep[11707]: INFO: Skip running /usr/lib/systemd/system-sleep/grub2.sleep for suspend 
 3982.751324] erlangen systemd-sleep[11705]: Suspending system... 
 3984.112100] erlangen systemd-sleep[11705]: System resumed. 
 3984.117867] erlangen systemd-sleep[11766]: INFO: Skip running /usr/lib/systemd/system-sleep/grub2.sleep for suspend 
 3984.122993] erlangen systemd[1]: systemd-suspend.service: Succeeded. 
 3984.123201] erlangen systemd[1]: Finished Suspend. 
erlangen:~ # 

I hope my next self-built rig will be some AMD Zen 3 system for which I plan to commit to UEFI and btrfs. Until then, I’m just glad I can avoid the complexity. For my current home rig (that only runs when I am at home — which, thanks to the pandemic, increased) though, all bets are off. :slight_smile:
Ryzen 5000 will be nice in Q4 2021. The 6700K appeared in Q2 2015. I assembled erlangen in August 2016: https://www.cpubenchmark.net/compare/AMD-Ryzen-5-5600X-vs-Intel-i7-6700K/3859vs2565 More reading means fewer issues when assembling a new system.

Fetchmail by default sends mail when failing. Others need to be configured: https://bugzilla.opensuse.org/show_bug.cgi?id=1130306

I remember times when root was inundated with daily messages from the locally running firewall (intrusion/[D]DOS attempts, dropped packets), httpd (404 access, resource unavailable etc.), failed login attempts by so-and-so, warnings about almost-full filesystems and other things I can’t remember anymore.
I remember the times when ClearCase MultiSite syncreplica would use mail for notification. Early versions even offered mail transport of the sync packages as an alternative to port 371.

Is this still a thing? Even if so, does it really necessitate running postfix all the time? (The mails still get queued in /var/spool/mail anyways — which, in my case, has been empty for years.)
Mail notification is great. Postfix doesn’t run all the time; it just sits there and waits. After adhering to the defaults for many years I eventually reconfigured: https://karlmistelberger.wordpress.com/2017/12/26/e-mail-auf-dem-opensuse-desktop/ I used ‘yast2 users’ to forward all system mail to user karl. Thus postfix drops system mail into /home/karl/.local/share/local-mail/inbox/.
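The forwarding that ‘yast2 users’ sets up amounts to an alias for root. A minimal sketch of the classic way to do it by hand (the user name karl is from the post above; run newaliases afterwards to rebuild the alias database):

```
# /etc/aliases
root:   karl
```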

Oh I remember rkhunter, I used to have it installed. Yes, that one would send local mail to root regularly.
The http://www.chkrootkit.org/ tools are pretty good at scanning for certain types of malware too, or so I’ve read.

Yes, the impact of Postfix on a system under load may have decreased a bit over the years.
I used to administer a few high-traffic corporate mail servers with Postfix (later sendmail too), MailScanner and SpamAssassin a decade ago. Much polling for files in spool directories was noticeable back then. This may have changed with the fanotify/inotify monitoring mechanisms, where the kernel can just tell the MTA whenever new data arrives to be spooled. But even then, some polling is required. So yes, Postfix is inactive most of the time if mail traffic is low. But it still has a longer startup time than Exim, and after starting up, both Postfix and Exim allocate system resources.
Last I checked though, both seem quite well-behaved. Also, as long as there’s next to no mail, if memory pressure gets high enough, Postfix will be one of those non-critical processes to be swapped out anyway. So, apart from the slight increase in boot time, no worries.

Agreed. Back in 2014, I invested over a month studying cost-effective hardware before I built my current rig. It has turned out almost perfect, and my knowledge about personal computing and market prices was refreshed, so the time was worth it.
Still, I managed to buy a mainboard that delivered USB 3 speeds only with custom Windows-only drivers; with Linux, I got only USB 2 throughput. (ASUS, the manufacturer, only mentioned that in the fine print!) Oh well, we live and learn. I plan to be even more thorough next time around with Ryzen, NVMe/M.2, DDR4 RAM timings, UEFI boards with AMD sockets and so on.

Wow, that’s quite a tall graph, compared to mine from this morning.
Both graphs are nice and narrow on the time axis though, which is great!

https://susepaste.org/images/36132508.png

“rkhunter” is in the main openSUSE repositories – “chkrootkit” is only in the Security project – <https://build.opensuse.org/project/show/security>.

  • “rkhunter” was last updated 10 months ago.
  • “chkrootkit” was last updated over 1 year ago.

“You pays your money and you takes your choice.” – Aldous Huxley, Brave New World – or, fairground buskers …

Postfix is the tiniest of all users on the machine:

erlangen:~ # top -bn1 -u postfix
top - 13:53:40 up 16:57,  3 users,  load average: 0.52, 0.72, 0.49 
Tasks: 263 total,   1 running, 262 sleeping,   0 stopped,   0 zombie 
%Cpu(s):  0.8 us,  0.8 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st 
MiB Mem : 31793.69+total, 22007.57+free, 2812.000 used, 6974.121 buff/cache 
MiB Swap: 16383.99+total, 16383.99+free,    0.000 used. 26613.87+avail Mem  

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND 
  861 postfix   20   0   44036   9996   8860 S 0.000 0.031   0:00.07 qmgr 
18553 postfix   20   0   43624   8900   7968 S 0.000 0.027   0:00.01 pickup 
erlangen:~ #