Following on from this thread, but I made a new one since the issue is indeed different.
This week I noticed my system took a bit longer than before to start (plymouth took longer to finish).
The last time I saw this behavior was when I was still experimenting on Leap 42.3, switching from NetworkManager to the Wicked network service. When hitting the Esc key to switch from plymouth to the console, I noticed (IIRC) Wicked needed to run some "start jobs", which took a few additional seconds.
But that is no longer the case; I stick with NetworkManager full time. So again I pressed Esc during plymouth and found the job that's now taking additional seconds to finish:
A start job is running for Flush Journal to Persistent Storage...
I have no idea if Leap 42.3 has done this ever since the clean installation; I never checked, out of fear of pressing the Esc key… But if it has, it's been taking longer to finish since this week, out of the blue…
I hadn't even run updates until today, and the updates didn't fix it either.
Must coredumpctl be run as root, or can the list be viewed as a normal user?
Do you need me to post something from coredumpctl? But don't core dumps (memory dumps) store sensitive data?
My /var/log/journal/ directory is 161 MB. Is this too much, or fairly small?
So, the /var/log/journal/ directory is 161 MB.
"coredumpctl list" gives 9 entries, the oldest dating from circa 2 weeks ago.
No, I have never touched journald.conf. This thread is the first time I have ever discussed this kind of thing.
The log files themselves are in a subdirectory named after the machine ID.
In general, systemd starts journald as early as possible; journald begins with logs in RAM. Once the file systems are mounted, journald copies the content from RAM to persistent storage. The only way to avoid this is to disable persistent storage, but then you lose the ability to analyze historical logs (they are lost after reboot).
It is impossible to give a blanket answer to "why does it take so long". For a start, we do not even know how long it takes or why you think it is not normal.
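For reference, the "disable persistent storage" option mentioned above is a single setting in /etc/systemd/journald.conf; a minimal sketch (behavior per man journald.conf, please verify on your system before changing anything):

```ini
[Journal]
#Storage=auto        # the default: persistent when /var/log/journal/ exists
Storage=volatile     # keep logs only in /run/log/journal; lost at every reboot
```

After editing, `systemctl restart systemd-journald` (as root) should apply it.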
Total plymouth screen (from when it starts to when it ends) time: 1 minute exactly.
Time from plymouth finishing to the login screen being ready: 21 seconds.
Total: 1 minute 21 seconds to get to login screen since moment plymouth starts.
Now I'm a bit more sure the system does take longer than circa 2 weeks ago; I knew I felt something… And all of this out of the blue, without any tweaking of configurations or any system updates at all.
journalctl -b | grep "Time spent on flushing"
oct 16 13:06:35 linux-cmn7 systemd-journald[434]: Time spent on flushing to /var is 31.907949s for 917 entries.
31 seconds to flush, W T H !?
In the journald.conf file all fields are commented out, meaning it's using all the defaults. So I read the respective manual page, looked up the terms SystemMaxFileSize and SystemMaxUse, and concluded the max file size must be 512 MB, given a root file system of 40 GB.
So the files I have are within range in this case, but aren't they still a bit large?
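That 512 MB figure matches the man journald.conf defaults: SystemMaxUse defaults to 10% of the file system (capped at 4 GB), and SystemMaxFileSize to one eighth of that. If you ever did want smaller journals, a hedged sketch for /etc/systemd/journald.conf (the values here are arbitrary examples, not recommendations):

```ini
[Journal]
SystemMaxUse=100M      # total space journald may use under /var/log/journal
SystemMaxFileSize=10M  # rotate each journal file once it reaches 10 MB
```

A `systemctl restart systemd-journald` afterwards should make journald pick the new limits up.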
By the way, I do have the strange habit of powering the system off and on several times a day…
journalctl --vacuum-time=1d
It only deleted a couple of journal files and didn't free that much space (just circa 50 MB). I used the --vacuum-size=50M variant, and it does nothing!! "0B were freed for journal"
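For what it's worth, the vacuum options only ever delete archived journal files; the active system.journal and user-*.journal are never touched, which would explain the "0B were freed" result. Newer journalctl versions can archive the active files first with --rotate (a sketch; I'm not certain the systemd version on 42.3 ships this option):

```shell
journalctl --rotate           # archive the currently active journal files (as root)
journalctl --vacuum-size=50M  # archived files can now actually be purged
```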
journalctl -b | grep "Time spent on flushing"
oct 16 13:16:21 linux-cmn7 systemd-journald[434]: Time spent on flushing to /var is 34.216276s for 933 entries.
user1@linux-cmn7:~> ls -lh /var/log/journal/<long-UUID-number>
total 121M
-rw-r----- 1 root systemd-journal 104M oct 16 13:33 system.journal
-rw-r-----+ 1 root systemd-journal 16M oct 16 13:27 user-xxxx.journal
So all along it's been logging to just the same pair of files, and I cannot do anything to stop it!? I clearly just proved that journal flushing is consuming boot time, and it increases with each boot. Can anyone help, please? I don't want this!
Hi
So, have you reviewed the current logs (journalctl) to see whether there is anything untoward happening on your system? Have you run systemd-analyze blame and critical-chain to confirm nothing is masking your issue, since things happen in parallel?
systemd-analyze critical-chain
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.
graphical.target @57.164s
└─display-manager.service @54.722s +2.441s
└─time-sync.target @54.612s
└─ntpd.service @53.506s +1.105s
└─network.target @53.469s
└─wpa_supplicant.service @56.399s +114ms
└─basic.target @48.746s
└─sockets.target @48.746s
└─virtlogd.socket @48.745s
└─sysinit.target @48.740s
└─systemd-update-utmp.service @48.713s +26ms
└─systemd-tmpfiles-setup.service @48.653s +43ms
└─systemd-journal-flush.service @6.501s +42.110s
└─var-log.mount @5.783s +676ms
└─dev-disk-by\x2duuid-a3bdf827\x2dbb1d\x2d4b02\x2d9a71
Question: are there consequences to running the journalctl and systemd-analyze commands as root? Or should they be run as root anyway? Please don't ignore this question.
Hi
It's your system, so that's something you need to decide.
In my experience, running admin tasks as the root user is a given; I don't use sudo for any maintenance tasks, always su - …
Running systemd-analyze works fine as user or root; journalctl must be run as root to get all the good bits (and all users), but will work as a normal user for that user's own processes…
So then no harm to my system by having run systemd-analyze as root user?
I really thought I had made my system automatically exploitable in some way by doing this!
I WANT TO BE AS PERFECT AND SAFE AS POSSIBLE!!
Isn't that something one deserves, as long as one is a good guy!? Yes, I was banned and learned my lesson. Wasn't that enough!!??
And back to the point: now that I have posted the outputs of the requested commands, can you still help with the big journal-flush problem?
I'm tempted to just rm the journal files!!
I would be tempted as well to remove the logs, as long as there is nothing untoward in them, then restart systemd-journald and reboot:
systemctl restart systemd-journald
systemctl status systemd-journald
Just another note: see the + at the end of your user journal file's permissions; that means extra attributes (an ACL) are set, in this case for your user and group, so you can access the file as your user…
Run getfacl against the file to see the extra information.
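Something along these lines (path copied from your ls output above):

```shell
getfacl /var/log/journal/<long-UUID-number>/user-xxxx.journal
# the extra "user:...:r--" line is the ACL granting your user read access
```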
So, malcolmlewis or anyone else: after analyzing all the info I have posted (the systemd-analyze and journalctl outputs, and the size of my current journal files), would removing the journal files indeed be among the best advice? I don't want to make any more mistakes! The laptop I use with Leap is borrowed, and I'm no longer allowed to clean-install Leap from scratch again!
Also, I was already given one answer, but I would like just one more confirmation.
Is there harm to my system from running unnecessary commands as root, such as systemd-analyze?
Though journalctl and lsattr did require root permissions…
OK, so you have btrfs and /var/log is on a separate subvolume. At this point I'm afraid it becomes a btrfs-specific question and requires rather low-level knowledge to answer. You may want to post to the btrfs list to get some guidance. It's possible that your filesystem has become heavily fragmented (do not forget that all subvolumes share the same physical space), so each new write becomes expensive. But checking for that is not entirely trivial.
Though journalctl and lsattr did require root permissions…
No, they do not. They just require access to the corresponding files.
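In particular, journald grants read access to members of the systemd-journal group (visible in the group ownership of your files above), so full journalctl output does not strictly require root; a sketch (run as root, then log out and back in):

```shell
usermod -aG systemd-journal user1   # "user1" as in the prompt shown earlier
```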
Heavily fragmented!? How's that? I thought Linux file systems in general had automatic background defragmentation tasks, so one didn't need to worry, unlike on Windows! Or was that just ext4?
If my system is somehow fragmented, how the heck is one supposed to defragment it on Linux?
Also, you forgot to answer this:
“Also, I was already given one answer, but would like just another confirmation.
Is there harm to my system for running unnecessary commands as root, precisely such as systemd-analyze?”
I immediately did an ls on the folder itself and noticed the files had been created again right away, just much lighter (8 MB each).
Anyway, I rebooted yet again, and when hitting the Esc key during plymouth to see all the boot logs, the journal job no longer took long! In fact, it succeeded almost instantly, effectively reducing total boot time to just what I recall from before.
But this also means I'll have to do this "maintenance" job on the journal files every once in a while, if I'm afraid of tweaking journald.conf…
The Btrfs article on the Debian wiki talks precisely about this issue: btrfs gets slower and sort of "fragmented" due to its use of copy-on-write (COW). They mention the /var/log/journal/ folder specifically. They advise manually defragmenting using the corresponding btrfs command, but I read somewhere else that defragmenting COW files results in duplicates and higher disk usage…
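For completeness, the two btrfs-side approaches usually mentioned, sketched here with the caveats you note (defragmenting can duplicate data shared with snapshots, and the NOCOW attribute only affects files created after it is set):

```shell
# One-off defragmentation of the journal directory (as root):
btrfs filesystem defragment -r /var/log/journal

# Or set NOCOW on the directory so future journal files avoid
# copy-on-write fragmentation; existing files keep their layout:
chattr +C /var/log/journal
```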
@arvidjaar:
Even though I was already given one kind of answer: is there harm to my system from running unnecessary commands as root, such as systemd-analyze?
Could you help with this one at least?