I took this to IRC and learned a thing or two. This is the draft of a blog-post I scheduled for later this year:
OpenSuSE and logging: no more syslogd; journald is the default, and you can use rsyslog or syslog-ng as syslogd replacements
Posted by jpluimers on 2016/11/15
In the 1990s and early 2000s I did a lot of Unix-like (Minix, SunOS, HP-UX, Xenix) and later Linux (mostly RedHat and SuSE) work. The internet and Linux weren’t as big as they are now, and old stuff was still in use, including syslogd.
So recently, wanting to do more on the Linux side of things using OpenSuSE (15+ years ago, I spent most of my time with SuSE Linux), I assumed logging was still done using syslogd, like Mac OS X does.
Boy, was I wrong. Like the internet and lots of other things, logging on OpenSuSE has fragmented into at least these three options, two of which are syslog implementations (syslogd itself is deprecated and – according to the IRC #SUSE channel – unmaintained):
journald (installed by default on my Tumbleweed text-only systems)
rsyslog (which is supposed to be the default on modern OpenSuSE installs, but somehow isn’t on my Tumbleweed, though it is on 13.1 and 13.2)
syslog-ng (an alternative syslogd replacement)
Most distros today are moving to systemd, so the journal will likely become the local syslog standard.
You can export the journal to legacy syslog files using rsyslog or syslog-ng; I believe syslog-ng is a later creation with performance and feature enhancements. Either should be sufficient if you intend to aggregate syslog data from multiple machines (but research the aggregation app for its own peculiarities; many have their own plugins supporting various things).
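As a rough illustration of that export path, here is a sketch of an rsyslog configuration that pulls messages out of the journal and writes/forwards them as classic syslog. The module and directives are standard rsyslog, but the file paths, hostname, and port are placeholders you’d adapt:

```
# /etc/rsyslog.conf fragment (sketch; loghost.example.com is a placeholder)
module(load="imjournal" StateFile="imjournal.state")  # read messages from the systemd journal
*.*  /var/log/messages                                # write everything to a classic flat syslog file
*.*  @@loghost.example.com:514                        # forward over TCP to an aggregator (@@ = TCP, @ = UDP)
```

syslog-ng has an equivalent `systemd-journal()` source if you prefer it over rsyslog.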
The journal has a number of advantages over the legacy syslog files:
Aggregates numerous logging sources into a single database, so you don’t have to locate the correct log files
Reports (e.g. status, errors) can draw on multiple sources, giving you better cursory analysis of events
Stored in a database, so it should consume far less disk space, with search benefits: there are a number of database-style queries available instead of the old text-parsing methods
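To make the “database-style queries” point concrete, here are a few journalctl invocations that replace grepping through flat files (the unit name is just an example):

```shell
# Query the journal instead of grepping /var/log/messages
journalctl -u sshd.service     # all entries for one service unit
journalctl -p err              # only priority "err" and worse
journalctl _PID=1 --no-pager   # match on a structured field (here: messages from PID 1)
journalctl -f                  # follow new entries, like "tail -f" on a syslog file
```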
The disadvantage(s)
Many legacy analytical tools were built around the syslog file format.
I don’t know of any “journal” format that syslog aggregators can ingest directly. You have to either export to the syslog format or stream the data.
When you ask “where journald is located” you’d need to be specific about whether you’re talking about the app or the data. Strictly speaking, a daemon is an app, not the data. On a systemd system, the Unit files are configuration files, so you’d likely find answers to everything you asked relating to journald by inspecting its Unit file, for instance with:

systemctl cat systemd-journald.service
And you can see the runtime aspects of this service by querying its status:
systemctl status systemd-journald.service
In general nowadays, unless you’re using a legacy or aggregation tool, I don’t see a reason to use the syslog format: stick with journalctl.
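For everyday use, a handful of journalctl invocations cover most of what the old flat files were used for (all standard flags):

```shell
# Everyday journalctl usage
journalctl                  # the whole journal, paged
journalctl -e               # jump straight to the most recent entries
journalctl -k               # kernel messages only, like dmesg
journalctl --disk-usage     # how much space the journal currently occupies
```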
Thanks a lot for the elaborate answer. I do like the journal, but I can’t seem to find out how to query entries that are older than the last reboot. Hence my “persisted” question.
Maybe I should have phrased it differently:
how can I query the journal entries from before the last reboot
If you don’t want to query the entire journal database, you can specify a boot offset to access entries from older system sessions.
You can also specify dates and times (older than or newer than) and between specific dates/times.
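Sketched out, querying across boots and date ranges looks like this (note that entries survive a reboot only if persistent storage is enabled, i.e. /var/log/journal exists or Storage=persistent is set; the dates below are just examples):

```shell
# Query journal entries from before the last reboot
journalctl --list-boots                  # enumerate stored boots with offsets and IDs
journalctl -b -1                         # everything from the previous boot
journalctl -b -1 -u sshd.service         # one service's entries from the previous boot
journalctl --since "2016-11-01" --until "2016-11-15 12:00"   # a date/time range
```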
In general, journalctl search parameters are extremely flexible and fast; the only thing I find missing is a way to specify the inverse of any definable search.
Although there are probably ways to purge journal entries, I haven’t paid much attention to that… So far, I haven’t had a situation where journal data became excessively large. Because text-type data is stored in binary form, not only is the data highly compressed, but removing anything short of massive amounts of entries probably won’t make much difference.
Also, from a number of perspectives… primarily security, provisioning, and maybe performance… nowadays it’s critical to save every scrap of data for possible later analysis. Data removed is data lost, and it might be critical to successfully analyzing a “bigger picture.” Some analyses need data spanning months or even multiple years, and can’t be done without enough data points.
See man journald.conf, in particular SystemMaxUse=, SystemKeepFree=, SystemMaxFileSize=, SystemMaxFiles=, RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize=, RuntimeMaxFiles=, and MaxRetentionSec=.
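Put together, a journald.conf sketch using a few of those settings might look like this (the specific size and retention values are arbitrary examples, not recommendations):

```
# /etc/systemd/journald.conf fragment (sketch; values are examples)
[Journal]
Storage=persistent        # keep entries across reboots in /var/log/journal
SystemMaxUse=500M         # cap total disk usage of the persistent journal
MaxRetentionSec=1month    # drop entries older than this
```

For one-off cleanup rather than standing policy, journalctl also offers --vacuum-size= and --vacuum-time= to trim archived journal files on demand.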