Recently I have noticed, a couple of times, an «xz» process that seems to run forever, using 25% of the CPU; one core at full load, I presume.
The process is owned by root, but I have not checked the ppid. The last time this occurred was yesterday evening; I haven’t noticed any pattern yet concerning what invokes this process or when, though.
This behaviour may be perfectly normal for all I know, something that occurs regularly; I am unsure. It also brings the system fans up to maximum, so it is quite noticeable when it drags on.
My question(s): should I pay closer attention to it, or does the system regularly initiate «xz» (compression/decompression, is it?), and is it normal that these processes take a very long time to finish (yesterday I forced it to stop after maybe half an hour)?
NB
I’ve been speculating whether «xz» is connected with rkhunter; it may be, perhaps, but rkhunter was not running yesterday when I terminated the process.
Not very clearly laid out, but thanks for any answers.
I am in the process of recovering an HD using ddrescue, which will probably take another year or so, and it seems like every failed read/access attempt (I presume) on the disc is logged. It’s a very long log file so I can’t be sure, but the logged ‘failed access attempt’ entries go on and on, referring to the last messages-xxxxxx.xz (thank you, Malcolm).
Actually, I think I may avoid most of the logging by adding a parameter to the ddrescue command, limiting failed read attempts on the same sector/block (I don’t know what the smallest logical/physical? hard-drive quantity is called - archè logos :O).
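Something along these lines, I imagine (the device and file names are placeholders, and the option names have changed between ddrescue versions, so check the manual):

  # -n skips the slow "scraping"/"splitting" of bad areas, and -r0
  # disables extra retry passes on sectors that have already failed
  ddrescue -n -r0 /dev/sdX rescued.img rescue.map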
BTW, is it possible to split a file, or pull a section out of it, to make it easier to handle in cases like this? (This file was 10.2 GiB; the largest compressed one, given the same compression ratio, would have been approximately 30 GiB, and there was an uncompressed part which was even larger.)
Or is there any better method, for that matter?
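From the man pages, something like this looks plausible (untested, and the file name is just an example):

  # cut the big file into 1 GiB pieces (part_aa, part_ab, ...)
  split -b 1G messages-20150528 part_
  # or pull out a single 1 GiB slice starting 5 GiB into the file
  dd if=messages-20150528 of=slice.txt bs=1M skip=5120 count=1024
  # the pieces can be rejoined with: cat part_* > messages-20150528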
On Fri, 29 May 2015 12:46:01 +0000, F Sauce wrote:
> Hi
>
> Recently I have noticed, a couple of times, an «xz» process that
> seems to run forever, using 25% of the CPU; one core at full load, I
> presume.
> The process is owned by root, but I have not checked the ppid. The
> last time this occurred was yesterday evening; I haven’t noticed any
> pattern yet concerning what invokes this process or when, though.
> This behaviour may be perfectly normal for all I know, something that
> occurs regularly; I am unsure. It also brings the system fans up to
> maximum, so it is quite noticeable when it drags on.
>
> My question(s): should I pay closer attention to it, or does the system
> regularly initiate «xz» (compression/decompression, is it?), and is it
> normal that these processes take a very long time to finish (yesterday
> I forced it to stop after maybe half an hour)?
>
>
> NB I pondered whether «xz» might be connected with rkhunter; it may
> be, perhaps, but rkhunter was not running yesterday when I terminated
> the process.
>
>
> Not very clearly laid out, but thanks for any answers.
>
> Olav
pstree can be very helpful to find out what is using the xz instance
that’s causing the issue.
I just wondered if there were good ways of handling (reading, basically) large text files; e.g. dividing the text file into 10 new files, if possible, for instance. Kate crashed when I tried to open it there, so I used less. It was a strange question, I suppose.
On Fri, 29 May 2015 16:16:01 +0000, F Sauce wrote:
> Interesting tool.
>
> Thank you!
You bet. If you use ‘pstree -p’ you’ll get the PIDs as well, and if you
pipe it through less, you can search for the process that seems to be the
issue. That’ll make sure you’re looking at the right one.
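For example (the PID at the end is only an illustration):

  pstree -p | less

Then search with /xz inside less; once you have its PID, you can also
ask ps for the parent directly:

  ps -o ppid= -p 1234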
> Hi
>
> I just wondered if there were good ways of handling (reading,
> basically) large text files; e.g. dividing the text file into 10 new
> files, if possible, for instance. Kate crashed when I tried to open it
> there, so I used less. It was a strange question, I suppose.
>
> How do you read large log files?
With less. If it is compressed, “zless”, though it should happen
automatically. Or you can search for a pattern using grep (or zgrep) and
redirect the result into another file.
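For example (the file names and pattern are only examples; -n needs GNU
split):

  # split into 10 pieces without breaking any line in two
  split -n l/10 messages messages.part.
  # or filter first, and read the much smaller result
  zgrep 'I/O error' messages-20150528.xz > /tmp/disk-errors.txt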
xz is run by “logrotate”, which in turn is run by a systemd timer in
13.2, and by a cron job in 13.1 and older. It takes a lot of CPU because
the messages file is very large, and xz compression is expensive.
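You can check the timer with systemctl, for instance (this is 13.2; the
exact timer name may differ):

  systemctl list-timers
  systemctl status logrotate.timer

The compression itself is configured in /etc/logrotate.conf (or the
snippets in /etc/logrotate.d/), via the “compress” and “compresscmd”
directives.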
And syslog is simply recording the kernel error messages generated when
trying to read a bad sector from the damaged disk that you are trying to
recover.
It would be better to use dd_rhelp instead of dd_rescue; it is a script
that in turn calls dd_rescue, but using a pattern that minimizes time
and failed reads. The idea is that it first copies all the good areas of
the disk, and then progressively zeroes in on the bad areas. It tells
you what percentage it has copied. For instance, in just a few hours it
might copy 99% of the disk, while the remaining 1% might take hours, and
the last 0.1% days. You can simply stop it when you have had enough.
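If I remember correctly, GNU ddrescue can approximate the same pattern
in two passes (just a sketch; device and file names are placeholders):

  # pass 1: copy the easy areas, do not linger on bad sectors
  ddrescue -n /dev/sdX disk.img disk.map
  # pass 2: go back to the bad areas and retry them a few times
  ddrescue -r3 /dev/sdX disk.img disk.map

The map (log) file is what lets the second pass resume where the first
one left off.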
The / partition was so full that if I had extracted both the last warn and messages files in the /var/log dir, I probably would have crashed the system :) Root is on a 120 GB SSD, and before I cleaned up the logs I backed those two files up to a backup hard drive, then extracted them there with Ark (I didn’t know about zless, and didn’t know I would be able to read a file in its compressed state that way).
> The partition / was so full that if I had extracted both the last
> -warn- and -messages- files in the /var/log dir I probably would have
> crashed the system:)
And it did not crash before because the system was rotating the logs.
> Root is on a 120GB SSD and before I cleaned up the
> logs I backed those two files up to a back-up hard-drive, then extracted
> them there with ark (I didn’t know about zless, and didn’t know I would
> be able to read a file in its compressed state that way).
Mind, I do not know whether it is decompressed to /tmp or just held as a
chunk in memory, because zless can page forwards and backwards, and I
can’t imagine how to do that unless the whole file is decompressed.
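A related trick, in case it is useful (the file name is just an
example): xz can stream to stdout, so nothing needs to touch the disk:

  xz -dc messages-20150528.xz | less

less keeps what it has already read in a buffer, so paging backwards
still works within that.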
> Concerning the recovery tool:
> It isn’t the old dd_rescue I’m using, I’m using this one:
> http://www.gnu.org/software/ddrescue/
> Actually, I had prepared to use dd_rhelp (a tip I got from you a while
> ago, Carlos), but on its home-site the author recommended this
> tool instead of his:
> http://www.kalysto.org/utilities/dd_rhelp/index.en.html, so I just went
> along with the recommendation.
Ah, I see, yes, the GNU version. I rather like the old version, although
the new one may be better. I never know, just by looking at the names,
which one is which.
You should also check the size of the systemd journal; it may be huge. I
suppose it auto-purges, though I do not know how.
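If I am not mistaken, journalctl can report its disk usage, and a cap
can be set in journald.conf (the value here is only an example):

  journalctl --disk-usage
  # in /etc/systemd/journald.conf:
  #   SystemMaxUse=500M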