Has anyone tried the new Task Scheduler which integrates kcron?
I have created many tasks which kick-start different shell scripts at different times. Most of them are ‘rsync’ scripts for backup or other archiving processes.
But one of them is a cleandrive script, which reads as follows:
sudo dd if=/dev/zero of=/0bits bs=20M
sudo rm /0bits
I have tried with ‘exit’ instead of ‘done’, and with nothing at all as well.
Running this manually generates no problem.
But using the Task Scheduler, either by clicking the RUN NOW button or by letting the schedule do it, the script jams on the ‘rm’ command for about 2 minutes until it ends, and I have a hard time killing it, as the ‘rm’ process will not die; I have to find a parent process or something. While it jams, it writes constantly to the drive (I don’t know what, because I have 0 bytes of free space, so it must be some kind of loop), Plasma crashes, and my screen goes all black. I have 0 bytes free on the root partition for those 2 minutes (tmp and home are on other partitions).
As you notice, I do not have the usual
#!/bin/bash
at the beginning of my script.
Is that the problem? Does cron/Task Scheduler run sh or bash by default, and should it be the other one for my script? I will try tonight, but I am curious to understand how it could jam for 2 minutes on the ‘rm’ command and not on another one. Not sure #!/bin/bash or sh will solve the problem either.
You realise that
dd if=/dev/zero of=/0bits bs=20M
will keep expanding the /0bits file with zero bytes until the whole disk fills up? You seem surprised that there is no disk space left eventually. Did you expect that it would stop after 20MB? That’s not what the bs= argument means. If you wanted it to stop after a certain number of blocks you need the count= argument.
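To illustrate the difference: with count=, dd stops after bs × count bytes instead of running until the disk is full. A small sketch using an invented demo file under /tmp (not the poster’s /0bits):

```shell
# With count=, dd stops after bs*count bytes instead of filling the disk.
# Here: 5 blocks of 1 MiB = 5 MiB, written to a throwaway file.
dd if=/dev/zero of=/tmp/0bits.demo bs=1M count=5
ls -l /tmp/0bits.demo   # shows a file of exactly 5242880 bytes
rm /tmp/0bits.demo
```

Without count=, dd only stops when the write itself fails, which for /dev/zero input means “No space left on device”.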
It’s not a good idea to fill up the disk, even if you want to zero the free space. It causes grief to various processes, e.g. syslog.
Yes, I am well aware; it’s the reason why I posted, because it’s not good when the disk is filled up (for too long, that is).
That is the purpose of the script: to write zeros until the disk is full, and then the rm command deletes the 0bits file (which was created in chunks of 20 MB until no space was left on the disk).
This reduces the image size (and the time to complete) when I do a dd image backup using the g4l bootable CD, or when backing up manually from a second disk (the root partition must not be mounted when imaging byte-for-byte with dd).
When run manually, once dd fills up the disk it exits with an error (of course), and then rm deletes the file, so the disk is filled up for say 2 seconds, 3 at the maximum, which minimizes the risk. But if the rm command does not complete, then there can be problems, as I am experiencing now.
The script has proven reliable every week for the past 2 years using Kcron in 10.3. Now Kcron is merged into the Task Scheduler and I experience that “rm looping” thing.
I have never found a better way to reduce image size and backup time other than writing zeros on the unused space of the disk.
In any case:
*done* belongs to a preceding *do*, thus on its own it is nonsense.
A plain *exit* at the end of a script is not needed; when execution reaches there it stops in any case.
A script is only a script when it has a proper shebang, else it is just a series of statements (which can be a useful thing of its own, e.g. when sourcing it in a script).
So I am definitely better off trying with the
#!/bin/sh
line on line 1.
Kcron never had a problem running scripts without this line, so I never bothered adding it. Maybe its backend processor was different from the Task Scheduler’s, but I agree I am not following the rules without that line.
I did not say it would not run and do as you expected (maybe the defaults of the defaults are just working in your favour). I was only saying that you must not call a bunch of statements a script. Being imprecise will not help to make a programmer out of somebody.
Now at least you tell the system that it is a POSIX shell script.
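Putting the advice together: the script with a shebang on line 1 contains only the poster’s two commands plus that first line. A sketch that writes the file to an invented /tmp path and syntax-checks it (do not actually run it unless you mean to fill the disk):

```shell
# Create the cleandrive script with a shebang (path is illustrative);
# only the #!/bin/sh line is new, the dd/rm lines are from the thread.
cat > /tmp/cleandrive.sh <<'EOF'
#!/bin/sh
dd if=/dev/zero of=/0bits bs=20M
rm /0bits
EOF
chmod +x /tmp/cleandrive.sh
sh -n /tmp/cleandrive.sh   # syntax check only; running it fills / with zeros
```

With the shebang present, cron (or any other launcher) no longer has to guess which interpreter the file is meant for.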
I also do not understand why you test with
sh cleandrive.sh
When you want to execute something (be it a script or a binary), you make it executable for those who may execute it (owner, group, others) with:
chmod ugo+x cleandrive.sh
and then call
cleandrive.sh
when the path to it is in your PATH, or else
./cleandrive.sh
Doesn’t this look like a real program call?
But all this, basic as it may be, is not sure to remedy your problem.
But normally I do not venture deeper into a problem when even the most basic things are not correct.
The file is executable; it is the script file I am running, launched by Kcron in my older 10.3 version.
‘cleandrive.sh’ only has a bunch of statements in it, yes. Kcron never complained about it, but the Task Scheduler might work differently, and since my file was not written like a good script should have been, maybe that explains why the Task Scheduler does not run/end it properly.
I test with
sh cleandrive.sh
maybe out of habit; that is how I test all my scripts. Sometimes I use
sudo
in front too. Both usually work. Is it bad to use ‘sh’ instead of ‘./’?
Maybe not ‘bad’, but it shows a lack of understanding. When your script has the x-bit set, why type those three characters?
And you call sh, which is the POSIX shell. Now, when your shebang says *#!/bin/bash* (thus not in the case above, but most people on Linux write *bash* scripts, not sh), running the file with sh means the shebang line is treated as a mere comment: the script is interpreted by the POSIX shell (sh), not by the Bourne Again shell (bash) you asked for. All unneeded confusion.
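A small sketch of the difference (file name and contents invented for the example): `sh file` reads the file itself and needs no x-bit, while direct execution makes the kernel honor the shebang.

```shell
# Without the x-bit, direct execution fails, but `sh file` still works,
# because sh reads the file itself and treats the shebang as a comment.
printf '#!/bin/bash\necho hi\n' > /tmp/noexec.sh
sh /tmp/noexec.sh     # works even though the x-bit is not set; prints: hi
chmod +x /tmp/noexec.sh
/tmp/noexec.sh        # now the kernel reads the shebang and starts bash
rm /tmp/noexec.sh
```

So which interpreter actually runs the script depends on how you invoke it, not only on what the shebang says.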
Understood. I am not an expert in scripting.
I will change the way I work with scripts and make it more fluent and optimal. Should then work with Task Scheduler.
It should then work always!
I do not know your Task Scheduler, but I think it is a GUI wrapper around your crontab.
Yes, it is either KDE’s Task Scheduler or openSUSE’s Task Scheduler, accessible from the Personal Settings panel > Advanced tab.
It can manage System Cron jobs and Personal Cron jobs.
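For reference, the crontab behind such a GUI is just plain text lines; a hedged illustration (the path and schedule are invented for the example, not taken from this thread):

```crontab
# min hour day-of-month month day-of-week  command
0 3 * * 0  /home/user/bin/cleandrive.sh
```

Each line is one job; the five fields give the schedule and the rest is the command cron runs.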
Nice, but I am afraid that I stick to
crontab -e
Fast and efficient rotfl!
I hope that you now have working what you wanted.