Disk cleanup and defragmentation

Hello!

How do I clean up a disk in Linux - remove junk, temp files, and so on?
In Windows this is simple - there are lots of tools to do that.
Are there similar tools for Linux to clean up the disk and get more free space?

I read on this forum how to set up SUSE to clean the tmp folder on boot.
I followed the instructions and…


linux-44fg:~ # /etc/sysconfig/cron
-bash: /etc/sysconfig/cron: Permission denied
linux-44fg:~ # 

Why?

Second question - defragmentation.
Is there a need to defragment disks under Linux?
And if there is, how do I do it?

No need to defrag in Linux

Did you follow this
Clear Temp Files at Boot

On Tue, 13 Sep 2011 16:06:03 +0000, glock356 wrote:

> Hello!
>
> How to clean up disk in Linux - clean junk, temp. files, etc’ In windows
> this is a simple - there be a lot of tools to do that. Is there a
> similar tools for Linux to cleanup disk an get a more free space?

There are a few things you can do -

  1. Use tmpwatch to automatically clean up the /tmp folder.
  2. Use a tool like filelight to find directories that have lots of files
    in them - useful for cleaning up data directories (but I wouldn’t use it
    to clean up system directories)
  3. Use a tool like rpmorphan to find orphaned RPMs that are no longer
    needed.
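To make item 1 concrete: tmpwatch removes files that haven't been touched for a given number of hours, and you can approximate what it does with a plain find command. A hedged sketch, demonstrated on a scratch directory rather than the real /tmp:

```shell
# Roughly what tmpwatch does: delete regular files untouched for more
# than 10 days. Shown on a scratch directory, not the real /tmp.
mkdir -p scratch
touch -d '30 days ago' scratch/old.log   # simulate a stale temp file
touch scratch/fresh.log                  # a recently used file
find scratch -type f -mtime +10 -delete  # removes only the stale one
ls scratch
```

tmpwatch itself is typically run as something like `tmpwatch 240 /tmp` (the time is in hours), but check the man page on your distribution, as the exact invocation varies.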

> I read on this forum how to set up suse to clean tmp folder on boot. I
> follow the instructions and…
>
>
> Code:
> --------------------
>
> linux-44fg:~ # /etc/sysconfig/cron
> -bash: /etc/sysconfig/cron: Permission denied linux-44fg:~ #
>
> --------------------
>
>
> Why?

Because you’re trying to execute “cron”, but /etc/sysconfig/cron is a
configuration file, not an executable file.
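Since /etc/sysconfig/cron is a file to edit rather than run, the fix is to open it in an editor (as root) and set the relevant variable. A minimal sketch, done on a sample file here; the variable names are taken from older openSUSE releases, so verify them against your own file first:

```shell
# Build a sample sysconfig-style file; on a real system you would edit
# /etc/sysconfig/cron itself (as root, with vi or another editor).
printf 'CLEAR_TMP_DIRS_AT_BOOTUP="no"\nMAX_DAYS_IN_TMP="0"\n' > cron.sample
# Turn on clearing of temp directories at boot:
sed -i 's/^CLEAR_TMP_DIRS_AT_BOOTUP=.*/CLEAR_TMP_DIRS_AT_BOOTUP="yes"/' cron.sample
grep CLEAR_TMP_DIRS_AT_BOOTUP cron.sample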

> Secon question - defragmentation.
> Is there a need to defragment disks under Linux? And is it need, how to
> do this?

No, there isn’t a need to do this on Linux.

Jim

Jim Henderson
openSUSE Forums Administrator
Forum Use Terms & Conditions at http://tinyurl.com/openSUSE-T-C

Since you are coming from Windows to the powerful world of Linux, it is worth noting why old Windows habits are no longer necessary under Linux.

  1. File fragmentation: Windows uses space freed on disk to save new info, and when that space isn’t big enough it saves part of the info, moves to the next available area, and so forth until all the new info is saved. This is marginally satisfactory on a single-user system but totally unacceptable on a system geared to multiple users. As fragmentation gets more severe, access to files becomes really slow, so the user must defrag the system (which recombines all the files and compacts them as best it can while the system is still using and changing files such as swap space).
    In Linux, when files are saved, the additional parts of files are saved to free space big enough to accommodate the new portion, which reduces fragmentation. Additionally, Linux looks for fragmented files that are not in use and recombines them into a span big enough to hold the whole file, and moves idle files toward the front of the drive to further speed up access. Linux also uses a separate swap partition, so swapping does not fragment user and system files.

  2. Cleaning temp folders is as others have posted. You can set Linux to clean /tmp at boot, and you can clean up user files if you are either root or the specific user that owns the files. There are quite a few programs and scripts to handle the job, including ones that find and remove duplicate files. As a regular user, you will get a ‘permission denied’ if you try to run any program or script that you do not have permission to use.

  3. Before you ask: you have also left behind the world of wasted memory and a huge, cluttered, badly maintained registry. You will no longer need to clean, fix, or otherwise mess with the registry, because there is none. Under Windows, the registry holds the settings for every program and every user of the system in one big, ever-growing file. As programs are added the registry expands, but as programs or users are removed, the registry is seldom cleaned up or compacted properly. It also wastes a lot of valuable memory holding values for programs that a user may run once in a blue moon. Under Linux, the settings for programs are stored either alongside the application or in the user’s space on disk. When a user starts a program, the associated values are brought into memory and removed from memory when the application is closed. Changes to the settings occur on disk as needed, and when an application is removed, so are its settings files. This keeps Linux humming along with the maximum amount of memory and disk space available for all sessions, users, and applications.

On 09/13/2011 08:36 PM, techwiz03 wrote:
>
> Since you are coming from Windows

that is very nicely done!

i’m gonna save a pointer to it <http://tinyurl.com/6h3rgzk> so i can
reference it to those that follow with similar questions. (you invented
that mouse trap from a knowledge base i don’t have–thanks!!)


DD
openSUSE®, the “German Automobiles” of operating systems

Since you are coming from Windows…

…you might be stuck in a somewhat Windows-ish way of doing some things. In that case, I could maybe recommend ‘bleachbit’; I didn’t like it because it looked rather like a Windows program converted to Linux, but YMMV.

I also didn’t get the impression that it would save even as much space as installing the program itself used up, but if you get satisfaction from doing this kind of thing, you won’t even think about that.

I still feel that manually going in and having a look around for the big space consumers is more constructive all round, but that probably doesn’t give you the same feeling of satisfaction as pushing a button to keep your system in shape.
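For that “manually going in and having a look around” approach, du plus sort is usually all you need. A small sketch (the directory is just an example):

```shell
# List the biggest items directly under a directory, largest first.
# -x stays on one filesystem; -h prints human-readable sizes.
du -xh --max-depth=1 "$HOME" 2>/dev/null | sort -rh | head -n 10
```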

Is there a need to defragment disks under Linux?

While technically the answer to that is ‘it depends’, practically every Linux user worries about it at first, and then stops worrying when it conspicuously fails to become a problem.

(What it depends upon is what type of disk you are using (hard disk, SSD, pen drive) and which filesystem you have chosen. For Nilfs2, I can’t see how it could ever become a factor, because it gets dealt with ‘automagically’ (this is not enough of an argument, by itself, for adopting a radically different filesystem, but if you already had such an argument, you might count this as an additional benefit). For recent ext filesystems, there will be some fragmentation, but it will be small in almost every real case. For the old Reiser3 FS, there were some complaints about nearly-full filesystems losing performance, which a defrag may have somewhat alleviated (though not letting a Reiser3 get close to full may well have been the better preventative). In reality, the summary is ‘not worth the effort’, except for really exceptional use cases.)

The problem with Reiser3 and even the ext filesystems is that on a nearly full filesystem there is often no free span large enough to compact a large file into a single contiguous block, short of the old Speed Disk method from DOS days: move part of the data, free its space, find and move a smaller file into the space just cleared, and keep going until there is finally enough room to move the whole original file. At some point you just can’t squeeze one more file onto the drive.

Unfragfs has long since disappeared, but it was a handy Linux tool meant to run from a bootable CD. It would graphically show you the fragmentation of any Linux, Windows, or Mac OS 9 filesystem and let you defragment a large file onto another partition or drive, rearrange and defrag the rest of the drive, and then put the moved file back.

As the cost of large-format media came down, it became easier and simpler to just add a drive and mount it, or transfer whole partitions across to make more space available. I guess most people aren’t interested in the tedious file-by-file defrag, since Linux handles it so well until the disk is almost full, and at that point it’s time to just put in a larger disk.

On 2011-09-13 18:06, glock356 wrote:
> I read on this forum how to set up suse to clean tmp folder on boot.
> I follow the instructions and…

I doubt you followed the instructions :-)

> Code:
> --------------------
>
> linux-44fg:~ # /etc/sysconfig/cron
> -bash: /etc/sysconfig/cron: Permission denied
> linux-44fg:~ #
>
> --------------------
>
>
> Why?

Because you cannot execute a non-executable file.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)

File fragmentation is filesystem issue, not operating system issue.

EXT4 filesystem has usually very low fragmentation, but it is not immune to it.

BTRFS, however, uses COW (copy-on-write) and can suffer from fragmentation significantly.
But there are an auto-defrag mount option and an online defrag tool available for BTRFS already, so it shouldn’t be an issue anymore.
You can also disable COW to reduce fragmentation.
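For completeness, a hedged sketch of the btrfs-side commands referred to above; the defrag and chattr calls need root and an actual btrfs mount, so here only the filesystem-type check really runs:

```shell
# Check what filesystem a path lives on before reaching for
# fs-specific tools; stat -f -c %T prints the filesystem type name.
fstype=$(stat -f -c %T /)
echo "root filesystem type: $fstype"
# On a btrfs mount (as root), the tools mentioned above are:
#   btrfs filesystem defragment -r /path   # online defrag, recursive
#   chattr +C /path/dir                    # no COW for new files in dir
```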

So, fragmentation is not unheard of in the Linux world, but with EXT4 you don’t need to worry much.

As for cleaning your disk, you should only need to care about your home directory. System directories are usually not writable during normal usage, so almost nothing can make a mess there.

But a home directory can fill up with junk after a while. For example, OpenShot saves lots of temporary data and never deletes it.

On 2011-09-16 11:26, sobrus wrote:
>
> File fragmentation is filesystem issue, not operating system issue.

Mmmm…

nimrodel:~ # fsck /dev/sdb1
fsck 1.40.8 (13-Mar-2008)
e2fsck 1.40.8 (13-Mar-2008)
Moria_250 has been mounted 1574 times without being checked, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 3A: Optimizing directories
Pass 4: Checking reference counts
Pass 5: Checking group summary information

Moria_250: ***** FILE SYSTEM WAS MODIFIED *****
Moria_250: 5872/30408704 files (62.6% non-contiguous), 34630399/60791960 blocks
nimrodel:~ #


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)

On 09/16/2011 09:03 AM, Carlos E. R. wrote:
> On 2011-09-16 11:26, sobrus wrote:
>>
>> File fragmentation is filesystem issue, not operating system issue.
>
> Mmmm…
>
>
> nimrodel:~ # fsck /dev/sdb1
> fsck 1.40.8 (13-Mar-2008)
> e2fsck 1.40.8 (13-Mar-2008)
> Moria_250 has been mounted 1574 times without being checked, check forced.
> Pass 1: Checking inodes, blocks, and sizes
> Pass 2: Checking directory structure
> Pass 3: Checking directory connectivity
> Pass 3A: Optimizing directories
> Pass 4: Checking reference counts
> Pass 5: Checking group summary information
>
> Moria_250: ***** FILE SYSTEM WAS MODIFIED *****
> Moria_250: 5872/30408704 files (62.6% non-contiguous), 34630399/60791960 blocks

Yes, but your average file size is 3 MB. That will lead to fragmentation. For
example, I have an xfs file system that is used as the data storage partition
for a MythTV system. As xfs has defrag tools, I do a defrag every night. The
latest results are:


finger@desktop:~> cat /var/log/xfs_frag
actual 291052, ideal 290618, fragmentation factor 0.15%
from      to extents  blocks    pct
1       1    7039    7039   0.01
2       3    5875   14010   0.01
4       7    5009   25904   0.02
8      15    5314   67326   0.05
16      31    1383   32912   0.02
32      63    1081   47625   0.04
64     127     699   62998   0.05
128     255     972  181789   0.14
256     511    1430  518285   0.39
512    1023     941  692666   0.52
1024    2047     583  849007   0.64
2048    4095     296  834869   0.63
4096    8191     156  886268   0.67
8192   16383     122 1389374   1.04
16384   32767      68 1608161   1.21
32768   65535      62 2928483   2.20
65536  131071      54 4953185   3.72
131072  262143      35 6796385   5.11
262144  524287      25 9070766   6.81
524288 1048575      12 9698343   7.29
2097152 4194303       4 12342777  9.27
4194304 8388607       1 5050353   3.79
8388608 16777215      1 12925951  9.71
16777216 33554431     2 62123396  46.67

Even with aggressive defragmentation and a file system that is only 46% full,
the largest files cannot be defragmented.
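As a sanity check on the summary line above: xfs reports the fragmentation factor as the share of extents beyond the ideal count, which you can reproduce from the quoted numbers:

```shell
# (actual - ideal) extra extents as a percentage of actual extents,
# using the figures from the xfs_frag log above.
awk 'BEGIN { actual = 291052; ideal = 290618
             printf "%.2f%%\n", (actual - ideal) / actual * 100 }'
# prints 0.15%, matching the reported fragmentation factor
```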

On 2011-09-16 18:30, Larry Finger wrote:
> On 09/16/2011 09:03 AM, Carlos E. R. wrote:

>> Moria_250: ***** FILE SYSTEM WAS MODIFIED *****
>> Moria_250: 5872/30408704 files (62.6% non-contiguous), 34630399/60791960
>> blocks
>
> Yes, but your average file size is 3 MB. That will lead to fragmentation.

Averages are misleading. It has thousands of groups of three files: 90 MB,
3K and 70 bytes.

My point is that ext2/3 can have a lot of fragmentation, too.

> For example, I have an xfs file system that is used as the data storage
> partition for a MythTV system. As xfs has defrag tools, I do a defrag every
> night. The latest results are:

My disk above is part of a digital TV recording system, too.

> Even with aggressive defragmentation and a file system that is only 46%
> full, the largest files cannot be defragmented.

MS-DOS defragmenters coped with any size :-)


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)

EXT4 uses large extents to allocate files, so it should be less fragmented than earlier versions.

And my point was that Windows with EXT4 would not (usually) need defrag tools either.

I once read it put roughly like this: like any filesystem, Linux filesystems fragment, they just suffer less. As Carlos shows. I don’t know much about the details.

On 2011-09-16 23:56, sobrus wrote:
>
> EXT4 uses large extents to allocate files, it should be less fragmented
> than earlier versions.

Perhaps.

> And my point was that Windows with EXT4 would not need (usually) defrag
> tools too.

Windows with a different allocation algorithm would also fragment less. The
important thing is how you implement the format, not the format itself.


Cheers / Saludos,

Carlos E. R.
(from 11.4 x86_64 “Celadon” at Telcontar)