Oversized /home directory

I just installed openSUSE 11.3 on freshly formatted partitions (5 GB swap, 30 GB / and 500 GB /home). Just after the installation, My Computer showed 25.2 GB of /home to be used. When I do


dyn-0a2a1f40:/ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6             30G  4.6G   24G  17% /
devtmpfs              2.0G  280K  2.0G   1% /dev
tmpfs                 2.0G   16K  2.0G   1% /dev/shm
/dev/sda7             493G   11G  457G   3% /home
/dev/sda1             100M   25M   76M  25% /windows/C
/dev/sda2             147G   13G  135G   9% /windows/D
/dev/sda8             250G   15G  236G   6% /windows/E

That seems roughly correct, because since yesterday I’ve been running a program that constantly writes logs, data files and plots, which might have accumulated a few GBs. It is also corroborated by the output of


dyn-0a2a1f40:/ # du -sk /home
10548452        /home
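As a quick cross-check, du’s KiB figure can be converted to GiB by hand; this is plain shell arithmetic on the number above, nothing system-specific:

```shell
# du -sk reports KiB; convert the figure above to GiB (integer division)
kib=10548452
echo "$((kib / 1024 / 1024)) GiB used on /home"
```

That lands close to the 11G that df reports; df also counts filesystem metadata, so a small difference between the two is expected.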

I’m not hard up for space right now, but storage has been dear until the recent past. Also, out of curiosity: the size of the /home partition is shown as 493 GB instead of the 500 GB allocated, while the swap also lists only 4 GB instead of 5 GB. Below is the output of fdisk -l in case anyone needs it:


dyn-0a2a1f40:/ # fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x219b052d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13       19123   153497600    7  HPFS/NTFS
/dev/sda3   *       19124      121601   823154535    f  W95 Ext'd (LBA)
/dev/sda5           19124       19776     5245191   82  Linux swap / Solaris
/dev/sda6           19777       23693    31455232   83  Linux
/dev/sda7           23693       88964   524288000   83  Linux
/dev/sda8           88964      121601   262163456    c  W95 FAT32 (LBA)

Does anyone know what the problem might be? BTW, I’m running Linux 2.6.34-12-desktop x86_64 and KDE 4.4.5 (which I had previously used in 11.2 without any problems), with 4.0 GB of RAM. Thank you in advance.

the size of the /home partition is shown as 493 instead of the 500GB
This is normal: the ext filesystem’s own metadata (inode tables, journal and so on) takes up part of the partition, so df always shows slightly less than the raw partition size.

the swap also lists only 4GB instead of 5GB
Where do you see this? fdisk suggests 5 GB.
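A quick sanity check with shell arithmetic, using the Blocks column from the fdisk output above (each block is 1 KiB):

```shell
# fdisk "Blocks" are 1 KiB units; convert to GiB
echo "swap:  $((5245191   / 1024 / 1024)) GiB"
echo "/home: $((524288000 / 1024 / 1024)) GiB"
```

So swap really is 5 GiB and /home is exactly 500 GiB; any smaller figure shown elsewhere comes from rounding or filesystem overhead.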

I’ve read your post 4 times and I guess my head must be too stuffed with a cold, but I just can’t figure out what your question is. Are you concerned that your /home partition has 11 GB on it?

Thank you both for the prompt responses.

Administrator: in the first table, I figured it out to be the sum of the devtmpfs and tmpfs file systems, which are 2.0 GB each.
Busy Penguin: The dismay was caused by the disk usage reported by My Computer on the desktop, under Disk Information. It stands at 35.3 GB right now, while the commands report less than 12 GB. Sorry, I don’t know how to insert a screenshot.

Maybe it’s an issue with the KDE GUI, since the system commands report close values.

It was a problem with the reserved block count. I set it to zero, since root has its own partition where both /var and /tmp are located. Thank you again.

It is not a good idea to set the reserved percentage to 0. The default of 5% (IIRC) is a bit large on today’s big partitions, but I advise not to go below 1%. This helps against fragmentation.

To get a correct size reading for / & /home in sysinfo (My Computer) you can use the following menu Run Command:

kdesu kfmclient openProfile webbrowsing sysinfo:/

You can also create a Link To Application on your desktop and use this as the command. You must enter the root password to run it. It will show the correct size of your partitions based on being root and not a normal user. Give it a try.

Thank You,

Thank you both, I will reserve 1% and run the command when I return home in the evening. Suppose I schedule the

tune2fs

utility (with the appropriate options) to run regularly, say at least once a month; would I still be in danger of disk fragmentation?

Nice week.

No need to set the 1% more than once! You run

tune2fs -m 1 /dev/sda7

and it will stay at 1% until you change it again (or the disk fails, whichever happens first, lol!).
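To see how much space each setting ties up on this particular /home, a little arithmetic helps; the partition size below is taken from the fdisk output earlier in the thread (the Blocks column, in 1 KiB units):

```shell
# Space tied up by the reserved-block percentage on the 500 GiB /home
# (you would apply a setting with: tune2fs -m <pct> /dev/sda7, as root)
fs_kib=524288000
for pct in 1 2 5; do
  echo "${pct}%: $((fs_kib * pct / 100 / 1024 / 1024)) GiB reserved"
done
```

You can read the currently configured value back with `tune2fs -l /dev/sda7` (also root-only) and looking for the reserved block count.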

You seem to think that it reorganizes something. That is not the case. It simply records the number in the file system’s on-disk administration. When the kernel (the ext2/3/4 driver) has to allocate new blocks and this would result in less free space on the disk than the percentage dictates, the I/O is rejected (error).
Except when it is for user root; then new blocks are allocated until there is nothing left.

Once you understand the above statement, you will understand:
. that root can still create new data on an fs that is beyond the reserved limit, which is nice on a system disk, because then e.g. logging can go on. Of course the system manager must see this happening and do something about it before all is eaten.
. that when e.g. the fs has 5% reserved and is 95% full and you change the reserved to 1%, space will become available immediately.
. that when the fs has 1% reserved and is 98% full and you change the reserved to 5%, no new blocks will be allocated until, by deletion of files, the usage comes below 95%. Thus nothing is deleted by the change itself!
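The allocation rule described above can be sketched as plain shell pseudologic; the numbers are hypothetical and this is only an illustration of the check, not the real kernel code:

```shell
# Illustrative only: the ext driver refuses non-root allocations once
# free space would drop below the reserved amount; root is exempt.
total=100; reserved=5; used=96      # hypothetical block counts
free=$((total - used))
if [ "$free" -le "$reserved" ]; then
  echo "ENOSPC for normal users (root can still write)"
fi
```

With 4 blocks free and 5 reserved, a normal user’s write is rejected even though the disk is not literally full; lowering the reserved count to 1 would make those blocks usable immediately.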

Disk fragmentation is already very low for these types of file systems. But you can imagine that one of the methods used to achieve this is allocating contiguous blocks when a larger lump is asked for. These large lumps are easier to find when there is some leeway on the fs. But with big file systems you have that leeway anyway, even at 1% reserved. (I remember that 10% was thought to be a reasonable default with the smaller disks of about 20 years ago.)

Thank you, Global Moderator, this was most educative. I have set the reserved percentage to 2; hopefully that will suffice. It actually was some sort of misunderstanding on my part: the man pages aren’t always written in the clearest of ways, and I didn’t want to render a big chunk of my storage redundant.

Please keep up the good work; I reckon you save a lot of people untold suffering, judging from your numerous and prompt posts.

You are quite welcome.

But remember, I was writing this as another member on these Forums, helping you to understand some background information, not as moderator.