Leap 15.0 Device Too Full

Hello:

I am receiving this popup from Kdiskfree

Device [/dev/mapper/system-home] on [/] is critically full.

and some operations (file conversions, for example), are failing.

I have looked at /dev/mapper/system-home using dolphin, which is as much as I know how to do,
and dolphin shows nothing inside (maybe binary?).

System was installed using LVM on /dev/sda1 only, and up to now has been running fine.

4 installed drives (as copied from the Linux Partitioner):

/dev/sda1    74.51 GiB    Linux LVM
/dev/sdb1    465.76 GiB    Linux Native    Ext4    1_Docs     /home/chuck/1_Docs
/dev/sdc1    465.76 GiB    Linux Native    Ext4    2_Media    /home/chuck/2_Media
/dev/sdd1    1.82 TiB                      Ext4    3_Media    /home/chuck/3_Media
/dev/system        74.51 GiB    Linux LVM
/dev/system/home    20.69 GiB    LV    EXT4    /
/dev/system/root    20.00 GiB    LV    EXT4
/dev/system/swap    7.00 GiB    LV    Swap    Swap

The above was copied from the Linux Partitioner, as I don’t know how
to list all this information on the cli.
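For future reference, a few commands that produce much the same overview in text form (assuming the usual util-linux and lvm2 tools are installed, which they are on a standard Leap install):

```shell
# Block devices with filesystems, labels and mount points:
lsblk -f

# Disk usage of mounted filesystems, human-readable:
df -h

# LVM details (need root): physical volumes, volume groups, logical volumes:
sudo pvs
sudo vgs
sudo lvs
```

The output of these is plain text, so it can be pasted straight into a forum post.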

/dev/mapper/system-home is currently running at 98% to 99% full,
as indicated by KDiskFree, and I’m afraid this thing will just lock up
and not run at all. Downloading a video over a couple hundred MB results
in a “Disk Full” (or equivalent) message, and the download fails.

Symlinks in the “home” directory lead to other drives with plenty of space,
and downloads appeared to go to those spaces without a problem - but now
it seems that files appear to be completely downloaded to “home” before
transferring to storage locations. I am not clear on this.

My questions are:
how to find what’s occupying the space in /home/chuck,
how to determine what is expanding to fill the space,
how to clear out what can safely be cleared,
and/or how to expand the space available for /home/chuck
with suggestions for cli commands to create readable info for the gurus.

Sorry I am (still) not sufficiently familiar with bash, etc.
I appreciate any help…

Chuck

The KDE graphical disk usage analyzer is called Filelight. It is not installed by default on Tumbleweed, so I think not on Leap either, but it should be easy to install and should give you insight into what’s occupying the space in /home/chuck.

On the cli, “df -h” (Disk Free with human readable numbers) should give you a textual overview of the disk use.

… and “du -h” will show you directory and subdirectory usage.
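Raw “du -h” output over a whole filesystem can be overwhelming, so a common trick is to limit the depth and sort the result so the biggest directories end up at the bottom. The -x flag matters in your setup: it keeps du on the root filesystem instead of descending into the media drives mounted under /home/chuck.

```shell
# Per-directory totals one level deep, sorted smallest to largest;
# -x stays on the root filesystem, -h prints human-readable sizes,
# and sort -h sorts those human-readable sizes correctly
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h
```

The last few lines of that output are the directories worth drilling into.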

Hi again, and thanks for responding!

Results from df -ah:

Filesystem               Size  Used Avail Use% Mounted on
sysfs                       0     0     0    - /sys
proc                        0     0     0    - /proc
devtmpfs                 7.3G     0  7.3G   0% /dev
securityfs                  0     0     0    - /sys/kernel/security
tmpfs                    7.4G   56M  7.3G   1% /dev/shm
devpts                      0     0     0    - /dev/pts
tmpfs                    7.4G   18M  7.4G   1% /run
tmpfs                    7.4G     0  7.4G   0% /sys/fs/cgroup
cgroup                      0     0     0    - /sys/fs/cgroup/unified
cgroup                      0     0     0    - /sys/fs/cgroup/systemd
pstore                      0     0     0    - /sys/fs/pstore
cgroup                      0     0     0    - /sys/fs/cgroup/devices
cgroup                      0     0     0    - /sys/fs/cgroup/freezer
cgroup                      0     0     0    - /sys/fs/cgroup/cpu,cpuacct
cgroup                      0     0     0    - /sys/fs/cgroup/memory
cgroup                      0     0     0    - /sys/fs/cgroup/rdma
cgroup                      0     0     0    - /sys/fs/cgroup/blkio
cgroup                      0     0     0    - /sys/fs/cgroup/pids
cgroup                      0     0     0    - /sys/fs/cgroup/net_cls,net_prio
cgroup                      0     0     0    - /sys/fs/cgroup/perf_event
cgroup                      0     0     0    - /sys/fs/cgroup/hugetlb
cgroup                      0     0     0    - /sys/fs/cgroup/cpuset
/dev/mapper/system-home   21G   19G  296M  99% /
systemd-1                   0     0     0    - /proc/sys/fs/binfmt_misc
mqueue                      0     0     0    - /dev/mqueue
hugetlbfs                   0     0     0    - /dev/hugepages
debugfs                     0     0     0    - /sys/kernel/debug
/dev/sdc1                459G  138G  320G  31% /home/chuck/2_Media
/dev/sdd1                1.8T  686G  1.2T  38% /home/chuck/3_Media
/dev/sdb1                459G  201G  235G  47% /home/chuck/1_Docs
tmpfs                    1.5G   24K  1.5G   1% /run/user/1000
tracefs                     -     -     -    - /sys/kernel/debug/tracing
fusectl                     0     0     0    - /sys/fs/fuse/connections
gvfsd-fuse                  0     0     0    - /run/user/1000/gvfs

Baffled, here. The media disks show at most 47% usage,
yet /dev/mapper/system-home shows 99% - is there a way to know what’s occupying that space?

edit: du -h worked, but brought an overwhelming response - I didn’t know what to do with it…

Take a look this way:

ncdu /home

No mention of /dev/mapper/system-home

ncdu /dev/mapper/system-home
┌───Error!──────────────────────────────────────┐
│ Error: could not open /dev/dm-0               │
│   Error changing directory: Not a directory   │
│                  press any key to continue... │
└───────────────────────────────────────────────┘

I want to think “how can I expand /dev/mapper/system-home to hold whatever it contains”, but whatever it is, it is GROWING, so simply expanding the space doesn’t seem a useful action - and so far, I can’t find out what’s actually inside. I think maybe it’s a fungus … maybe Linux doesn’t get viruses but does get fungi…?

Arrgh! I’m just short of a full reinstall here unless something shows up as a possible fix…

Like du, ncdu works on filesystems and ordinary files, not device files. What I wrote was meant to be used exactly as written, or as its man page directs.

“/dev/mapper/system-home” is the special device. You need to check the mount point.

Looking a few posts earlier in this thread, I see that “/dev/mapper/system-home” is mounted as “/” (your root file system).
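A quick way to confirm a mount point on the cli, assuming the util-linux findmnt tool (standard on Leap):

```shell
# Show where a given device is mounted
findmnt /dev/mapper/system-home

# Or the reverse: which device backs a given directory
findmnt --target /home/chuck
```

The TARGET column in the first command's output is the directory you should point du or ncdu at.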

So run

du /

or

ncdu /

to see where the space has gone.

You can often free up some space by removing large files in “/tmp” or older files in “/var/log”. Maybe also check whether there are large coredumps in “/var/lib/systemd/coredump”.

Not sure how to check the mount point on /dev/mapper/system-home - I did run du / and ncdu / ;
du / gave me tons of filenames I had no idea what to do with, and
ncdu / made more sense to me but again - no idea what to do after seeing it.

I have to leave this for now - back after a few hours.

Thanks for responding…

To find the mount point, I usually look at the output of “df” and check the first column for the device.

I just used the output you provided in post #4 above.

Yes, “du” can be hard when you are looking at the top of the file system. And I’ve never used “ncdu”, so I can’t comment on that.

Typically I would do:


cd /
du -s *

That gives a smaller output, because it just lists totals for subdirectories.

Then I choose one of those subdirectories to look at more closely. However, knowing what to expect does partly depend on experience. Expect “usr” to be big, because that’s where the software goes. But “var” should be smaller. Here it is around 800M.

ncdu includes navigation, much like mc, so you can easily examine each directory’s content. The listings are sorted by file size, so the biggest files and directories are always at the top.

Thanks. I should give it a try sometime.

Wow: a small program that helps make sense of the data from “du /” - this helped a lot…

ncdu /
cd /
du -s *

I think I’m getting closer, here…

PEGASUS:/ # du -s *
1628    bin
80444   boot
1024152 chuck
0       dev
19724   etc
1082110608      home
955252  lib
10180   lib64
16      lost+found
4       mnt
216356  opt
du: cannot access 'proc/29817/task/29817/fd/4': No such file or directory
du: cannot access 'proc/29817/task/29817/fdinfo/4': No such file or directory
du: cannot access 'proc/29817/fd/4': No such file or directory
du: cannot access 'proc/29817/fdinfo/4': No such file or directory
0       proc
24480   root
du: cannot access 'run/user/1000/gvfs': Permission denied
17824   run
11464   sbin
4       selinux
20      srv
0       sys
5231700 tmp
6432864 usr
593224  var
PEGASUS:/ # cd tmp
PEGASUS:/tmp # du -s *
4       6OQ8PZ.tmp
4       9K6EQZ.tmp
4       D0LEQZ.tmp
4       IAAIQZ.tmp
644     Luna-II-2.0.4.plasmoid
92      MozillaMailnews
4       TC8EQZ.tmp
36      Teal.tar.gz
4       Temp-e5943ffb-ecd2-42a4-a2ff-ed42a920a62a
4       V4B6PZ.tmp
4       X560SZ.tmp
4       akonadi-chuck.sLEsMm
8       akonadi-chuck.uLperG
60588   calibre-installer-cache
4       closeditems
4       firefox_chuck
36      hsperfdata_chuck
36      hsperfdata_root
4       jna-3506402
4       jniwrapper-3.8.4.Build.3.8.40141bc7e-635c-44b5-ae28-70507b10aeae
4       jniwrapper-3.8.4.Build.3.8.407977ae4-7d6b-4578-b1dd-5e38209595fb
4       jniwrapper-3.8.4.Build.3.8.4094657b7-e2cf-47af-8f9b-e2e7fe041e2c
4       jniwrapper-3.8.4.Build.3.8.40a5a28c4-5928-4660-bb43-036dcd71c239
4       jniwrapper-3.8.4.Build.3.8.4102701fe-2a4e-4819-8578-f8dbcafa26ee
4       jniwrapper-3.8.4.Build.3.8.423fa5cda-8e22-438e-8b39-0472f04fd87c
4       jniwrapper-3.8.4.Build.3.8.4256208fe-4157-4709-b325-a7347647e585
4       jniwrapper-3.8.4.Build.3.8.4293c00ad-aa89-415b-9535-5f2d704873fc

        ----------------------------------------------------------
        List shortened by Crapload of 4-byte jniwrappers for this post
        ----------------------------------------------------------

4       jniwrapper-3.8.4.Build.3.8.4d27eb4e4-07f2-46cb-b0fb-ee11cad4c2a4
4       jniwrapper-3.8.4.Build.3.8.4d3f25920-5219-4088-819e-1b6ab5f30308
4       jniwrapper-3.8.4.Build.3.8.4e32b0814-6314-4586-9c15-77a59aaa32e7
4       jniwrapper-3.8.4.Build.3.8.4ed6e7df6-8c6e-4644-9448-5bfa9ce87ff4
4       jniwrapper-3.8.4.Build.3.8.4f1610176-ceea-4bf0-b90c-06d9b8113422
4       jniwrapper-3.8.4.Build.3.8.4f3048828-93cc-4a81-b4bb-eea8cc97f01e
4       k3b.h32248
8       kde-chuck
4       khtmlcacheFY5419.tmp
8       khtmlcacheGx5419.tmp
8       khtmlcacheJG5419.tmp
4       khtmlcacheJy5419.tmp
4       khtmlcacheVW5419.tmp
4       khtmlcacheWx5419.tmp
8       khtmlcachebY5419.tmp
4       khtmlcachejS5419.tmp
4       khtmlcachemz5419.tmp
4       khtmlcachevt5419.tmp
8       khtmlcachevw5419.tmp
4       khtmlcachexa5419.tmp
4       lu135547tliz2.tmp
1460    lu19799r1kmmz.tmp
32      lu4418zwa0hj.tmp
72      mozilla_chuck0
4       nsemail.eml
48      nsma
252     nsmail-1.jpeg
36      nsmail.eml
252     nsmail.jpeg
192     qdirstat-chuck
16      qdirstat-root
0       qipc_sharedmemory_soliddiskinfomemac5ffa537fd8798875c98e190df289da7e047c05
0       qipc_systemsem_soliddiskinfomemac5ffa537fd8798875c98e190df289da7e047c05
0       qipc_systemsem_soliddiskinfosem92d02dca794587d686de797d715edb3b58944546
0       qtsingleapp-smplay-ca73-3e8-lockfile
4       runtime-root
0       sddm-:0-EIhzAG
0       sddm-:0-WGIjLP
0       sddm-:0-XKvhQX
0       sddm-:0-qKTeRS
0       sddm-:0-sazXNe
0       sddm-:0-xvzgTK
0       sddm-:0-ywFkRI
0       sddm-auth033ecc7b-2cf8-44b2-8903-949863ddce64
0       sddm-auth03b4dbac-d69c-42a1-aa5c-b63034e635b3
0       sddm-auth03b55873-3668-4ba0-aa4d-d088dbd68677
0       sddm-auth0744466b-2093-44af-afed-ac762eb1a31b
0       sddm-auth0dee7165-effc-49b8-935e-58acdd0bf766
0       sddm-auth0e571707-ec53-4aca-845b-e73e624f7df1

        ----------------------------------------------------------
        List shortened by Crapload of 0-byte sddms for this post
        ----------------------------------------------------------

0       sddm-auth15bc38f1-4310-480a-b614-5b3a145bace4
0       sddm-auth175174dc-d220-4369-bcfc-1fdd74c178c5
0       sddm-auth188796cb-3157-49c2-9942-30062776d063
0       sddm-auth1db3cead-3f8f-423a-a2e9-c9f12d4d7f45
0       sddm-auth1e37bb34-1645-4a25-a2fb-34b6f99eb678
0       sddm-auth22d7c0e3-7625-4187-9f32-d475d5422b2a
8       systemd-private-3f36f2ce22164872bb9cca19738dbf67-chronyd.service-rD9mhB
8       systemd-private-3f36f2ce22164872bb9cca19738dbf67-rtkit-daemon.service-u8ObMp
28      tmp-ok7.xpi
516     tmpaddon
504     tmpaddon-748360
516     tmpaddon-8d1b40
3016    tmpaddon-e65837
225140  vdh-10001dO6YhPAr0TPO.tmp
715664  vdh-112619Kp8RWt313oA.tmp
12808   vdh-11261DT0NLY47eL4n.tmp
175308  vdh-11261Hzu34gEw0usj.tmp
64220   vdh-11261QEcFUUDtcfsu.tmp
391216  vdh-11261SbTnmXKEiHC0.mp4
1158068 vdh-11261TyXFOXPBoamm.tmp
74844   vdh-11261XZsYxtzlGZIK.tmp
378252  vdh-11261atu4fmjyLn8k.tmp
50680   vdh-11261dLzkmPuRmim0.tmp
7600    vdh-11261hNvV8q96fUyv.tmp
66904   vdh-11261i5zLP7UR60hD.tmp
893136  vdh-11261q1YVKZZGSDmd.mp4
14772   vdh-11261rsxWcUcdfl1T.tmp
325892  vdh-13741smvZIo1Siq9L.tmp.part
4       vdh-1869IobzWKbj2Rm4.tmp
108052  vdh-187444dfR3rdwETsr.tmp
102992  vdh-18744gyOstW4EemL7.tmp
397356  vdh-19845u68I9yglRSgU.tmp
4       vdh-26189el9N6evtzGT0.tmp
4       vdh-285964qUiMWh4TTTm.tmp
4       vdh-30234sTONHFpo3X5E.tmp
4       vdh-6678hqwoO3Dul9JZ.tmp
4       xauth-0-_0
4       xauth.XXXXMr8ZRv
4       xauth.XXXXWxcMLX

Those really big .tmp files near the end are undoubtedly the downloads I was
doing with the “Video DownloadHelper” add-on in Firefox (the vdh-*.tmp files).
I think repeated borked downloads are leaving the tmp files behind instead of
completing and deleting them, which would account for the expanding
directory contents.

Log files look pretty reasonable.

So presuming I can actually delete/erase any of these, which are safely
deletable? Are there things in the /tmp directory that should be retained?

Thanks again for looking at this…

Chuck

You can delete anything in “/tmp” that is from earlier than the latest boot time – unless you recognize something that you want.

Yes, your “/var/log” is not too bad. But if space is tight, you could delete files with names ending in “.xz”. Those are compressed logs for archival. But you usually don’t need to go back to those.

Am I seeing right that you have LVs of only about 20 GB each for / and /home?
Another du example:


sudo du -h --max-depth=1 /

I think you’re right. But it also looks like he has almost 27 GiB of free space unused on sda that he could add to his LV to expand / and/or /home.
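If that free space really is unallocated inside the volume group (the VG is 74.51 GiB and the LVs total about 48 GiB), growing the LV mounted at / online would look roughly like this. This is a sketch only - the device name and the +10G amount come from the partitioner listing earlier in the thread, and a backup first is always wise when resizing:

```shell
# Confirm how many free extents the volume group has
sudo vgs system

# Grow the LV mounted at / by 10 GiB; -r resizes the ext4
# filesystem in the same step (it calls resize2fs for you)
sudo lvextend -r -L +10G /dev/system/home
```

Growing an ext4 filesystem can be done while it is mounted, so no reboot or live media is needed for this direction of resize.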

Hi Knurpht:
Yes; Install is on 80GB sda1 with symlinks to 500GB sdb1 & sdc1 and 1T sdd1 storage.

chuck@PEGASUS:~> sudo du -h --max-depth=1 /
[sudo] password for root: 
1.6M    /bin
4.0K    /dev
1001M   /chuck
934M    /lib
du: cannot access '/run/user/1000/gvfs': Permission denied
9.4M    /run
79M     /boot
16K     /lost+found
10M     /lib64
212M    /opt
du: cannot access '/proc/4747/task/4747/fd/3': No such file or directory
du: cannot access '/proc/4747/task/4747/fdinfo/3': No such file or directory
du: cannot access '/proc/4747/fd/4': No such file or directory
du: cannot access '/proc/4747/fdinfo/4': No such file or directory
0       /proc
20M     /etc
6.2G    /usr
68M     /tmp
4.0K    /selinux
4.0K    /mnt
25M     /root
507M    /var
0       /sys
20K     /srv
1.1T    /home
12M     /sbin
1.1T    /
chuck@PEGASUS:~>

It had been running just fine until the downloader started leaving .tmp files in /tmp and filling up the disk.
That downloader normally cleans up after itself, but interrupted downloads didn’t clean up.

Guys - I’m up and running again, THANK YOU!