I have no idea what information I should provide, so I’ll “panic” and give some general info.
The root partition is using btrfs.
$ inxi -GSaz
System:
Kernel: 6.9.9-1-default arch: x86_64 bits: 64 compiler: gcc v: 13.3.0
clocksource: tsc avail: hpet,acpi_pm
parameters: BOOT_IMAGE=/boot/vmlinuz-6.9.9-1-default
root=UUID=03fd253f-1f76-4d63-888a-5b3eaaa25ad1 splash=silent quiet
security=apparmor nvidia_drm.modeset=1 fbdev=1 mitigations=auto
Desktop: KDE Plasma v: 6.1.3 tk: Qt v: N/A info: frameworks v: 6.4.0
wm: kwin_wayland tools: avail: xscreensaver vt: 3 dm: SDDM Distro: openSUSE
Tumbleweed 20240726
Graphics:
Device-1: NVIDIA GA106 [GeForce RTX 3060 Lite Hash Rate] vendor: ASUSTeK
driver: nvidia v: 555.58 alternate: nouveau,nvidia_drm non-free: 550.xx+
status: current (as of 2024-06; EOL~2026-12-xx) arch: Ampere code: GAxxx
process: TSMC n7 (7nm) built: 2020-2023 pcie: gen: 4 speed: 16 GT/s
lanes: 16 ports: active: none off: DP-1 empty: DP-2,DP-3,HDMI-A-1
bus-ID: 13:00.0 chip-ID: 10de:2504 class-ID: 0300
Display: wayland server: X.org v: 1.21.1.12 with: Xwayland v: 24.1.1
compositor: kwin_wayland driver: X: loaded: nvidia gpu: nvidia display-ID: 0
Monitor-1: DP-1 res: 1920x1080 size: N/A modes: N/A
API: EGL v: 1.5 hw: drv: nvidia platforms: device: 0 drv: nvidia device: 2
drv: swrast gbm: drv: nvidia surfaceless: drv: nvidia wayland: drv: nvidia
x11: drv: zink inactive: device-1
API: OpenGL v: 4.6.0 compat-v: 4.5 vendor: nvidia mesa v: 555.58
glx-v: 1.4 direct-render: yes renderer: NVIDIA GeForce RTX 3060/PCIe/SSE2
memory: 11.72 GiB display-ID: :1.0
API: Vulkan v: 1.3.290 layers: 10 device: 0 type: discrete-gpu
name: NVIDIA GeForce RTX 3060 driver: N/A device-ID: 10de:2504
surfaces: xcb,xlib,wayland
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p2 100G 99G 3.0M 100% /
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 16G 82M 16G 1% /dev/shm
efivarfs 128K 56K 68K 45% /sys/firmware/efi/efivars
tmpfs 6.3G 2.4M 6.3G 1% /run
/dev/nvme0n1p2 100G 99G 3.0M 100% /.snapshots
/dev/nvme0n1p2 100G 99G 3.0M 100% /srv
/dev/nvme0n1p2 100G 99G 3.0M 100% /boot/grub2/i386-pc
/dev/nvme0n1p2 100G 99G 3.0M 100% /boot/grub2/x86_64-efi
/dev/nvme0n1p2 100G 99G 3.0M 100% /root
/dev/nvme0n1p2 100G 99G 3.0M 100% /opt
/dev/nvme0n1p2 100G 99G 3.0M 100% /usr/local
/dev/nvme0n1p2 100G 99G 3.0M 100% /var
tmpfs 16G 4.7M 16G 1% /tmp
/dev/nvme0n1p1 511M 5.9M 506M 2% /boot/efi
/dev/nvme0n1p3 806G 601G 164G 79% /home
/dev/sda1 916G 424G 446G 49% /mnt/NewVolume
tmpfs 3.2G 14M 3.2G 1% /run/user/1000
# du -hcsx /*
4.0K /bin
446M /boot
0 /dev
21M /etc
601G /home
4.0K /lib
4.0K /lib64
0 /mnt
2.0G /opt
du: cannot access '/proc/2418/task/2480/fdinfo/685': No such file or directory
du: cannot access '/proc/2418/task/2480/fdinfo/718': No such file or directory
du: cannot access '/proc/2418/task/2499/fd/723': No such file or directory
du: cannot access '/proc/2418/task/3228/fdinfo/719': No such file or directory
du: cannot access '/proc/2930/task/2975/fdinfo/86': No such file or directory
du: cannot access '/proc/2930/task/2975/fdinfo/88': No such file or directory
du: cannot access '/proc/2930/task/3130/fd/70': No such file or directory
du: cannot access '/proc/2930/task/3130/fd/86': No such file or directory
du: cannot access '/proc/2930/task/3130/fd/88': No such file or directory
du: cannot access '/proc/2930/task/3223/fdinfo/70': No such file or directory
du: cannot access '/proc/2930/task/3223/fdinfo/86': No such file or directory
du: cannot access '/proc/2930/task/3223/fdinfo/88': No such file or directory
du: cannot read directory '/proc/4092/task/4092/net': Invalid argument
du: cannot read directory '/proc/4092/net': Invalid argument
du: cannot read directory '/proc/16629/task/16629/net': Invalid argument
du: cannot read directory '/proc/16629/net': Invalid argument
du: cannot access '/proc/115472/task/115472/fd/4': No such file or directory
du: cannot access '/proc/115472/task/115472/fdinfo/4': No such file or directory
du: cannot access '/proc/115472/fd/3': No such file or directory
du: cannot access '/proc/115472/fdinfo/3': No such file or directory
0 /proc
45M /root
2.2M /run
4.0K /sbin
0 /srv
0 /sys
8.0K /tmp
16G /usr
64G /var
683G total
# snapper list
# │ Type │ Pre # │ Date │ User │ Cleanup │ Description │ Userdata
─────┼────────┼───────┼──────────────────────────────────┼──────┼─────────┼───────────────────────┼──────────────
0 │ single │ │ │ root │ │ current │
159* │ single │ │ Sun 26 May 2024 02:31:42 AM EEST │ root │ │ writable copy of #157 │
430 │ pre │ │ Fri 19 Jul 2024 06:26:58 PM EEST │ root │ number │ zypp(zypper) │ important=yes
431 │ post │ 430 │ Fri 19 Jul 2024 06:28:45 PM EEST │ root │ number │ │ important=yes
516 │ pre │ │ Tue 30 Jul 2024 08:08:32 AM EEST │ root │ number │ yast firewall │
517 │ post │ 516 │ Tue 30 Jul 2024 08:08:46 AM EEST │ root │ number │ │
518 │ pre │ │ Tue 30 Jul 2024 09:54:22 PM EEST │ root │ number │ yast snapper │
519 │ post │ 518 │ Tue 30 Jul 2024 09:54:29 PM EEST │ root │ number │ │
$ sudo btrfs filesystem usage /
Overall:
Device size: 100.00GiB
Device allocated: 100.00GiB
Device unallocated: 1.00MiB
Device missing: 0.00B
Device slack: 0.00B
Used: 97.92GiB
Free (estimated): 2.90MiB (min: 2.90MiB)
Free (statfs, df): 2.90MiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 95.86MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:95.94GiB, Used:95.93GiB (100.00%)
/dev/nvme0n1p2 95.94GiB
Metadata,DUP: Size:2.00GiB, Used:1015.19MiB (49.57%)
/dev/nvme0n1p2 4.00GiB
System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
/dev/nvme0n1p2 64.00MiB
Unallocated:
/dev/nvme0n1p2 1.00MiB
I tried cleaning up snapshots with snapper and got this error:
$ sudo snapper cleanup number
quota not working (qgroup not set)
I might be inclined to run zypper cc as root as a start.
Do you have fstrim set to run? (On my TW system, there’s a service for it that’s running - you can check systemctl status fstrim to see if it’s running for you.)
You can also remove specific snapshots using snapper rm <num-or-range>. In your case, I’d probably delete snapshots 430-431; that’s not likely to free up a ton of space, but maybe enough to give you a little breathing space.
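For example, purely as a sketch using the snapshot numbers from your list above (double-check them against your own snapper list before deleting anything):
$ sudo snapper rm 430-431
$ sudo snapper list
snapper rm takes a single number or a range like 430-431; a pre/post pair is best removed together.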
$ systemctl status fstrim
○ fstrim.service - Discard unused blocks on filesystems from /etc/fstab
Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static)
Active: inactive (dead)
TriggeredBy: ● fstrim.timer
Docs: man:fstrim(8)
$ systemctl start fstrim
fstrim was not enabled, and I don’t know how to use it, so I ran it manually:
$ sudo fstrim -v /
/: 4.5 GiB (4838318080 bytes) trimmed
$ sudo zypper cc
All repositories have been cleaned up.
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme0n1p2 104857600 100127464 2246600 98% /
devtmpfs 4096 8 4088 1% /dev
tmpfs 16389576 83208 16306368 1% /dev/shm
efivarfs 128 56 68 45% /sys/firmware/efi/efivars
tmpfs 6555832 2388 6553444 1% /run
/dev/nvme0n1p2 104857600 100127464 2246600 98% /.snapshots
/dev/nvme0n1p2 104857600 100127464 2246600 98% /srv
/dev/nvme0n1p2 104857600 100127464 2246600 98% /boot/grub2/i386-pc
/dev/nvme0n1p2 104857600 100127464 2246600 98% /boot/grub2/x86_64-efi
/dev/nvme0n1p2 104857600 100127464 2246600 98% /root
/dev/nvme0n1p2 104857600 100127464 2246600 98% /opt
/dev/nvme0n1p2 104857600 100127464 2246600 98% /usr/local
/dev/nvme0n1p2 104857600 100127464 2246600 98% /var
tmpfs 16389576 4756 16384820 1% /tmp
/dev/nvme0n1p1 523248 5976 517272 2% /boot/efi
/dev/nvme0n1p3 844195144 630195360 171043600 79% /home
/dev/sda1 960302096 444248312 467199360 49% /mnt/NewVolume
tmpfs 3277912 13452 3264460 1% /run/user/1000
You may notice the following text –
TriggeredBy: ● fstrim.timer
Therefore, please type –
# systemctl status fstrim.timer
You may notice something similar to –
Trigger: Thu 2024-08-01 00:36:51 CEST; 1 day 1h left
Triggers: ● fstrim.service
The systemd “fstrim” service is triggered by the “fstrim” timer, which has a default period between triggers that can be changed as described in the systemd man pages.
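If you ever want it to run on a different schedule, a systemd drop-in override is one way to do it - only a sketch, with “daily” as an arbitrary example value:
# systemctl edit fstrim.timer
(in the editor that opens, add:)
[Timer]
OnCalendar=
OnCalendar=daily
# systemctl list-timers fstrim.timer
The empty OnCalendar= line clears the packaged weekly schedule before the new value is applied.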
> rpm --query --whatprovides /etc/systemd/system/
systemd-254.13-150600.4.5.1.x86_64
>
> rpm --query --whatprovides /etc/systemd/system/fstrim.timer
file /etc/systemd/system/fstrim.timer is not owned by any package
>
> rpm --query --whatprovides /usr/lib/systemd/system/fstrim.*
util-linux-systemd-2.39.3-150600.4.6.2.x86_64
util-linux-systemd-2.39.3-150600.4.6.2.x86_64
>
Having written all that: if your system partition uses a Btrfs file system, you should check the number of snapshots “snapper” has dropped into that file system.
- Please investigate the file space allocated to Btrfs snapshots.
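One way to do that, sketched here assuming the default openSUSE layout where each snapshot lives under /.snapshots/<number>/snapshot (it can take a while on a large file system):
# btrfs subvolume list -o /.snapshots
# btrfs filesystem du -s /.snapshots/*/snapshot
The “Exclusive” column of the second command is roughly the space that deleting that particular snapshot would give back.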
fstrim looks to be loaded; it doesn’t run constantly, so an inactive state isn’t unusual.
With btrfs, df doesn’t really give a good accounting of available disk space. What do you see with btrfs filesystem df / ?
Your output from du also may be providing information about multiple filesystems, so it’s difficult to see, for example, if /home is part of the root partition or not. Given that /home has 600+ GB of data in it, and your subject indicates root has 100 GB of space allocated, we’re not seeing only root.
Try: sudo du -hcxd 1 / (or run it without sudo, but as root). This will give you the output for just what’s in the root filesystem (-x) and a depth of 1 (-d 1), and make it easier to see what is actually being used in the root partition.
# systemctl status fstrim.timer
● fstrim.timer - Discard unused filesystem blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; preset: enabled)
Active: active (waiting) since Tue 2024-07-30 21:21:36 EEST; 2h 45min ago
Trigger: Mon 2024-08-05 00:19:09 EEST; 5 days left
Triggers: ● fstrim.service
Docs: man:fstrim
Jul 30 21:21:36 localhost.localdomain systemd[1]: Started Discard unused filesystem blocks once a week.
@hendersj
# btrfs filesystem df /
Data, single: total=95.94GiB, used=93.80GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.00GiB, used=820.83MiB
GlobalReserve, single: total=95.86MiB, used=0.00B
# du -hcxd 1 /
21M /etc
16G /usr
446M /boot
0 /mnt
17G /
17G total
Something’s not adding up here, and I’m not sure why.
I’m surprised that snapper list doesn’t show you the size of each snapshot. When I run it on my TW installation, I have a column that also shows “Used Space” between the “User” and “Cleanup” columns.
From what I’ve been able to read, it sounds like qgroups aren’t enabled in your installation, as that is apparently needed for the used space measurements. Based on what you’re seeing, it seems like maybe you have a snapshot pair for a larger update (probably the pair for 430/431). You could remove that pair, but it still seems strange that that would account for 80+ GB of storage.
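If you want that column, qgroups can be switched on after the fact - a sketch, assuming the stock openSUSE snapper setup with the default “root” config:
# snapper setup-quota
# btrfs qgroup show -p /
snapper setup-quota enables btrfs quota on the file system and creates the qgroup that snapper uses for its space accounting; note that quota tracking adds some overhead, and the initial rescan can take a while on a nearly full filesystem.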
Deleting the 430-431 pair has not done anything significant to free up space:
$ sudo snapper list
# │ Type │ Pre # │ Date │ User │ Cleanup │ Description │ Userdata
─────┼────────┼───────┼──────────────────────────────────┼──────┼─────────┼───────────────────────┼─────────────
0 │ single │ │ │ root │ │ current │
159* │ single │ │ Sun 26 May 2024 02:31:42 AM EEST │ root │ │ writable copy of #157 │
432 │ pre │ │ Tue 30 Jul 2024 10:40:47 PM EEST │ root │ number │ zypp(zypper) │ important=no
433 │ post │ 432 │ Tue 30 Jul 2024 10:40:54 PM EEST │ root │ number │ │ important=no
434 │ pre │ │ Wed 31 Jul 2024 01:06:46 AM EEST │ root │ number │ yast snapper │
435 │ post │ 434 │ Wed 31 Jul 2024 01:07:03 AM EEST │ root │ number │ │
$ sudo btrfs filesystem df /
Data, single: total=95.94GiB, used=90.35GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.00GiB, used=577.83MiB
GlobalReserve, single: total=95.86MiB, used=0.00B
@ionlypostwhenineedhelp it won’t until you run the btrfs maintenance tools…
See SDB:BTRFS - openSUSE Wiki
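One common pattern (roughly what that page describes, as I recall) is to balance with a usage filter so only mostly-empty chunks get rewritten first - a sketch, with the percentages as example values only:
#rewrite only data chunks that are at most N% full, working upwards
$ for pct in 0 5 10 20 40; do
      sudo btrfs balance start -dusage=$pct /
  done
-dusage=N skips any data chunk that is more than N percent full, which is why a nearly full filesystem can report that it had to relocate few or no chunks at small values.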
So in my case, I would have to run
sudo btrfs fi balance start / -dusage=5
I’ve never run this command before, and I didn’t really understand the explanation the documentation provides for it.
$ sudo btrfs fi balance start / -dusage=5
Done, had to relocate 0 out of 103 chunks
$ sudo btrfs filesystem df /
Data, single: total=95.94GiB, used=90.29GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=2.00GiB, used=573.86MiB
GlobalReserve, single: total=95.86MiB, used=0.00B
@ionlypostwhenineedhelp then;
btrfs balance start --full-balance --bg /
#run this command to check status
btrfs balance status -v /
$ sudo btrfs balance start --full-balance --bg /
$ sudo btrfs balance status -v /
Balance on '/' is running
9 out of about 103 chunks balanced (10 considered), 91% left
Dumping filters: flags 0x7, state 0x1, force is off
DATA (flags 0x0): balancing
METADATA (flags 0x0): balancing
SYSTEM (flags 0x0): balancing
#skipping to the end of the balancing process
$ sudo btrfs balance status -v /
Balance on '/' is running
101 out of about 103 chunks balanced (102 considered), 2% left
Dumping filters: flags 0x7, state 0x1, force is off
DATA (flags 0x0): balancing
METADATA (flags 0x0): balancing
SYSTEM (flags 0x0): balancing
$ sudo btrfs balance status -v /
No balance found on '/'
$ sudo btrfs filesystem df /
Data, single: total=91.96GiB, used=90.29GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=1.00GiB, used=569.88MiB
GlobalReserve, single: total=88.12MiB, used=0.00B
@ionlypostwhenineedhelp what about the output of btrfs qgroup show /
$ sudo btrfs qgroup show /
ERROR: can't list qgroups: quotas not enabled
@ionlypostwhenineedhelp then it could be journals or coredumps, so what does the following show;
journalctl --disk-usage
#Just look at the space they are using in the output column
coredumpctl list
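and, if either of those turns out to be large, a sketch of how they can be trimmed (the limits are just example values; run as root):
journalctl --vacuum-size=200M
#or time-based: journalctl --vacuum-time=2weeks
#core dumps are plain files under /var/lib/systemd/coredump and can be listed and removed by hand:
ls -lh /var/lib/systemd/coredump/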
$ sudo journalctl --disk-usage
Archived and active journals take up 217.1M in the file system.
$ sudo coredumpctl list
COREFILE EXE SIZE
present /usr/bin/plasma-discover 109.0M
present /app/bin/vesktop/vesktop.bin 7.3M
present /usr/bin/plasmashell 64.4M
present /app/lib/librewolf/librewolf 45.4M
present /home/bob/Downloads/mercury_123.0.1_linux_AVX/mercury/mercury 56.7M
present /home/bob/Downloads/mercury_123.0.1_linux_AVX/mercury/mercury 38.8M
present /home/bob/Downloads/mercury_123.0.1_linux_AVX/mercury/mercury 56.6M
present /home/bob/Downloads/mercury_123.0.1_linux_AVX/mercury/mercury 31.6M
present /home/bob/Downloads/mercury_123.0.1_linux_AVX/mercury/mercury 51.3M
present /home/bob/Downloads/stegdetect/Stegdetect/build/conftest 24.1K
present /home/bob/.local/share/Steam/ubuntu12_64/gldriverquery 1.2M
present /home/bob/.local/share/Steam/ubuntu12_32/gldriverquery 12.4M