Unable to write to /var/lib/sudo/ts/fleamour: No space left on device.
Snapper may be filling up my 40GB root filesystem; shouldn't it automate deletion? This didn't happen with a 60GB root, but I've since reinstalled and gone with the defaults.
Hi
Make sure the maintenance service has run, check your snapper config, and then manually run the maintenance jobs;
systemctl start btrfsmaintenance-refresh.service
systemctl status btrfsmaintenance-refresh.service
vi /etc/snapper/configs/root
(I use)
# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="2-3"
NUMBER_LIMIT_IMPORTANT="2-3"
/etc/cron.daily/suse.de-snapper
/etc/cron.weekly/btrfs-balance
That should clean things up. My Tumbleweed systems don't stay on for long enough most of the time, so I always tend to run them manually…
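If you want to trigger the number-based cleanup by hand right away, something like this should work (assuming the config above):
sudo snapper list
sudo snapper cleanup number
sudo snapper list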
Oh, sorry, didn't mean to post this. Google said to halve the snapshot config to 5 instead of the default 10. Was really worried for a bit; could barely run a console!
Even with these measures I have ever-decreasing space; root is rammed full and I can't zypper dup. Not sure how to force btrfs-balance? The documentation is way over my head. Tried;
btrfs fi balance start </dev/sda3> -dusage=5
…run as root, too few arguments.
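For what it's worth, the angle brackets in that command are placeholder syntax; typed literally, the shell treats < and > as redirections, so btrfs never sees the device, hence "too few arguments". Balance also takes its filters before a mounted path rather than the raw device node, so the usual form would be something like:
btrfs balance start -dusage=5 /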
Total devices 1 FS bytes used 39.48GiB
devid 1 size 40.00GiB used 40.00GiB path /dev/sda3
X250:~ # systemctl status btrfsmaintenance-refresh.service
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Sun 2018-01-07 13:55:06 GMT; 29s ago
Process: 25572 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh systemd-timer (code=exited, status=0/SUCCESS)
Main PID: 25572 (code=exited, status=0/SUCCESS)
Jan 07 13:55:05 X250.ThinkPad systemd[1]: Starting Update cron periods from /etc/sysconfig/btrfsmaintenance...
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh script btrfs-scrub.sh for uninstall
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh script btrfs-defrag.sh for uninstall
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh script btrfs-balance.sh for uninstall
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh script btrfs-trim.sh for uninstall
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh timer btrfs-scrub for monthly
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh timer btrfs-defrag for none
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh timer btrfs-balance for weekly
Jan 07 13:55:05 X250.ThinkPad btrfsmaintenance-refresh-cron.sh[25572]: Refresh timer btrfs-trim for none
Jan 07 13:55:06 X250.ThinkPad systemd[1]: Started Update cron periods from /etc/sysconfig/btrfsmaintenance.
My computer is soft bricked…
Can you run YaST’s snapper utility to delete older snapshots?
Used snapper list and deleted from the CLI, but it's still rammed full.
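For reference, the CLI route looks something like this (the snapshot range is just an example):
sudo snapper list
sudo snapper delete 25-40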
Something else using space?? Huge logs, /tmp full, etc.?
To find where the culprit lies, run as root:
cd /
du -h --max-depth=1
That will take a while, but it will tell you where the huge things are that are filling the disk. My bet is still that it's /.snapshots.
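Since /home and any other mounts get counted too, you could keep them out of the total with something like this (the exclude pattern matches the path as du builds it):
cd /
du -h --max-depth=1 --exclude=./home .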
root has a padlock in Dolphin, and tmp is highlighted green in the terminal;
fleamour@X250:~> cd /
fleamour@X250:/> ls
bin boot dev etc home lib lib64 mnt opt proc root run sbin selinux srv sys tmp usr var
fleamour@X250:/> cd /tmp
fleamour@X250:/tmp> ls
dropbox-antifreeze-1xP7ev dropbox-antifreeze-kI9sOZ mozilla_fleamour0 sddm-auth4ec0adfc-897d-4c9c-a3cd-52ac871fa120 sddm-authc21a3e09-3016-4ba4-8727-a1406f8488e8
dropbox-antifreeze-21GCHq dropbox-antifreeze-Lfy3d0 qtsingleapp-qBitto-1809-3e8-lockfile sddm-auth5576c739-f47a-41ff-ace0-21d82ec8d465 sddm-authd1a187c0-9921-45e1-9663-58070fba0012
dropbox-antifreeze-42lRPI dropbox-antifreeze-lTOeWc runtime-root sddm-auth586fc416-d445-461a-8995-7603f4cd109b sddm-authd25ab046-8e60-4c7e-a509-5a6505c0372d
dropbox-antifreeze-AaFkrS dropbox-antifreeze-LY0Ov3 sddm-:0-iZJatp sddm-auth5c5d74a8-2a4d-4847-8c4e-e55f99ce01cd sddm-authd66e9837-6886-45f2-8706-82a39382c6ed
dropbox-antifreeze-At0veB dropbox-antifreeze-Lyy51h sddm-:0-ZbFQEl sddm-auth5da6272d-fa35-4dcc-95c5-15618ab9b946 sddm-authd7418b64-b6e7-470d-8dda-5fe2b7b7cdc9
dropbox-antifreeze-B8ce1w dropbox-antifreeze-m7XoMa sddm-:0-ZobWVg sddm-auth60ea494f-e012-476a-9f50-f4b394441cc2 sddm-authd88fa723-3d3b-4592-935b-048246dac07b
dropbox-antifreeze-BJWC8v dropbox-antifreeze-mrOM3b sddm-auth003be283-59e1-40dc-aa60-fa051e810254 sddm-auth668d985f-4319-4878-82da-01b13898375a sddm-authdf92568c-d0ca-4199-881b-ff02f8107a7f
dropbox-antifreeze-CZOFX0 dropbox-antifreeze-nUIJrT sddm-auth0381c88b-2817-43da-80a2-6786732d7eca sddm-auth682e4051-4dc0-4853-9018-ea40b7cafbbd sddm-authe682c749-f22f-4b83-ad6a-681a28034587
dropbox-antifreeze-DFXZL4 dropbox-antifreeze-oaO6SR sddm-auth073f1781-ba61-424b-9591-2d2da0f389d8 sddm-auth69f36ff6-0467-44e1-94fd-275d4b699b45 sddm-authe9b69864-198b-4793-ac71-4d07dbedff72
dropbox-antifreeze-e5ZBX8 dropbox-antifreeze-P6tMbM sddm-auth097f896d-c711-484a-9525-69c30e514e1d sddm-auth6e6267a0-ff5d-4cab-83ed-d2f3009c8a50 sddm-authf618345b-3c1c-42e6-b4e8-8b0c0ed04389
dropbox-antifreeze-f9nmOT dropbox-antifreeze-PKPbS1 sddm-auth0c01cd1a-f330-4b45-b911-fa197944501f sddm-auth740040a5-37e6-4dfa-94dd-8fce2b301fbf sddm-authf8c8c2b1-6fc2-43ca-b2c9-476623e42a0e
dropbox-antifreeze-fPnm6e dropbox-antifreeze-R874pW sddm-auth1d8d4df4-e3d3-418d-954a-47e805ae5e72 sddm-auth760fd599-f15c-466a-b6ea-028088d771da sddm-authf924cfbb-02fd-40eb-8d55-fe9254920fac
dropbox-antifreeze-fULr5a dropbox-antifreeze-Rhin0R sddm-auth20630d6d-3f4c-43c4-9136-c769a96d5737 sddm-auth7c386c4b-8e64-47bf-b779-7e34336332d7 systemd-private-d737e9d1250d432583b5fbaa43061e33-ntpd.service-Na0npM
dropbox-antifreeze-G1E33u dropbox-antifreeze-S8z5al sddm-auth246f2b73-13cb-4f9c-b419-273ae0590404 sddm-auth7cfdab94-af67-4539-8a86-989772083098 systemd-private-d737e9d1250d432583b5fbaa43061e33-rtkit-daemon.service-XZkzdP
dropbox-antifreeze-g1PFuf dropbox-antifreeze-SkKl0a sddm-auth30c6a774-84cf-44f9-8999-1cbd1012e931 sddm-auth853d32da-9119-4a18-bb8e-8eb1f005b5ad temp.ymp
dropbox-antifreeze-GJ5v8U dropbox-antifreeze-sTxHNa sddm-auth3488d0e9-f8d9-4907-8214-25b867ecb5e0 sddm-auth90481614-2e4f-418a-9ad9-b1b45399f261 tmpaddon
dropbox-antifreeze-gnPsH8 dropbox-antifreeze-tyxdXF sddm-auth362e86fc-95c3-4728-a634-2a08d83a1e4b sddm-auth91ce6363-84db-4603-88a2-61a0b5fe35c5 xauth-1000-_0
dropbox-antifreeze-GxxHOv dropbox-antifreeze-xAiwiA sddm-auth37f864ec-18f2-4de7-a8e9-e48f3bb071d1 sddm-auth9831f037-9380-4754-a76c-463c2f391b94 xauth.XXXXALdzJ1
dropbox-antifreeze-H0atoE dropbox-antifreeze-xE6Zgx sddm-auth3bbe437c-7a82-4da8-a0de-a3dba30a94cc sddm-autha81bcc79-6013-4f9d-a3b1-10b90e1aec89 xauth.XXXXRzqdjv
dropbox-antifreeze-I61YNX dropbox-antifreeze-XFpoxZ sddm-auth3d619d62-7e43-44fc-849a-70252e1884f4 sddm-autha940faad-9c20-414b-a147-2a020a40711a YaST2-06342-f9cnVX
dropbox-antifreeze-JbWwHD dropbox-antifreeze-ywCmiJ sddm-auth4128a858-3dc0-4105-87fc-e831b991b2d2 sddm-authb757c0e5-afe5-4247-b858-2765056072e9 YaST2-12901-Kf5EeT
dropbox-antifreeze-jDun1K dumps sddm-auth41b4194b-617f-4556-92e1-0b8982ff351b sddm-authc0c3c8db-81a5-4872-a3cb-bffb84729f7f
dropbox-antifreeze-kcEyJT kdeconnect sddm-auth4d071895-0172-444c-a1f2-b3fda077c0c6 sddm-authc1fa91f1-26fd-4745-a15d-f33395f4fe11
fleamour@X250:/tmp> cd /
fleamour@X250:/> sudo du -h --max-depth=1
26M ./etc
7.4G ./.snapshots
112M ./boot
178M ./opt
0 ./srv
780K ./tmp
6.3G ./usr
29G ./var
92G ./home
8.0K ./dev
du: cannot access './proc/29033/task/29033/fd/4': No such file or directory
du: cannot access './proc/29033/task/29033/fdinfo/4': No such file or directory
du: cannot access './proc/29033/fd/3': No such file or directory
du: cannot access './proc/29033/fdinfo/3': No such file or directory
0 ./proc
0 ./sys
du: cannot access './run/user/1000/gvfs': Permission denied
18M ./run
2.1M ./bin
1.1G ./lib
11M ./lib64
0 ./mnt
19M ./root
12M ./sbin
0 ./selinux
135G .
Hi
So it could be logs and coredumps perhaps?
Did you run the btrfs maintenance routine (btrfs-balance)?
/etc/cron.weekly/btrfs-balance.sh
or the softlink source
/usr/share/btrfsmaintenance/btrfs-balance.sh
coredumpctl list
du -sh /var/log
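If those come back small, ranking everything under /var by size should show where the 29G went; something like:
su -c 'du -h /var | sort -rh | head -n 20'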
fleamour@X250:/> /etc/cron.weekly/btrfs-balance.sh
bash: /etc/cron.weekly/btrfs-balance.sh: No such file or directory
fleamour@X250:/> sudo /etc/cron.weekly/btrfs-balance.sh
Swipe your finger across the fingerprint reader
sudo: /etc/cron.weekly/btrfs-balance.sh: command not found
fleamour@X250:/> sudo /usr/share/btrfsmaintenance/btrfs-balance.sh
Before balance of /
Data, single: total=39.25GiB, used=39.23GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=736.00MiB, used=302.75MiB
GlobalReserve, single: total=33.44MiB, used=0.00B
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 43G 43G 19M 100% /
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=1
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=5
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=10
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=20
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=30
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=40
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=50
Done, had to relocate 0 out of 49 chunks
Done, had to relocate 0 out of 49 chunks
ERROR: error during balancing '/': No space left on device
There may be more info in syslog - try dmesg | tail
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=1
SYSTEM (flags 0x2): balancing, usage=1
ERROR: error during balancing '/': No space left on device
There may be more info in syslog - try dmesg | tail
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=5
SYSTEM (flags 0x2): balancing, usage=5
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=10
SYSTEM (flags 0x2): balancing, usage=10
Done, had to relocate 1 out of 48 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=20
SYSTEM (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 48 chunks
Dumping filters: flags 0x6, state 0x0, force is off
METADATA (flags 0x2): balancing, usage=30
SYSTEM (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 48 chunks
After balance of /
Data, single: total=39.25GiB, used=39.23GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=512.00MiB, used=302.86MiB
GlobalReserve, single: total=33.52MiB, used=0.00B
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 43G 43G 254M 100% /
fleamour@X250:/> sudo coredumpctl list
No coredumps found.
fleamour@X250:/> sudo du -sh /var/log
46M /var/log
Hi
So snapshots are 7.4G, but it looks like something in /var is using up the space;
29G ./var
Thanks. The /.snapshots directory's size looks normal to me; the /var directory's doesn't. So
cd /var
su -c 'du -h --max-depth=1'
where you could also use ‘find’ to check /var for huge files or directories.
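e.g. something along these lines (the 500M threshold is only an example):
su -c 'find /var -type f -size +500M -exec ls -lh {} +'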
Thanks for your help. Nuked it from orbit, just to be sure. If this happens again I won't zypper dup 5-7 times to test whether it's fixed. No space on root & config scripts not run when aborted = borked system that couldn't even run a console. Don't attribute to malice what is stupidity?!? But I won't trust a home repo any time soon; /var filling up is a common problem when Googled.
Hi
No issues here with /var (< 1GB in use)? So are you running some KVM virtual machines and haven't put /var/lib/libvirt on its own partition?
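A quick check, assuming the default image directory:
sudo du -sh /var/lib/libvirt/images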
That'd do it then? Got Kali & Win 10 Pro KVMs. I'd better Google that, unless you could kindly gift me a cut & paste job. This would all be moot if I had a Clonezilla backup? Can Clonezilla now handle Tumbleweed? When I tried it out years back I had a problem restoring a disk image; it restored, but no boot. Failure the best teacher is. ~ Yoda.
Hi
If you recursively copy all the files in /var/lib/libvirt to external storage;
su -
cp -ar /var/lib/libvirt <target>
Add a /var/lib/libvirt partition/mountpoint and cp -ar it all back…
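A rough sketch of the whole move, assuming a freshly created partition (/dev/sdX1 and the backup path are examples; substitute your own device and target):
su -
systemctl stop libvirtd.service
cp -ar /var/lib/libvirt /mnt/backup/          # copy out to external storage (example path)
rm -rf /var/lib/libvirt/*                     # free the space on the root filesystem
mkfs.ext4 /dev/sdX1                           # format the new partition (example device)
echo '/dev/sdX1 /var/lib/libvirt ext4 defaults 0 2' >> /etc/fstab
mount /var/lib/libvirt
cp -ar /mnt/backup/libvirt/. /var/lib/libvirt/
systemctl start libvirtd.service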
Here is my Tumbleweed system layout;
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 14.9G 0 disk
├─sda1 8:1 0 512M 0 part /boot
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 8G 0 part [SWAP]
sdb 8:16 0 298.1G 0 disk
└─sdb1 8:17 0 298.1G 0 part /var/lib/libvirt
sdc 8:32 0 298.1G 0 disk
├─sdc1 8:33 0 40G 0 part /
├─sdc2 8:34 0 40G 0 part
└─sdc3 8:35 0 218.1G 0 part /data
Thanks, something to practise.