No space left on device

Hi
I have a problem with most things I do: I get an error message that pops up. For example, when I insert a USB drive:

[1109.197841] sd 10:0:0:0: [sdc] No caching mode page found
[1109.197841] sd 10:0:0:0: [sdc]  Assuming drive cache: write through

scp
scp: /mnt/vids/Arduino.mp4 tom@10.0.0.49: No space left on device 100% 100MB 13.1MB/s 00:08

[858.822573] systemd-journald[413]: Failed to create new system journal: No space left on device

I thought this might help. Is this normal?

Leap-Server:~> df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs          tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs          tmpfs     3.9G  2.0M  3.9G   1% /run
tmpfs          tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda6      btrfs      41G   40G   68K 100% /
/dev/sdb2      ext4      573G   70M  544G   1% /mnt/Videos
/dev/sda6      btrfs      41G   40G   68K 100% /.snapshots
/dev/sda6      btrfs      41G   40G   68K 100% /var/spool
/dev/sda6      btrfs      41G   40G   68K 100% /var/crash
/dev/sda6      btrfs      41G   40G   68K 100% /srv
/dev/sda6      btrfs      41G   40G   68K 100% /usr/local
/dev/sda6      btrfs      41G   40G   68K 100% /var/opt
/dev/sda6      btrfs      41G   40G   68K 100% /tmp
/dev/sda6      btrfs      41G   40G   68K 100% /var/tmp
/dev/sda6      btrfs      41G   40G   68K 100% /boot/grub2/x86_64-efi
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/mysql
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/named
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/pgsql
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/libvirt/images
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/mariadb
/dev/sda6      btrfs      41G   40G   68K 100% /var/cache
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/machines
/dev/sda6      btrfs      41G   40G   68K 100% /boot/grub2/i386-pc
/dev/sda6      btrfs      41G   40G   68K 100% /opt
/dev/sda6      btrfs      41G   40G   68K 100% /var/log
/dev/sda6      btrfs      41G   40G   68K 100% /var/lib/mailman
/dev/sda7      xfs       423G  1.8G  421G   1% /home
tmpfs          tmpfs     795M     0  795M   0% /run/user/0
tmpfs          tmpfs     795M     0  795M   0% /run/user/1000

All of the /dev/sda6 mounts are at 100%. I also have to press Ctrl+C to get back to the prompt.
I should mention this is the minimal install; I think it's called Leap 42 Server.
Your help is appreciated.
Thank you

Hi
Have a look at this thread;
https://forums.opensuse.org/showthread.php/527916-Root-partition-is-full-How-can-I-free-space-and-install-new-software

I see that your root partition (/ on /dev/sda6) is 40 GB.

I had the same problem. ( https://forums.opensuse.org/showthread.php/523773-37GB-2GB-free-no-space-left-on-device-snapper-fills-partition-how-to-configure-stop-snapper )

The btrfs file system does not cope well with as little disk space as 40 GB.

My opinion is clear: re-install your system and use ext4 instead of btrfs.

I thought that my 42.2 would hold out, but it failed a few weeks ago and I lost some important personal data. I also lost two days understanding the problem, retrieving my data, and re-installing.

So, with a small root partition, forget about btrfs and use ext4, since btrfs cannot manage within 40 GB.

Another solution: use btrfs, but reinstall with only one very large partition covering your whole disk, i.e. no separate /home.

I'd have to disagree on the 40 GB. I have 26 GB total with 8 GB free; a few extra GB of space would be more optimal.

  1. sudo btrfs filesystem usage -h / (to get a better idea of usage)
  2. sudo snapper list (to list the snapshots)
  3. find out what's taking so much space (see the sketch after this list)
  4. running out of disk space is not good on btrfs (sort the problem out before this occurs) [but it can be recovered from]
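For step 3, a minimal sketch (assuming GNU du and sort; the -x flag stops du crossing into other filesystems or btrfs subvolumes):

# list the largest top-level directories on the root subvolume only
sudo du -x -d1 -h / 2>/dev/null | sort -h | tail -n 15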

Thank you, Christophe_deR and ndc33. I looked at the link posted above; in the end I only gained 416 KB. ndc33, I think I will give your suggestion a try first. I don’t want to reinstall because I have an encrypted partition on another disk in my system. Thank you, I will probably need more advice. PS: thank you malcomlewis.

Hi
So what is the output from;


snapper list
btrfs fi usage /

snapper list
Type   | # | Pre # | Date                     | User | Cleanup | Description           | Userdata
-------+---+-------+--------------------------+------+---------+-----------------------+--------------
single | 0 |       |                          | root |         | current               |
single | 1 |       | Mon Oct  2 12:58:14 2017 | root |         | first root filesystem |
single | 2 |       | Mon Oct  2 13:04:27 2017 | root | number  | after installation    | important=yes
pre    | 3 |       | Sun Oct  8 15:17:53 2017 | root | number  | yast firewall         |
post   | 4 | 3     | Sun Oct  8 15:22:32 2017 | root | number  |                       |

Leap-Server:~> btrfs fi usage /
Overall:
    Device size:          40.00GiB
    Device allocated:          40.00GiB
    Device unallocated:           1.00MiB
    Device missing:             0.00B
    Used:              39.65GiB
    Free (estimated):         252.00KiB    (min: 252.00KiB)
    Data ratio:                  1.00
    Metadata ratio:              2.00
    Global reserve:          48.00MiB    (used: 0.00B)

Data,single: Size:39.44GiB, Used:39.44GiB
   /dev/sda6      39.44GiB

Metadata,DUP: Size:256.00MiB, Used:110.20MiB
   /dev/sda6     512.00MiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sda6      64.00MiB

Unallocated:
   /dev/sda6       1.00MiB

Well, it does not look like Snapper is the culprit. Maybe a runaway log file???

On Tue 07 Nov 2017 01:16:01 AM CST, gogalthorp wrote:

Well, it does not look like Snapper is the culprit. Maybe a runaway log
file???

Hi
Or core dumps…

@OP can you check coredumps, clean out the logs and rebalance;


coredumpctl list
journalctl --vacuum-time=2d
/etc/cron.weekly/btrfs-balance
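
As a quick check before vacuuming (an aside, assuming systemd's journalctl), you can first see how much space the journal actually occupies:

journalctl --disk-usage              # report space used by the journal files
journalctl --vacuum-time=2d          # then keep only the last 2 days, as above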


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE Leap 42.2|GNOME 3.20.2|4.4.90-18.32-default
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

Hi,
systemctl failed to start at boot; I don’t know why, it just happened when I booted today.

coredumpctl returned: No coredumps found

Before balance of /
Data, single: total=39.44GiB, used=39.41GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=256.00MiB, used=116.28MiB
GlobalReserve, single: total=48.00MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6        43G   43G   27M 100% /
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=1
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=5
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=10
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=20
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=30
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=40
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=50
Done, had to relocate 0 out of 49 chunks
Done, had to relocate 0 out of 49 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=1
  SYSTEM (flags 0x2): balancing, usage=1
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=5
  SYSTEM (flags 0x2): balancing, usage=5
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=10
  SYSTEM (flags 0x2): balancing, usage=10
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=20
  SYSTEM (flags 0x2): balancing, usage=20
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=30
  SYSTEM (flags 0x2): balancing, usage=30
After balance of /
Data, single: total=39.44GiB, used=39.41GiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, DUP: total=256.00MiB, used=116.28MiB
GlobalReserve, single: total=48.00MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6        43G   43G   27M 100% /

This was suggested at the end of one of the commands; I hope it is useful:
dmesg | tail

[  544.476414] BTRFS info (device sda6): 1 enospc errors during balance
[  544.624226] BTRFS info (device sda6): 2 enospc errors during balance
[  544.818737] BTRFS info (device sda6): 2 enospc errors during balance
[  545.027225] BTRFS info (device sda6): 2 enospc errors during balance
[  545.207055] BTRFS info (device sda6): 2 enospc errors during balance
[  575.462068] BTRFS info (device sda6): 1 enospc errors during balance
[  575.676359] BTRFS info (device sda6): 2 enospc errors during balance
[  575.861161] BTRFS info (device sda6): 2 enospc errors during balance
[  576.058713] BTRFS info (device sda6): 2 enospc errors during balance
[  576.211070] BTRFS info (device sda6): 2 enospc errors during balance

Thank you

All allocated space is used up and nothing more can be allocated. What do you expect to balance here? You need to find out what consumes space.


1 enospc errors during balance

This is probably due to being out of space. Btrfs uses B-trees, which can get out of balance, but that does not use up space; it just means one side of the tree has grown much longer and is therefore slower. Balancing adjusts the starting point of the tree so each branch is approximately the same length.

Something is using up space, and we have determined it is NOT Snapper. Use du to see which directory is overly large (man du for details).

Suggestion: things get ugly once all the space has gone, so perhaps look at freeing what you can; the 2 snapshots would be a start (see the sketch below). I would then not delay in tracking down whatever has stolen all the space.
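
A sketch of the snapshot cleanup (the numbers 3 and 4 are the pre/post pair from the snapper list output above; double-check them first):

sudo snapper list          # confirm which snapshots are still there
sudo snapper delete 3 4    # remove the pre/post pair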

arvidjaar
That was suggested in a previous post by malcomlewis; sorry if I misunderstood his post.

Well, I have been through the 40 GB and there is nothing taking up the space except the /.snapshots directory. I used this:

find / -type f -size +20M -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'

I have also deleted 2 of the snapshots per the previous post, the pre and post pair (3 and 4), and I still have no space!!!

PS What do the 5 yellow stars mean?
I think I am going to start again. Is it possible to keep my encrypted partition that is on another HDD?

Hi
If you delete them, you need to run the balance cron job again.
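
For example (a sketch; on openSUSE the weekly script is provided by the btrfsmaintenance package, and the direct balance commands are an alternative if you prefer to run it by hand):

sudo sh /etc/cron.weekly/btrfs-balance    # re-run the weekly balance script
# or drive balance directly, starting with the emptiest chunks:
sudo btrfs balance start -dusage=5 /
sudo btrfs balance start -dusage=25 /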


I usually start with “du -sh /*” and drill down, but it can be challenging with btrfs.

Please show full output of

btrfs su li -aqu /
btrfs qgroup show -pc /

Hi

qgroupid         rfer         excl parent  child 
--------         ----         ---- ------  ----- 
0/5          16.00KiB     16.00KiB ---     ---  
0/257        16.00KiB     16.00KiB ---     ---  
0/258        16.00KiB     16.00KiB ---     ---  
0/259        39.44GiB     37.99GiB ---     ---  
0/260         2.38MiB      2.38MiB ---     ---  
0/261        16.00KiB     16.00KiB ---     ---  
0/262        16.00KiB     16.00KiB ---     ---  
0/263        16.00KiB     16.00KiB ---     ---  
0/264        16.00KiB     16.00KiB ---     ---  
0/265        16.00KiB     16.00KiB ---     ---  
0/266         9.51MiB      9.51MiB ---     ---  
0/267        16.00KiB     16.00KiB ---     ---  
0/268        16.00KiB     16.00KiB ---     ---  
0/269        16.00KiB     16.00KiB 255/269 ---  
0/270        16.00KiB     16.00KiB ---     ---  
0/271        16.00KiB     16.00KiB ---     ---  
0/272        16.00KiB     16.00KiB ---     ---  
0/273        16.00KiB     16.00KiB ---     ---  
0/274        16.00KiB     16.00KiB ---     ---  
0/275        10.30MiB     10.30MiB ---     ---  
0/276        16.00KiB     16.00KiB ---     ---  
0/277        48.00KiB     48.00KiB ---     ---  
0/278        16.00KiB     16.00KiB ---     ---  
0/281         1.48GiB     27.62MiB 1/0     ---  
1/0           1.48GiB     27.62MiB ---     0/281
255/269      16.00KiB     16.00KiB ---     0/269
ID 257 gen 103 top level 5 parent_uuid - uuid 80213dfe-6148-6148-ad63-dfe3e96c27aa path <FS_TREE>/@
ID 258 gen 40926 top level 257 parent_uuid - uuid 54184e3e-b22e-784b-b377-106f8b2db2da path <FS_TREE>/@/.snapshots
ID 259 gen 40957 top level 258 parent_uuid 80213dfe-6148-6148-ad63-dfe3e96c27aa uuid 06255e13-e7f7-e948-8dc5-ee67c625590e path <FS_TREE>/@/.snapshots/1/snapshot
ID 260 gen 40920 top level 257 parent_uuid - uuid 24043586-7147-e444-9a6d-2124441bf0f0 path <FS_TREE>/@/boot/grub2/i386-pc
ID 261 gen 40920 top level 257 parent_uuid - uuid e672ecf9-2ed6-1547-a851-160e6c54a2de path <FS_TREE>/@/boot/grub2/x86_64-efi
ID 262 gen 40918 top level 257 parent_uuid - uuid 7a4e6381-15f9-9e4c-bfdd-1e824e47b8a3 path <FS_TREE>/@/opt
ID 263 gen 40920 top level 257 parent_uuid - uuid b5ca7a45-1adb-004a-bc0c-fdb29f1437ee path <FS_TREE>/@/srv
ID 264 gen 40957 top level 257 parent_uuid - uuid 4697f642-5372-1a48-aab2-935f6fcc795a path <FS_TREE>/@/tmp
ID 265 gen 40920 top level 257 parent_uuid - uuid f4e47b7a-1894-8e4e-89da-9f457b356878 path <FS_TREE>/@/usr/local
ID 266 gen 40921 top level 257 parent_uuid - uuid f64caa20-c9c8-8c4f-a0d2-0ab82e06f35d path <FS_TREE>/@/var/cache
ID 267 gen 40921 top level 257 parent_uuid - uuid e3374831-aa82-8243-9627-5e6d77ed5427 path <FS_TREE>/@/var/crash
ID 268 gen 40921 top level 257 parent_uuid - uuid d9fdfbb7-fa60-264a-85f5-caf384a3813e path <FS_TREE>/@/var/lib/libvirt/images
ID 269 gen 40921 top level 257 parent_uuid - uuid b921b2b1-6613-fb47-9254-20b58adc59dd path <FS_TREE>/@/var/lib/machines
ID 270 gen 40921 top level 257 parent_uuid - uuid 04f9713a-a9a8-9f4d-8c18-b5c9e3e5961c path <FS_TREE>/@/var/lib/mailman
ID 271 gen 40921 top level 257 parent_uuid - uuid 67b92cd6-9909-eb46-abd2-d7e3c8e5dac0 path <FS_TREE>/@/var/lib/mariadb
ID 272 gen 40921 top level 257 parent_uuid - uuid 9d9d32d5-f8a0-c543-b8fb-8f193edc789d path <FS_TREE>/@/var/lib/mysql
ID 273 gen 40921 top level 257 parent_uuid - uuid 07599647-8e67-1549-a27a-5360a74e115c path <FS_TREE>/@/var/lib/named
ID 274 gen 40921 top level 257 parent_uuid - uuid 6546814e-d74b-7a42-8cb8-5fd4be797f97 path <FS_TREE>/@/var/lib/pgsql
ID 275 gen 40959 top level 257 parent_uuid - uuid a20b3cab-12e6-dc4c-8861-8159c3eef005 path <FS_TREE>/@/var/log
ID 276 gen 40921 top level 257 parent_uuid - uuid 6e993f16-9d3f-e84a-a57f-e7b8d26d194b path <FS_TREE>/@/var/opt
ID 277 gen 40960 top level 257 parent_uuid - uuid 982c5f80-5069-3b46-9094-b5b2de61a865 path <FS_TREE>/@/var/spool
ID 278 gen 40921 top level 257 parent_uuid - uuid a3e2157d-a99c-7949-ab45-b2a0e630ee96 path <FS_TREE>/@/var/tmp
ID 281 gen 548 top level 258 parent_uuid 06255e13-e7f7-e948-8dc5-ee67c625590e uuid f1f4bcc1-c436-3041-81e2-82c581c8f89b path <FS_TREE>/@/.snapshots/2/snapshot

Subvolume 259 is your actual root filesystem. As was already said multiple times, it is full - it consumes all 40 GB of your storage. There is one snapshot, but it holds a negligible amount of unique data (27 MB), so removing it is not going to change anything.

As you have already been told, you need to find out what consumes space on the root filesystem. No amount of voodoo dancing is going to fix it.

there is nothing taking up the space except the /.snapshots directory

Yes, this is exactly what is taking space.

I have found what is taking all the space!

It turns out that an HDD that I have installed cannot be written to. The data that I have tried to write to that HDD is being written to the mount point instead,
in this case /mnt/Videos.
Now I just have to figure out what is causing that problem.
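
One way to confirm that (a sketch, assuming /mnt/Videos has an entry in /etc/fstab): unmount the disk and see what is still visible under the mount point; anything reported then lives on the root filesystem, not on the HDD.

sudo umount /mnt/Videos
sudo du -sh /mnt/Videos    # data counted here is on /, hidden while the disk is mounted
sudo mount /mnt/Videos     # remount via the fstab entry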

Thank you to everyone who helped me, it is very much appreciated.

On 13/11/2017 at 02:46, Subzero01 wrote:
>
> I have -found- what is taking all the space!
>
> It turns out that an HDD that I have installed is not able to be
> written to. The data that I have tried to write to that HDD is being
> written to the mount point

This is a classic problem, usually triggered by a script. You have to find some
way to test whether the disk is mounted (I don’t know how).
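
One common approach (a sketch; mountpoint is part of util-linux, and the file paths here are only examples) is to guard the write inside the script:

#!/bin/sh
# refuse to write unless something is actually mounted on /mnt/Videos
if mountpoint -q /mnt/Videos; then
    cp /path/to/source.mp4 /mnt/Videos/    # hypothetical source file
else
    echo "/mnt/Videos is not mounted, not writing" >&2
    exit 1
fi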

good luck :-)
jdd