Hello,
Tonight my factory install said there were 3164 updates, so I installed them. I have the btrfs filesystem. My question is: after the “upgrade” my hard disk is much fuller than before; will that space be freed automatically? It went from 45% full to 67% full, with only about 9 GB left. I guess what I’m saying is: how can I keep updating if it runs me out of disk space?
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 30G 19G 9.5G 67% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 84K 2.0G 1% /dev/shm
tmpfs 2.0G 2.2M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
tmpfs 2.0G 2.2M 2.0G 1% /var/lock
tmpfs 2.0G 2.2M 2.0G 1% /var/run
/dev/sda2 30G 19G 9.5G 67% /.snapshots
/dev/sda2 30G 19G 9.5G 67% /var/tmp
/dev/sda2 30G 19G 9.5G 67% /var/spool
/dev/sda2 30G 19G 9.5G 67% /var/opt
/dev/sda2 30G 19G 9.5G 67% /var/log
/dev/sda2 30G 19G 9.5G 67% /var/lib/pgsql
/dev/sda2 30G 19G 9.5G 67% /var/lib/named
/dev/sda2 30G 19G 9.5G 67% /var/lib/mailman
/dev/sda2 30G 19G 9.5G 67% /var/crash
/dev/sda2 30G 19G 9.5G 67% /usr/local
/dev/sda2 30G 19G 9.5G 67% /tmp
/dev/sda2 30G 19G 9.5G 67% /srv
/dev/sda2 30G 19G 9.5G 67% /opt
/dev/sda2 30G 19G 9.5G 67% /boot/grub2/x86_64-efi
/dev/sda2 30G 19G 9.5G 67% /boot/grub2/i386-pc
/dev/sda3 44G 1.8G 42G 5% /home
On Wed 23 Jul 2014 03:16:01 AM CDT, KEA0463 wrote:
Hi
Have you configured snapper? Sounds like lots of snapshots…
btrfs filesystem show /
btrfs filesystem df /
snapper list
If the last command produces a long list, then look at adjusting the snapper configuration.
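For example, a minimal sketch of the kind of limits one could set in /etc/snapper/configs/root (assuming the default “root” config; the values below are illustrative, not the shipped defaults) so that the number and timeline cleanup algorithms keep fewer snapshots:
Code:
# /etc/snapper/configs/root (excerpt) - example limits only
# prune zypp pre/post snapshots more aggressively
NUMBER_CLEANUP="yes"
NUMBER_LIMIT="10"
NUMBER_LIMIT_IMPORTANT="10"

# keep fewer hourly/daily timeline snapshots
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"

# or stop hourly timeline snapshots entirely
#TIMELINE_CREATE="no"
The snapper cleanup cron jobs apply these limits the next time they run; check the existing file first, as the defaults vary between releases.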
--
Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.1 Kernel 3.11.10-17-desktop
btrfs filesystem show /
Label: none uuid: 77908c59-cc51-4d97-b4c6-ff12e2046cdd
Total devices 1 FS bytes used 17.18GiB
devid 1 size 29.19GiB used 20.04GiB path /dev/sda2
btrfs filesystem df /
Data, single: total=17.01GiB, used=16.40GiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=1.50GiB, used=801.88MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=272.00MiB, used=0.00B
snapper list
Type | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+-----+-------+--------------------------+------+----------+-------------------+--------------
single | 0 | | | root | | current |
pre | 31 | | Sat Jun 28 10:50:59 2014 | root | number | zypp(zypper) | important=yes
pre | 32 | | Sat Jun 28 10:56:59 2014 | root | number | zypp(packagekitd) | important=yes
pre | 33 | | Sat Jun 28 11:10:07 2014 | root | number | zypp(zypper) | important=yes
pre | 41 | | Sat Jun 28 11:53:13 2014 | root | number | zypp(zypper) | important=yes
pre | 42 | | Sat Jun 28 12:04:14 2014 | root | number | zypp(zypper) | important=yes
pre | 46 | | Sat Jun 28 15:03:05 2014 | root | number | zypp(zypper) | important=yes
post | 47 | 46 | Sat Jun 28 15:13:37 2014 | root | number | | important=yes
single | 698 | | Mon Jul 21 00:30:01 2014 | root | timeline | timeline |
single | 724 | | Tue Jul 22 00:30:01 2014 | root | timeline | timeline |
single | 737 | | Tue Jul 22 13:30:01 2014 | root | timeline | timeline |
single | 738 | | Tue Jul 22 14:30:01 2014 | root | timeline | timeline |
single | 739 | | Tue Jul 22 15:30:01 2014 | root | timeline | timeline |
single | 740 | | Tue Jul 22 16:30:02 2014 | root | timeline | timeline |
single | 741 | | Tue Jul 22 17:30:01 2014 | root | timeline | timeline |
pre | 742 | | Tue Jul 22 18:23:46 2014 | root | number | zypp(zypper) | important=yes
single | 743 | | Tue Jul 22 18:30:02 2014 | root | timeline | timeline |
single | 744 | | Tue Jul 22 19:30:02 2014 | root | timeline | timeline |
single | 745 | | Tue Jul 22 20:30:09 2014 | root | timeline | timeline |
post | 746 | 742 | Tue Jul 22 20:41:20 2014 | root | number | | important=yes
pre | 751 | | Tue Jul 22 21:23:37 2014 | root | number | zypp(packagekitd) | important=yes
pre | 754 | | Tue Jul 22 21:33:52 2014 | root | number | yast lan |
post | 755 | 754 | Tue Jul 22 21:34:50 2014 | root | number | |
pre | 756 | | Tue Jul 22 21:36:23 2014 | root | number | yast lan |
post | 757 | 756 | Tue Jul 22 21:38:09 2014 | root | number | |
pre | 758 | | Tue Jul 22 21:44:19 2014 | root | number | yast add-on |
single | 759 | | Tue Jul 22 21:45:02 2014 | root | timeline | timeline |
pre | 760 | | Tue Jul 22 21:46:10 2014 | root | number | zypp(y2base) | important=no
post | 761 | 760 | Tue Jul 22 21:46:31 2014 | root | number | | important=no
post | 762 | 758 | Tue Jul 22 21:47:45 2014 | root | number | |
single | 763 | | Tue Jul 22 22:45:02 2014 | root | timeline | timeline |
single | 764 | | Tue Jul 22 23:45:02 2014 | root | timeline | timeline |
single | 765 | | Wed Jul 23 00:45:01 2014 | root | timeline | timeline |
single | 766 | | Wed Jul 23 01:45:01 2014 | root | timeline | timeline |
single | 767 | | Wed Jul 23 02:45:01 2014 | root | timeline | timeline |
single | 768 | | Wed Jul 23 03:45:01 2014 | root | timeline | timeline |
Hi
That output all looks fine; you probably need to have a read of this:
https://btrfs.wiki.kernel.org/index.php/FAQ#Why_does_df_show_incorrect_free_space_for_my_RAID_volume.3F
Whilst that entry is RAID-centric, the same point applies: df output on a btrfs filesystem can be misleading.
AFAIK, once btrfs allocates space it doesn’t release the allocation, even though it may not all be in use. Once the update snapshots roll over (get cleaned up), the used= figure under Data in the btrfs filesystem df / output should drop.
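If the allocation stays high, a rough sketch of commands one could run (the snapshot range below is only an example taken from the list above) to prune old snapshots by hand and then re-check usage:
Code:
# run the cleanup algorithms now instead of waiting for the cron job
snapper cleanup number
snapper cleanup timeline

# or delete a specific range of old snapshots by number
snapper delete 31-47

# then compare what btrfs has allocated with what is actually used
btrfs filesystem show /
btrfs filesystem df /
df -h /
Note that btrfs may take a little while after the deletions before the freed space shows up.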