btrfs issue: advice needed

Greetings!!

I assume I made a mistake installing my openSUSE 13.2 with a btrfs filesystem on the root partition…
For now I need advice concerning the situation I’m in and the “solutions” I’m considering.

Situation:

I have an SSD used by GRUB to dual-boot with Windows 7 (I know, nobody is perfect; I need Windows for making music).
GRUB is located on /dev/sda1 on the 120 GB SSD.

I have more than 30 GB for the root filesystem ‘/’… I thought it would be enough… but I was wrong :{

I want to use a new 240 GB SSD, /dev/sdc, to “extend” the original root filesystem… but I have some questions and fears.

When I saw that the root filesystem was at 85% used disk space, I went into the /root directory to move all the personal data to another partition…
But neither df -h nor btrfs fi df -h / showed any change → more than 4 GB of data were moved; I had assumed the free disk space would go from
6.2 GB to at least 15 GB… but that wasn’t the case…

What is wrong with my understanding of “moving data from one partition to another”?

I don’t have sufficient knowledge of the btrfs filesystem, and I’m slowly starting to look for a way to switch my system back to ext4.
But for now I’m in a hurry, so I have to find a way to keep the system from crashing like the “episode” when btrfs snapshots filled the whole partition and messed up my beautiful 13.2 system.

I configured snapper like this:

# subvolume to snapshot
SUBVOLUME="/"

# filesystem type
FSTYPE="btrfs"   


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""

# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"


# run daily number cleanup
NUMBER_CLEANUP="yes"

# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="1"
NUMBER_LIMIT_IMPORTANT="2"


# create hourly snapshots
TIMELINE_CREATE="no"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="2"
TIMELINE_LIMIT_DAILY="15"
TIMELINE_LIMIT_MONTHLY="2"
TIMELINE_LIMIT_YEARLY="1" 


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"  

# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"

The main idea, though I don’t know if it is a good one, is to:

  • clone the “/” filesystem to the 240 GB SSD using the btrfs utilities that seem to support such operations; I skimmed some tutorials but don’t want to try anything I’m not sure is accurate in this context.
  • use the original 40 GB SSD partition and mount it at /root.

The underlying questions are:

Is that a good idea?

Results of the fdisk -l command:
http://paste.opensuse.org/70205041

View of the GParted application:
http://paste.opensuse.org/75206360

Thank you for your patience and enlightenment…

Well, you could disable snapshots completely; then 30 GB on / should be plenty with btrfs too, and there’s no danger of filling it up with snapshots.

Either uninstall snapper completely, or see here (section “Customize the setup”->“Disabling/Enabling Snapshots”)
https://www.suse.com/documentation/sles-12/book_sle_admin/index.html?page=/documentation/sles-12/book_sle_admin/data/sec_snapper_setup.html

Of course you cannot take advantage of snapshots then, but you can’t with ext4 either (although snapper apparently does have support for ext4 too).
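
If I remember that doc section correctly (please verify against the link above), it comes down to roughly two things:

# in /etc/snapper/configs/root: no hourly timeline snapshots
TIMELINE_CREATE="no"

# and to stop the snapshots taken on every zypper/YaST transaction:
zypper remove snapper-zypp-plugin

Your posted config already has TIMELINE_CREATE="no", so in your case it is the zypp snapshots that eat the space.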

More information concerning the “features” I was talking about without giving any details: I may use the multi-device support (the devid entries below) to extend a btrfs filesystem onto a partition located on another device.

I got these results when I executed btrfs filesystem show:


Label: none  uuid: 32a7e23d-5a7f-43ec-8f71-b3144fd4b886
        Total devices 1 FS bytes used 33.10GiB
        devid    1 size 40.00GiB used 40.00GiB path /dev/sda6

Label: none  uuid: adb86e35-7aae-4575-9874-673d4a251fe1
        Total devices 1 FS bytes used 2.52GiB
        devid    1 size 11.76GiB used 11.76GiB path /dev/sda7

Label: 'DATAS1'  uuid: 2cb28a00-35b4-4f22-95c8-2781dee5b201
        Total devices 1 FS bytes used 67.50GiB
        devid    1 size 256.00GiB used 71.04GiB path /dev/sdb1

My idea is to use /dev/sdc to extend the ‘/’ mount point, located on /dev/sda6 for now.

I saw that if /dev/sdc were btrfs I could run commands like btrfs filesystem resize, but it seems I cannot use that to go from a single device to multiple devices…
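
(For reference, and as far as I understand it, resize only grows or shrinks the filesystem within one existing device; it cannot make it span a second one. Something like:

# grow the filesystem on devid 1 to fill its partition; still one device
btrfs filesystem resize 1:max /

would enlarge it in place, but pulling in /dev/sdc needs another mechanism, as discussed below.)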

I will first disable snapshots.
I think I should have started with that.

Should I use btrfs device add to add a new partition and attach it to the ‘/’ mount point?

Another strange thing that makes me believe I will never understand the “snapshot” feature of btrfs ^^

These are the results of the du -hsc command on the /.snapshots folder:


19G     1385
17G     1386
20G     1705
21G     1706
20G     1735
20G     1736
17G     1737
17G     1738
4.0K    983
4.0K    grub-snapshot.cfg
147G    total

How is all this stuff stored on a 120 GB SSD?
Can I use a command to check how it works? The numbers are probably connected to the snapper application.

These are the snapper list results:

single | 1385 |       | Thu May 12 18:44:03 2016 | root |         |                   |              
single | 1386 |       | Thu May 12 18:44:05 2016 | root |         |                   |              
pre    | 1705 |       | Mon Aug 29 18:11:21 2016 | root | number  | zypp(packagekitd) | important=yes
post   | 1706 | 1705  | Mon Aug 29 18:14:03 2016 | root | number  |                   | important=yes
pre    | 1735 |       | Sun Sep  4 18:51:59 2016 | root | number  | zypp(packagekitd) | important=no 
post   | 1736 | 1735  | Sun Sep  4 18:52:18 2016 | root | number  |                   | important=no 
pre    | 1737 |       | Wed Sep  7 14:53:41 2016 | root | number  | yast snapper      |              
post   | 1738 | 1737  | Wed Sep  7 14:54:40 2016 | root | number  |                   |     

I’m very surprised at the size of each subfolder…
May I delete all those snapshots?

snapper essentially keeps any changed block on disk. That is not the same as taking an image of all files; it only keeps what has changed. But what you see in that du listing is the size of what would be restored, not what is actually stored, so that is not real storage, just virtual.
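
If your btrfs-progs is recent enough, btrfs filesystem du can show exactly that: unlike plain du, it separates the data shared between snapshots from the data exclusive to each one (the path below assumes the usual snapper layout of /.snapshots/<number>/snapshot):

# per-snapshot totals: "Exclusive" is real extra space, "Set shared" is reused data
btrfs filesystem du -s /.snapshots/*/snapshot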

Do not just delete them: you must use snapper to maintain snapper’s data, but they can be removed with snapper commands.
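
For example, using the numbers from your snapper list output (snapper also accepts ranges, and deleting a pre/post pair together keeps things consistent):

snapper delete 1385 1386
snapper delete 1705-1706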

30 GB is a bit short for btrfs with snapper; 40 GB is the recommendation. But you can change the snapshot frequency and how much to keep in the snapper configuration.

I will rephrase my question: how can I extend a btrfs partition, /dev/sda6, with another btrfs partition, /dev/sdc1?

Is it possible? I think so, as I skimmed some interesting information, but since it was related to RAID technology I didn’t dig deeper.

The btrfs device add command seems to be the solution I need (and it is not exclusively for RAID), but I want to be sure of what I’m about to do.

cyrius:/ # mkfs.btrfs -d single -L extended /dev/sdc1
btrfs-progs v4.5.3+20160516
See http://btrfs.wiki.kernel.org for more information.

Detected a SSD, turning off metadata duplication.  Mkfs with -m dup if you want to force metadata duplication.
Performing full device TRIM (150.00GiB) ...
Label:              extended
UUID:               5c7ab5bc-5e10-423c-b90f-7daa86b4aed6
Node size:          16384
Sector size:        4096
Filesystem size:    150.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         single            8.00MiB
  System:           single            4.00MiB
SSD detected:       yes
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1   150.00GiB  /dev/sdc1


It wasn’t necessary, but at this point I wasn’t aware of that…
All that is really needed is to “simply” create the new partition using fdisk.

cyrius:/ # btrfs device usage /
/dev/sda6, ID: 1
   Device size:            40.00GiB
   Data,single:            37.99GiB
   Metadata,single:         2.01GiB
   System,single:           4.00MiB
   Unallocated:               0.00B

Just to see what the starting point was…

cyrius:/ # btrfs device add /dev/sdc1 /
/dev/sdc1 appears to contain an existing filesystem (btrfs).
Use the -f option to force overwrite.

Yes here we are…

cyrius:/ # btrfs device add -f /dev/sdc1 /
Performing full device TRIM (150.00GiB) ...

Instantly added… updating structures in the file system tree.

cyrius:/ # btrfs device usage /           
/dev/sda6, ID: 1
   Device size:            40.00GiB
   Data,single:            37.99GiB
   Metadata,single:         2.01GiB
   System,single:           4.00MiB
   Unallocated:               0.00B

/dev/sdc1, ID: 2
   Device size:           150.00GiB
   Unallocated:           150.00GiB

…it smells good, very good ^^

cyrius:/ # btrfs filesystem show
Label: none  uuid: 32a7e23d-5a7f-43ec-8f71-b3144fd4b886
        Total devices 2 FS bytes used 33.17GiB
        devid    1 size 40.00GiB used 40.00GiB path /dev/sda6
        devid    2 size 150.00GiB used 0.00B path /dev/sdc1

Label: none  uuid: adb86e35-7aae-4575-9874-673d4a251fe1
        Total devices 1 FS bytes used 2.52GiB
        devid    1 size 11.76GiB used 11.76GiB path /dev/sda7

Label: 'DATAS1'  uuid: 2cb28a00-35b4-4f22-95c8-2781dee5b201
        Total devices 1 FS bytes used 67.50GiB
        devid    1 size 256.00GiB used 71.04GiB path /dev/sdb1


…it smells VERY GOOD ^^

cyrius:/ # df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/sda6                    191G   34G  157G  18% /
devtmpfs                     7.9G  8.0K  7.9G   1% /dev
tmpfs                        7.9G   96K  7.9G   1% /dev/shm
tmpfs                        7.9G  2.3M  7.9G   1% /run
tmpfs                        7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda6                    191G   34G  157G  18% /var/spool
/dev/sda7                     12G  2.6G  9.1G  22% /home
/dev/sda6                    191G   34G  157G  18% /tmp
/dev/sda6                    191G   34G  157G  18% /opt
/dev/sda6                    191G   34G  157G  18% /.snapshots
/dev/sda6                    191G   34G  157G  18% /var/tmp
/dev/sda6                    191G   34G  157G  18% /var/opt
/dev/sda6                    191G   34G  157G  18% /var/lib/pgsql
/dev/sda6                    191G   34G  157G  18% /srv
/dev/sda6                    191G   34G  157G  18% /var/lib/named
/dev/sda6                    191G   34G  157G  18% /var/lib/mailman
/dev/sda6                    191G   34G  157G  18% /boot/grub2/x86_64-efi
/dev/sda6                    191G   34G  157G  18% /boot/grub2/i386-pc
/dev/sda6                    191G   34G  157G  18% /var/crash
/dev/sda6                    191G   34G  157G  18% /usr/local
/dev/sda6                    191G   34G  157G  18% /var/log
/dev/sdb2                    210G  139G   72G  67% /datas2
/dev/sdb1                    256G   68G  187G  27% /datas1
//192.168.0.3/exercices       14G  9.1G  4.8G  66% /datas1/nfsshares/exercices
192.168.0.3:/windows/Datas1  160G   34G  127G  22% /datas1/nfsshares/Datas1
192.168.0.3:/windows/Datas3  380G   53G  327G  14% /datas1/nfsshares/Datas3
192.168.0.3:/windows/Ntfs2   250G  239G   12G  96% /datas1/nfsshares/Ntfs2
192.168.0.3:/windows/System  4.3G  3.4G  850M  81% /datas1/nfsshares/System
192.168.0.3:/windows/Datas2  306G  234G   73G  77% /datas1/nfsshares/Datas2
192.168.0.3:/windows/Ntfs1   513G  431G   82G  84% /datas1/nfsshares/Ntfs1
192.168.0.3:/windows/Linux   276G  201G   76G  73% /datas1/nfsshares/Linux

Now I’m happy ^^
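
One optional follow-up that I did not run here (an assumption based on the btrfs documentation, so double-check it first): btrfs device add does not move existing data, so everything written so far stays on /dev/sda6 until a balance spreads the chunks across both devices:

btrfs balance start --full-balance /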

Is it a good idea to put the root user’s home folder (/root) on another partition via /etc/fstab?

I have a partition, /dev/sdc2, that I was about to use for the root user’s home folder…
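
A minimal sketch of what that fstab line could look like, assuming /dev/sdc2 has been formatted (ext4 here is only an example choice) and the current contents of /root have been copied over first:

# /etc/fstab: mount the second partition of the new SSD at /root
/dev/sdc2  /root  ext4  defaults  0  2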

It works…

You could also just force snapper to limit the number of snapshots to 10 by doing the following:

Open the terminal.

  1. Become root:

su

  2. Open the snapper configuration file with:

leafpad /etc/snapper/configs/root

The file will open; search for the line that contains

NUMBER_MIN_AGE="1800"

and change it to

NUMBER_MIN_AGE="0"

Now save the file and close it, but keep the terminal open for the big clean-up.

  3. In the terminal, type:

snapper cleanup number

You might want to wait for a while if you have installed and removed a lot of software and updates.

  4. After this, all you need to do is update GRUB with:

grub2-mkconfig -o /boot/grub2/grub.cfg

The system now stays clean, as you will only have a maximum of 10 snapshots available.