
Thread: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

  1. #1
    Join Date
    Jun 2017
    Location
    Australia
    Posts
    214

    Default Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Hi

    Tumbleweed 20170712 [at the moment], Root = 50 GB Btrfs.

    I had a shock today when i serendipitously checked Dolphin for something, & noticed that my 50 GB / partition had only 3.3 GB of free space remaining. Wondering what had happened, i then discovered that at some point [i don't know when or how] i seemed to have accidentally duplicated a large [~30 GB] directory from my separate /home partition into / as well. Realising that was redundant, i deleted it via Root Actions in Dolphin, but was confused to see that my / free space remained at only 3.3 GB.
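    [In hindsight, the explanation was that deleted files can remain referenced by Snapshots, so the space is not returned immediately. A quick way to see where the space has gone -- a sketch, assuming qgroups are enabled as they are on a default openSUSE install -- is:]
    Code:
    # allocation per class (Data / Metadata / System)
    sudo btrfs filesystem df /
    # per-subvolume & per-snapshot accounting [needs quota/qgroups enabled]
    sudo btrfs qgroup show /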

    Searching this forum for clues, as i suspected that Btrfs & Snapper might be the cause via excessive Snapshots, i found many helpful threads & posts, including:
    1. https://forums.opensuse.org/showthre...21#post2823621
    2. https://forums.opensuse.org/showthre...96#post2825196
    3. https://forums.opensuse.org/showthre...10#post2800310
    4. https://forums.opensuse.org/showthre...54#post2826854


    My initial Snapshot status:
    Code:
    linux-763v:~> sudo snapper list
    [sudo] password for root:  
    Type   | #   | Pre # | Date                          | User | Cleanup | Description           | Userdata      
    -------+-----+-------+-------------------------------+------+---------+-----------------------+--------------
    single | 0   |       |                               | root |         | current               |               
    pre    | 144 |       | Thu 22 Jun 2017 14:10:21 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 145 | 144   | Thu 22 Jun 2017 14:25:28 AEST | root | number  |                       | important=yes
    pre    | 335 |       | Fri 30 Jun 2017 11:37:45 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 336 | 335   | Fri 30 Jun 2017 12:03:17 AEST | root | number  |                       | important=yes
    pre    | 351 |       | Sun 02 Jul 2017 14:58:19 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 352 | 351   | Sun 02 Jul 2017 15:07:50 AEST | root | number  |                       | important=yes
    single | 422 |       | Tue 04 Jul 2017 18:32:26 AEST | root | number  | rollback backup of #1 | important=yes
    single | 423 |       | Tue 04 Jul 2017 18:32:27 AEST | root |         |                       |               
    pre    | 426 |       | Tue 04 Jul 2017 18:47:57 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 427 | 426   | Tue 04 Jul 2017 19:01:28 AEST | root | number  |                       | important=yes
    pre    | 481 |       | Mon 10 Jul 2017 16:34:42 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 482 | 481   | Mon 10 Jul 2017 17:03:20 AEST | root | number  |                       | important=yes
    pre    | 523 |       | Sun 16 Jul 2017 16:07:06 AEST | root | number  | yast firewall         |               
    post   | 524 | 523   | Sun 16 Jul 2017 16:14:14 AEST | root | number  |                       |               
    pre    | 525 |       | Sun 16 Jul 2017 16:14:27 AEST | root | number  | yast firewall         |               
    post   | 526 | 525   | Sun 16 Jul 2017 16:14:36 AEST | root | number  |                       |               
    pre    | 527 |       | Sun 16 Jul 2017 16:15:42 AEST | root | number  | yast sw_single        |               
    pre    | 528 |       | Sun 16 Jul 2017 16:17:47 AEST | root | number  | zypp(ruby)            | important=no  
    post   | 529 | 527   | Sun 16 Jul 2017 16:17:55 AEST | root | number  |                       |               
    pre    | 530 |       | Sun 16 Jul 2017 16:18:43 AEST | root | number  | yast sw_single        |               
    pre    | 531 |       | Sun 16 Jul 2017 16:19:35 AEST | root | number  | zypp(ruby)            | important=no  
    post   | 532 | 531   | Sun 16 Jul 2017 16:19:39 AEST | root | number  |                       | important=no  
    post   | 533 | 530   | Sun 16 Jul 2017 16:19:51 AEST | root | number  |                       |               
    pre    | 534 |       | Sun 16 Jul 2017 22:06:18 AEST | root | number  | yast sw_single        |               
    post   | 535 | 534   | Sun 16 Jul 2017 22:09:01 AEST | root | number  |                       |               
    pre    | 536 |       | Mon 17 Jul 2017 17:52:46 AEST | root | number  | yast snapper          |               
    linux-763v:~>
    
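    [Aside: stale snapshots can also be deleted by hand using a number range from the listing above -- a sketch only, not something i actually ran:]
    Code:
    sudo snapper delete 144-352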
    Looking in
    Code:
    /etc/cron.weekly
    i was [& still am] surprised that it was empty. Therefore i checked:
    Code:
    linux-763v:~> zypper info btrfsmaintenance
    Loading repository data...
    Reading installed packages...
    
    
    Information for package btrfsmaintenance:
    -----------------------------------------
    Repository     : Main Repository (OSS)                        
    Name           : btrfsmaintenance                             
    Version        : 0.3.1-2.1                                    
    Arch           : noarch                                       
    Vendor         : openSUSE                                     
    Installed Size : 46.3 KiB                                     
    Installed      : Yes                                          
    Status         : up-to-date                                   
    Source package : btrfsmaintenance-0.3.1-2.1.src               
    Summary        : Scripts for btrfs periodic maintenance tasks
    Description    :                                              
        Scripts for btrfs maintenance tasks like periodic scrub, balance, trim or defrag
        on selected mountpoints or directories.
    
    Question: How can this package be already installed yet that preceding directory be empty?
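    [A sketch of how one might check what the package should have put on disk, & whether those files survive verification:]
    Code:
    # list the files owned by the package
    rpm -ql btrfsmaintenance
    # verify the installed files against the RPM database
    rpm -V btrfsmaintenance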

    Following the referenced links i then forced its reinstallation:
    Code:
    linux-763v:~> sudo zypper in -f btrfsmaintenance
    [sudo] password for root:  
    Loading repository data...
    Reading installed packages...
    Forcing installation of 'btrfsmaintenance-0.3.1-2.1.noarch' from repository 'Main Repository (OSS)'.
    Resolving package dependencies...
    
    The following package is going to be reinstalled:
      btrfsmaintenance
    
    1 package to reinstall.
    Overall download size: 30.6 KiB. Already cached: 0 B. No additional space will be used or freed after the operation.
    Continue? [y/n/...? shows all options] (y): 
    Retrieving package btrfsmaintenance-0.3.1-2.1.noarch                                            (1/1),  30.6 KiB ( 46.3 KiB unpacked)
    Retrieving: btrfsmaintenance-0.3.1-2.1.noarch.rpm .................................................................[done (4.7 KiB/s)]
    Checking for file conflicts: ..................................................................................................[done]
    (1/1) Installing: btrfsmaintenance-0.3.1-2.1.noarch ...........................................................................[done]
    Additional rpm output:
    Updating /etc/sysconfig/btrfsmaintenance ...
    Refresh script btrfs-scrub.sh for monthly
    Refresh script btrfs-defrag.sh for none
    Refresh script btrfs-balance.sh for weekly
    Refresh script btrfs-trim.sh for none
    
    ...& was gratified to see that now btrfs-balance appeared in
    Code:
    /etc/cron.weekly
    I confirmed that yes my / really is almost full:
    Code:
    linux-763v:~> sudo btrfs filesystem show /
    [sudo] password for root:  
    Label: none  uuid: 59c063db-fa0d-4e1e-baa2-df255f4262fb
            Total devices 1 FS bytes used 46.54GiB
            devid    1 size 50.00GiB used 49.96GiB path /dev/sda3
    
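    [Note: "FS bytes used" is actual data, while "devid ... used" is space allocated to chunks; a fuller breakdown is available -- a sketch -- via:]
    Code:
    sudo btrfs filesystem usage /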
    The first of these commands, after a lag, presumably completed ok [given no error msg], but the second one worries me:
    Code:
    linux-763v:~> systemctl start btrfsmaintenance-refresh.service
    linux-763v:~> systemctl status btrfsmaintenance-refresh.service
    ● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
       Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    linux-763v:~>
    
    Question: WHY would it be inactive / disabled / dead, & how should i remedy this?
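    [What i later gathered: a refresh unit like this is typically Type=oneshot, so "inactive (dead)" after a successful run is normal; the "disabled" part is what needed fixing. The unit definition can be inspected -- a sketch -- with:]
    Code:
    systemctl cat btrfsmaintenance-refresh.service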

    Before manually running the two applicable cron jobs to get my free space back, i inspected /etc/snapper/configs/root. Here it is, still as-found apart from the obvious small edit i made as per another of Malcolm's links above:
    Code:
    # subvolume to snapshot
    SUBVOLUME="/"
    
    
    # filesystem type
    FSTYPE="btrfs"
    
    
    # btrfs qgroup for space aware cleanup algorithms
    QGROUP="1/0"
    
    
    # fraction of the filesystems space the snapshots may use
    SPACE_LIMIT="0.5"
    
    
    # users and groups allowed to work with config
    ALLOW_USERS=""
    ALLOW_GROUPS=""
    
    
    # sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
    # directory
    SYNC_ACL="no"
    
    
    # start comparing pre- and post-snapshot in background after creating
    # post-snapshot
    BACKGROUND_COMPARISON="yes"
    
    
    # run daily number cleanup
    NUMBER_CLEANUP="yes"
    
    
    # limit for number cleanup
    NUMBER_MIN_AGE="1800"
    #NUMBER_LIMIT="2-10"
    #NUMBER_LIMIT_IMPORTANT="4-10"
    # 17/7/17: I reduced the preceding as per Malcolm's https://forums.opensuse.org/showthread.php/521072-Running-out-of-space-in-root-partition?p=2800310#post2800310
    NUMBER_LIMIT="2-3"
    NUMBER_LIMIT_IMPORTANT="3-5"
    
    
    # create hourly snapshots
    TIMELINE_CREATE="no"
    
    
    # cleanup hourly snapshots after some time
    TIMELINE_CLEANUP="yes"
    
    
    # limits for timeline cleanup
    TIMELINE_MIN_AGE="1800"
    TIMELINE_LIMIT_HOURLY="10"
    TIMELINE_LIMIT_DAILY="10"
    TIMELINE_LIMIT_WEEKLY="0"
    TIMELINE_LIMIT_MONTHLY="10"
    TIMELINE_LIMIT_YEARLY="10"
    
    
    # cleanup empty pre-post-pairs
    EMPTY_PRE_POST_CLEANUP="yes"
    
    
    # limits for empty pre-post-pair cleanup
    EMPTY_PRE_POST_MIN_AGE="1800"
    Question: Should i change any other of the default settings in that file pls? [This installation is just on a personal laptop, with ~250 GB SSD].
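    [For what it's worth, these limits can also be changed without hand-editing the file, via snapper itself -- a sketch using the values i chose above:]
    Code:
    sudo snapper -c root set-config "NUMBER_LIMIT=2-3" "NUMBER_LIMIT_IMPORTANT=3-5"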

    Next i manually ran both cron-jobs:
    Code:
    linux-763v:~> sudo /etc/cron.daily/suse.de-snapper
    [sudo] password for root:  
    linux-763v:~> sudo /etc/cron.weekly/btrfs-balance
    Before balance of /
    Data, single: total=37.17GiB, used=14.32GiB
    System, single: total=32.00MiB, used=16.00KiB
    Metadata, single: total=768.00MiB, used=379.11MiB
    GlobalReserve, single: total=41.16MiB, used=0.00B
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3        54G   16G   38G  30% /
    Done, had to relocate 0 out of 43 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=1
    Done, had to relocate 5 out of 43 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=5
    Done, had to relocate 8 out of 38 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=10
    Done, had to relocate 0 out of 30 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=20
    Done, had to relocate 1 out of 30 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=30
    Done, had to relocate 2 out of 29 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=40
    Done, had to relocate 0 out of 27 chunks
    Dumping filters: flags 0x1, state 0x0, force is off
      DATA (flags 0x2): balancing, usage=50
    Done, had to relocate 1 out of 27 chunks
    Done, had to relocate 0 out of 26 chunks
    Dumping filters: flags 0x6, state 0x0, force is off
      METADATA (flags 0x2): balancing, usage=1
      SYSTEM (flags 0x2): balancing, usage=1
    Done, had to relocate 1 out of 26 chunks
    Dumping filters: flags 0x6, state 0x0, force is off
      METADATA (flags 0x2): balancing, usage=5
      SYSTEM (flags 0x2): balancing, usage=5
    Done, had to relocate 1 out of 26 chunks
    Dumping filters: flags 0x6, state 0x0, force is off
      METADATA (flags 0x2): balancing, usage=10
      SYSTEM (flags 0x2): balancing, usage=10
    Done, had to relocate 1 out of 26 chunks
    Dumping filters: flags 0x6, state 0x0, force is off
      METADATA (flags 0x2): balancing, usage=20
      SYSTEM (flags 0x2): balancing, usage=20
    Done, had to relocate 1 out of 26 chunks
    Dumping filters: flags 0x6, state 0x0, force is off
      METADATA (flags 0x2): balancing, usage=30
      SYSTEM (flags 0x2): balancing, usage=30
    Done, had to relocate 1 out of 26 chunks
    After balance of /
    Data, single: total=20.17GiB, used=14.32GiB
    System, single: total=32.00MiB, used=16.00KiB
    Metadata, single: total=768.00MiB, used=377.98MiB
    GlobalReserve, single: total=39.06MiB, used=0.00B
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3        54G   16G   38G  30% /
    linux-763v:~> 
    
    Question: Do those outputs look ok pls?
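    [From the dumped filters, the script evidently runs a series of balances with rising usage thresholds; the manual equivalent of its final data & metadata passes -- a sketch -- would be:]
    Code:
    # relocate only chunks no more than 50% (data) / 30% (metadata) full
    sudo btrfs balance start -dusage=50 -musage=30 /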

    <<continued in follow-on post>>

  2. #2
    Join Date
    Jun 2017
    Location
    Australia
    Posts
    214

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    It seemed to have done some good:
    Code:
    linux-763v:~> sudo btrfs filesystem show /                      
    [sudo] password for root:                      
    Label: none  uuid: 59c063db-fa0d-4e1e-baa2-df255f4262fb 
            Total devices 1 FS bytes used 14.69GiB                                     
            devid    1 size 50.00GiB used 20.96GiB path /dev/sda3 
    linux-763v:~>
    ...with this being the new status of the Snapshots:
    Code:
    linux-763v:~> sudo snapper list                                 
    [sudo] password for root:  
    Type   | #   | Pre # | Date                          | User | Cleanup | Description           | Userdata      
    -------+-----+-------+-------------------------------+------+---------+-----------------------+--------------
    single | 0   |       |                               | root |         | current               |               
    single | 422 |       | Tue 04 Jul 2017 18:32:26 AEST | root | number  | rollback backup of #1 | important=yes
    single | 423 |       | Tue 04 Jul 2017 18:32:27 AEST | root |         |                       |               
    pre    | 426 |       | Tue 04 Jul 2017 18:47:57 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 427 | 426   | Tue 04 Jul 2017 19:01:28 AEST | root | number  |                       | important=yes
    pre    | 481 |       | Mon 10 Jul 2017 16:34:42 AEST | root | number  | zypp(zypper)          | important=yes
    post   | 482 | 481   | Mon 10 Jul 2017 17:03:20 AEST | root | number  |                       | important=yes
    pre    | 540 |       | Mon 17 Jul 2017 18:08:41 AEST | root | number  | yast sw_single        |               
    post   | 541 | 540   | Mon 17 Jul 2017 18:09:58 AEST | root | number  |                       |               
    pre    | 542 |       | Mon 17 Jul 2017 18:34:33 AEST | root | number  | zypp(zypper)          | important=no  
    post   | 543 | 542   | Mon 17 Jul 2017 18:34:40 AEST | root | number  |                       | important=no  
    linux-763v:~>

    I do hope that the things i did tonight, per all the above, were correct... but even if so, i remain confused by the various "surprising" discoveries i made... hence i would really appreciate it if more experienced users here could answer the specific questions i inserted above, pls.

  3. #3
    Join Date
    Apr 2017
    Location
    Piemont Italy
    Posts
    128

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    I can't help you, but I can tell you that I don't trust Btrfs.
    I only used it because openSUSE made it the default.
    In my opinion, Btrfs should behave like ext4 by default and take no snapshots; then whoever turns snapshots on gets into trouble knowingly... imposing it as the default is wrong.

  4. #4
    Join Date
    Feb 2010
    Location
    Germany
    Posts
    1,030

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Quote Originally Posted by GooeyGirl View Post
    Question: How can this package be already installed yet that preceding directory be empty?
    The software engine behind openSUSE's and SUSE Linux Enterprise's ZYpp is Red Hat's RPM -- which has a database -- which sometimes, occasionally, not very often, needs to be rebuilt.
    • Further information related to the RPM database is available from the rpmdb(8) man page.
    • Additional information is available from Carla Schroder's "Linux Cookbook" published by O'Reilly: ISBN: 0-596-00640-3
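    Rebuilding, should it ever be needed, is a single "root" user command -- a sketch, for completeness:
    Code:
     # rpm --rebuilddb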

    Regardless of whether the RPM database needs to be rebuilt, there are tools to verify the consistency between the RPM database, the package dependencies, and the installed files and directory structures needed by the packages mentioned in the database:
    Code:
     > zypper verify --details
    Repository 'Packman Repository' is out-of-date. You can run 'zypper refresh' as root to update it.
    Loading repository data...
    Reading installed packages...
    
    Dependencies of all installed packages are satisfied.
     >
    
     # rpm --verify --all
    Please note that the RPM verify should be executed by the 'root' user, and that its output is more complete than that provided by ZYpp.

    Quote Originally Posted by GooeyGirl View Post
    Question: WHY would it be inactive / disabled / dead, & how should i remedy this?
    Search for the state of the suspect systemd units by means of the following (normal user) CLI commands:
    1. "systemctl list-unit-files" -- the output is paged through "less" (or "more") and can be searched with "/" . . .
    2. "systemctl status <suspect unit>"

    Then, with a "root" user CLI: "systemctl enable <unit>"; "systemctl start <unit>"; "systemctl status <unit>".
    Take note of any messages (which may be errors) presented by the systemctl "start" and "status" commands.

    Quote Originally Posted by GooeyGirl View Post
    Question: Should i change any other of the default settings in that file pls? [This installation is just on a personal laptop, with ~250 GB SSD].
    Normally no -- except if the default really, REALLY, doesn't fit the hardware and/or system configuration . . .

    Quote Originally Posted by GooeyGirl View Post
    Question: Do those outputs look ok pls?
    Yes.
    Personally, with a 1 TB SSHD, I tend to allocate 80 GB to the Btrfs "/" partition.

  5. #5
    Join Date
    Jun 2017
    Location
    Australia
    Posts
    214

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Many thanks - this was all a good help.

    Quote Originally Posted by dcurtisfra View Post
    Search for the state of the suspect systemd units by means of the following (normal user) CLI commands:
    1. "systemctl list-unit-files" -- the output is paged through "less" (or "more") and can be searched with "/" . . .
    2. "systemctl status <suspect unit>"

    Then, with a "root" user CLI: "systemctl enable <unit>"; "systemctl start <unit>"; "systemctl status <unit>".
    Take note of any messages (which may be errors) presented by the systemctl "start" and "status" commands.
    Hopefully, all this is ok...
    Code:
    linux-763v:~> sudo systemctl enable btrfsmaintenance-refresh.service
    [sudo] password for root: 
    Created symlink /etc/systemd/system/multi-user.target.wants/btrfsmaintenance-refresh.service → /usr/lib/systemd/system/btrfsmaintenance-refresh.service.
    
    
    linux-763v:~> sudo systemctl start btrfsmaintenance-refresh.service
    linux-763v:~> sudo systemctl status btrfsmaintenance-refresh.service
    ● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
       Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: disabled)
       Active: inactive (dead) since Tue 2017-07-18 11:57:12 AEST; 27s ago
      Process: 30044 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh (code=exited, status=0/SUCCESS)
     Main PID: 30044 (code=exited, status=0/SUCCESS)
    
    
    Jul 18 11:57:12 linux-763v systemd[1]: Starting Update cron periods from /etc/sysconfig/btrfsmaintenance...
    Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-scrub.sh for monthly
    Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-defrag.sh for none
    Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-balance.sh for weekly
    Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-trim.sh for none
    Jul 18 11:57:12 linux-763v systemd[1]: Started Update cron periods from /etc/sysconfig/btrfsmaintenance.
    linux-763v:~>
    Quote Originally Posted by dcurtisfra View Post
    Normally no -- except if the default really, REALLY, doesn't fit the hardware and/or system configuration . . .
    Good.

    Quote Originally Posted by dcurtisfra View Post
    Yes.
    Personally, with a 1 TB SSHD, I tend to allocate 80 GB to the Btrfs "/" partition.
    Wow! 80 GB - crikey! Mind you, earlier today i created a new oS TW VM in my Maui Tower, specifically so that i could do some Btrfs settings experimentation before doing them "for real" on my TW Lappy. This time i decided to explore the Ruby Installer more thoroughly than i'd done before: rather than simply selecting the Plasma Desktop directly, i selected Custom, which later in the process let me access a vast array of additional choices. From those, as an experiment, i enabled ALL of the Plasma, Gnome, Xfce, Mate & Enlightenment desktops, & was also able to fine-tune the programs to be installed. How cool is this!!! I was so impressed & delighted that when i later convert my Tower from Maui to TW, i think i'll do the same thing. That in turn makes me suppose i'd need to make its root bigger than normal: not only for Btrfs [which i shall certainly use], but also for the [substantial??] extra room needed by those other DEs.

  6. #6
    Join Date
    Jun 2017
    Location
    Australia
    Posts
    214

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    After more research in this forum i discovered that by editing /etc/sysconfig/btrfsmaintenance thus:
    Code:
    ## Path:           System/File systems/btrfs
    ## Description:    Configuration for periodic fstrim
    ## Type:           string(none,daily,weekly,monthly)
    ## Default:        "none"
    ## ServiceRestart: btrfsmaintenance-refresh
    #
    # Frequency of periodic trim. Off by default so it does not collide with
    # fstrim.timer . If you do not use the timer, turn it on here. The recommended
    # period is 'weekly'.
    #BTRFS_TRIM_PERIOD="none"
    BTRFS_TRIM_PERIOD="weekly"
    
    
    ## Path:        System/File systems/btrfs
    ## Description: Configuration for periodic fstrim - mountpoints
    ## Type:        string
    ## Default:     "/"
    #
    # Which mountpoints/filesystems to trim periodically.
    # (Colon separated paths)
    # The special word/mountpoint "auto" will evaluate all mounted btrfs filesystems at runtime
    BTRFS_TRIM_MOUNTPOINTS="/"
    ...& then re-running:
    Code:
    linux-763v:~> sudo systemctl status btrfsmaintenance-refresh.service
    [sudo] password for root:  
    ● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
       Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: disabled)
       Active: inactive (dead) since Tue 2017-07-18 20:35:46 AEST; 3min 58s ago
      Process: 1499 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh (code=exited, status=0/SUCCESS)
     Main PID: 1499 (code=exited, status=0/SUCCESS)
    
    Jul 18 20:35:46 linux-763v systemd[1]: Starting Update cron periods from /etc/sysconfig/btrfsmaintenance...
    Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-scrub.sh for monthly
    Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-defrag.sh for none
    Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-balance.sh for weekly
    Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-trim.sh for weekly
    Jul 18 20:35:46 linux-763v systemd[1]: Started Update cron periods from /etc/sysconfig/btrfsmaintenance.
    linux-763v:~>
    
    ...so now it appears that cron will take care of automated weekly root trims for me.
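    [To double-check that the refresh actually placed the trim script, a quick look at the cron directory -- a sketch -- suffices:]
    Code:
    ls -l /etc/cron.weekly/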

    My separate /home partition, unlike root, is ext4 not btrfs, so i was wondering if i should create a new weekly cron job exclusively to trim home? My starting-point was to look at what i did in Mint KDE, & do in Maui [all ext4 partitions] - /etc/cron.weekly/fstrim:
    Code:
    #!/bin/sh
    # trim all mounted file systems which support it
    /sbin/fstrim --all || true
    So in the new case, being TW with root btrfs already taken care of, would this suffice?
    Code:
    #!/bin/sh
    # trim ext4 home partition
    /sbin/fstrim /home

  7. #7
    Join Date
    Feb 2010
    Location
    Germany
    Posts
    1,030

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Quote Originally Posted by GooeyGirl View Post
    Hopefully, all this is ok...
    When systemd reports SUCCESS, it means just that . . .

    Quote Originally Posted by GooeyGirl View Post
    Wow! 80 GB - crikey!
    Given that, with a "/" Btrfs partition, "/var/", "/tmp/" and "/srv/" are by default in that partition and are managed by Btrfs . . .
    If you want everything to be ticketyboo and hunkydory with systems which need more than a little <var, tmp, srv> space, 80 GB is not at all extravagant . . .

  8. #8
    Join Date
    Jun 2008
    Location
    Podunk
    Posts
    22,711
    Blog Entries
    15

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Quote Originally Posted by GooeyGirl View Post

    <snip>

    Code:
    #!/bin/sh
    # trim all mounted file systems which support it
    /sbin/fstrim --all || true
    So in the new case, being TW with root btrfs already taken care of, would this suffice?
    Code:
    #!/bin/sh
    # trim ext4 home partition
    /sbin/fstrim /home
    Hi
    Nooooooo.... look at the output of the mount command first.... the --all option (read the man page for fstrim) will take care of any mounted filesystem that supports discard, assuming you have an SSD and it's not blacklisted (there are a few, eg some Samsung models), eg;
    Code:
    hdparm -I /dev/sda | grep TRIM
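    And to see at a glance which mounted filesystems are candidates, something like this (a sketch) will do:
    Code:
    findmnt -t btrfs,ext4 -o TARGET,SOURCE,FSTYPE,OPTIONS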
    Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
    openSUSE Leap 42.2 (x86_64) GNOME 3.20.2
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below... Thanks!

  9. #9
    Join Date
    Feb 2010
    Location
    Germany
    Posts
    1,030

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Quote Originally Posted by GooeyGirl View Post
    ...so now it appears that cron will take care of automated weekly root trims for me.
    "btrfs-trim" calls "fstrim" which, AFAIK, is only relevant for SSD drives.

    Quote Originally Posted by GooeyGirl View Post
    My separate /home partition, unlike root, is ext4 not btrfs, so i was wondering if i should create a new weekly cron job exclusively to trim home?
    "btrfs-trim" only calls "fstrim" for the drives with Btrfs partitions.
    Yes, you could create weekly cron jobs which call "fstrim" for each partition in turn -- having more than one call to "fstrim" active at any one point in time could be "interesting" but, is more than likely to be dangerous . . .
    From a "root" user CLI call "fstrim --all --verbose" to see which partitions and or drives will be trimmed. For example, for a partition on a rotating drive on this system:
    Code:
     # fstrim --verbose /home01
    fstrim: /home01: the discard operation is not supported
     #

  10. #10
    Join Date
    Jun 2008
    Location
    Podunk
    Posts
    22,711
    Blog Entries
    15

    Default Re: Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

    Hi
    Also what is the status of...
    Code:
    systemctl status fstrim.timer
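    If the timer turns out to be inactive and you would rather use it than the weekly cron trim (running both would be redundant), it can be switched on -- a sketch:
    Code:
    sudo systemctl enable --now fstrim.timer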
    Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
    openSUSE Leap 42.2 (x86_64) GNOME 3.20.2
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below... Thanks!
