Btrfs Root ran out of space, via this forum i fixed it, are my settings ok pls?

Hi

Tumbleweed 20170712 [at the moment], Root = 50 GB Btrfs.

I had a shock today when i serendipitously checked Dolphin for something & noticed that my 50 GB / partition had only 3.3 GB of free space remaining. Wondering what had happened, i then discovered that somehow [some time ago, don’t know when or how] i seemed to have accidentally duplicated a large [~30 GB] directory from my separate /home partition into / as well. Realising that was redundant, i deleted it via Root Actions in Dolphin, but was confused to see that my / free space remained at only 3.3 GB.

Searching this forum for clues, as i suspected that Btrfs & Snapper might be the cause via excessive Snapshots, i found many helpful threads & posts, including:

  1. https://forums.opensuse.org/showthread.php/524929-Disk-claims-to-be-full-even-after-deleting-files?p=2823621#post2823621
  2. https://forums.opensuse.org/showthread.php/525180-Regularly-Running-Outa-Disk-Space?p=2825196#post2825196
  3. https://forums.opensuse.org/showthread.php/521072-Running-out-of-space-in-root-partition?p=2800310#post2800310
  4. https://forums.opensuse.org/showthread.php/525459-Tumbleweed-snapshots-prevent-booting-(by-using-all-disk-space)?p=2826854#post2826854

My initial Snapshot status:

linux-763v:~> sudo snapper list
[sudo] password for root:  
Type   | #   | Pre # | Date                          | User | Cleanup | Description           | Userdata      
-------+-----+-------+-------------------------------+------+---------+-----------------------+--------------
single | 0   |       |                               | root |         | current               |               
pre    | 144 |       | Thu 22 Jun 2017 14:10:21 AEST | root | number  | zypp(zypper)          | important=yes
post   | 145 | 144   | Thu 22 Jun 2017 14:25:28 AEST | root | number  |                       | important=yes
pre    | 335 |       | Fri 30 Jun 2017 11:37:45 AEST | root | number  | zypp(zypper)          | important=yes
post   | 336 | 335   | Fri 30 Jun 2017 12:03:17 AEST | root | number  |                       | important=yes
pre    | 351 |       | Sun 02 Jul 2017 14:58:19 AEST | root | number  | zypp(zypper)          | important=yes
post   | 352 | 351   | Sun 02 Jul 2017 15:07:50 AEST | root | number  |                       | important=yes
single | 422 |       | Tue 04 Jul 2017 18:32:26 AEST | root | number  | rollback backup of #1 | important=yes
single | 423 |       | Tue 04 Jul 2017 18:32:27 AEST | root |         |                       |               
pre    | 426 |       | Tue 04 Jul 2017 18:47:57 AEST | root | number  | zypp(zypper)          | important=yes
post   | 427 | 426   | Tue 04 Jul 2017 19:01:28 AEST | root | number  |                       | important=yes
pre    | 481 |       | Mon 10 Jul 2017 16:34:42 AEST | root | number  | zypp(zypper)          | important=yes
post   | 482 | 481   | Mon 10 Jul 2017 17:03:20 AEST | root | number  |                       | important=yes
pre    | 523 |       | Sun 16 Jul 2017 16:07:06 AEST | root | number  | yast firewall         |               
post   | 524 | 523   | Sun 16 Jul 2017 16:14:14 AEST | root | number  |                       |               
pre    | 525 |       | Sun 16 Jul 2017 16:14:27 AEST | root | number  | yast firewall         |               
post   | 526 | 525   | Sun 16 Jul 2017 16:14:36 AEST | root | number  |                       |               
pre    | 527 |       | Sun 16 Jul 2017 16:15:42 AEST | root | number  | yast sw_single        |               
pre    | 528 |       | Sun 16 Jul 2017 16:17:47 AEST | root | number  | zypp(ruby)            | important=no  
post   | 529 | 527   | Sun 16 Jul 2017 16:17:55 AEST | root | number  |                       |               
pre    | 530 |       | Sun 16 Jul 2017 16:18:43 AEST | root | number  | yast sw_single        |               
pre    | 531 |       | Sun 16 Jul 2017 16:19:35 AEST | root | number  | zypp(ruby)            | important=no  
post   | 532 | 531   | Sun 16 Jul 2017 16:19:39 AEST | root | number  |                       | important=no  
post   | 533 | 530   | Sun 16 Jul 2017 16:19:51 AEST | root | number  |                       |               
pre    | 534 |       | Sun 16 Jul 2017 22:06:18 AEST | root | number  | yast sw_single        |               
post   | 535 | 534   | Sun 16 Jul 2017 22:09:01 AEST | root | number  |                       |               
pre    | 536 |       | Mon 17 Jul 2017 17:52:46 AEST | root | number  | yast snapper          |               
linux-763v:~>
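
[Aside: since my snapper config uses a qgroup (QGROUP="1/0", shown further below), i gather the space actually pinned by these snapshots could be inspected with a qgroup report. I haven’t pasted mine here, but i believe the command is simply:]

sudo btrfs qgroup show /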

Looking in

/etc/cron.weekly

i was [& still am] surprised that it was empty. Therefore i checked:

linux-763v:~> zypper info btrfsmaintenance
Loading repository data...
Reading installed packages...


Information for package btrfsmaintenance:
-----------------------------------------
Repository     : Main Repository (OSS)                        
Name           : btrfsmaintenance                             
Version        : 0.3.1-2.1                                    
Arch           : noarch                                       
Vendor         : openSUSE                                     
Installed Size : 46.3 KiB                                     
**Installed      : Yes                                          
Status         : up-to-date**                                   
Source package : btrfsmaintenance-0.3.1-2.1.src               
Summary        : Scripts for btrfs periodic maintenance tasks
Description    :                                              
    Scripts for btrfs maintenance tasks like periodic scrub, balance, trim or defrag
    on selected mountpoints or directories.

Question: How can this package be already installed yet that preceding directory be empty?
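
In case it helps anyone answering: my understanding is that the RPM database can be asked which files the package owns, & whether any of them are missing or altered, with something like the following [i’m not certain this is the canonical method, it’s just what i pieced together]:

rpm -ql btrfsmaintenance
sudo rpm -V btrfsmaintenance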

Following the referenced links i then forced its reinstallation:

linux-763v:~> sudo zypper in -f btrfsmaintenance
[sudo] password for root:  
Loading repository data...
Reading installed packages...
Forcing installation of 'btrfsmaintenance-0.3.1-2.1.noarch' from repository 'Main Repository (OSS)'.
Resolving package dependencies...

The following package is going to be reinstalled:
  btrfsmaintenance

1 package to reinstall.
Overall download size: 30.6 KiB. Already cached: 0 B. No additional space will be used or freed after the operation.
**Continue? [y/n/...? shows all options] (y): **
Retrieving package btrfsmaintenance-0.3.1-2.1.noarch                                            (1/1),  30.6 KiB ( 46.3 KiB unpacked)
Retrieving: btrfsmaintenance-0.3.1-2.1.noarch.rpm .................................................................[done (4.7 KiB/s)]
Checking for file conflicts: ..................................................................................................[done]
(1/1) Installing: btrfsmaintenance-0.3.1-2.1.noarch ...........................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/btrfsmaintenance ...
Refresh script btrfs-scrub.sh for monthly
Refresh script btrfs-defrag.sh for none
Refresh script btrfs-balance.sh for weekly
Refresh script btrfs-trim.sh for none

…& was gratified to see that now btrfs-balance appeared in

/etc/cron.weekly
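
[For anyone wanting to check the same thing, a plain listing of the cron directories should show whichever scripts were refreshed above:]

ls -l /etc/cron.weekly/ /etc/cron.monthly/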

I confirmed that yes my / really is almost full:

linux-763v:~> sudo btrfs filesystem show /
[sudo] password for root:  
Label: none  uuid: 59c063db-fa0d-4e1e-baa2-df255f4262fb
        Total devices 1 FS bytes used 46.54GiB
        devid    1 size 50.00GiB used 49.96GiB path /dev/sda3
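
For a finer-grained view of how that used space is split between data & metadata, i believe these standard btrfs-progs commands apply [i haven’t included my own output here]:

sudo btrfs filesystem df /
sudo btrfs filesystem usage /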

The first of these commands, after a lag, presumably completed ok [given no error msg], but the second one worries me:

linux-763v:~> systemctl start btrfsmaintenance-refresh.service
linux-763v:~> systemctl status btrfsmaintenance-refresh.service
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
   Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; disabled; vendor preset: disabled)
   **Active: inactive (dead)**
linux-763v:~>

Question: WHY would it be inactive / disabled / dead, & how should i remedy this?
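
[If more diagnostics would help in answering, i gather the unit’s enablement state & its recent journal entries could be shown with:]

systemctl is-enabled btrfsmaintenance-refresh.service
sudo journalctl -u btrfsmaintenance-refresh.service -b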

Before next manually running the two applicable cron-jobs to get back my free space, i inspected /etc/snapper/configs/root. Here it is, still as-found, apart from the obvious small edit i made as per another of Malcolm’s links above:



# subvolume to snapshot
SUBVOLUME="/"


# filesystem type
FSTYPE="btrfs"


# btrfs qgroup for space aware cleanup algorithms
QGROUP="1/0"


# fraction of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"


# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""


# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"


# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"


# run daily number cleanup
NUMBER_CLEANUP="yes"


# limit for number cleanup
NUMBER_MIN_AGE="1800"
#NUMBER_LIMIT="2-10"
#NUMBER_LIMIT_IMPORTANT="4-10"
# 17/7/17: I reduced the preceding as per Malcolm's https://forums.opensuse.org/showthread.php/521072-Running-out-of-space-in-root-partition?p=2800310#post2800310
NUMBER_LIMIT="2-3"
NUMBER_LIMIT_IMPORTANT="3-5"


# create hourly snapshots
TIMELINE_CREATE="no"


# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"


# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"


# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"


# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"

Question: Should i change any other of the default settings in that file pls? [This installation is just on a personal laptop, with ~250 GB SSD].
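
[Aside: from the snapper man page i gather the "number" & "empty-pre-post" cleanup algorithms can also be run immediately, rather than waiting for the daily cron job, & that individual snapshots or ranges can be deleted by number, e.g. purely as a sketch:]

sudo snapper cleanup number
sudo snapper cleanup empty-pre-post
sudo snapper delete 144-145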

Next i manually ran both cron-jobs:

linux-763v:~> sudo /etc/cron.daily/suse.de-snapper
[sudo] password for root:  
linux-763v:~> sudo /etc/cron.weekly/btrfs-balance
Before balance of /
Data, single: total=37.17GiB, used=14.32GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=768.00MiB, used=379.11MiB
GlobalReserve, single: total=41.16MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        54G   16G   38G  30% /
Done, had to relocate 0 out of 43 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=1
Done, had to relocate 5 out of 43 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=5
Done, had to relocate 8 out of 38 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=10
Done, had to relocate 0 out of 30 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 30 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=30
Done, had to relocate 2 out of 29 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=40
Done, had to relocate 0 out of 27 chunks
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=50
Done, had to relocate 1 out of 27 chunks
Done, had to relocate 0 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=1
  SYSTEM (flags 0x2): balancing, usage=1
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=5
  SYSTEM (flags 0x2): balancing, usage=5
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=10
  SYSTEM (flags 0x2): balancing, usage=10
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=20
  SYSTEM (flags 0x2): balancing, usage=20
Done, had to relocate 1 out of 26 chunks
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=30
  SYSTEM (flags 0x2): balancing, usage=30
Done, had to relocate 1 out of 26 chunks
After balance of /
Data, single: total=20.17GiB, used=14.32GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=768.00MiB, used=377.98MiB
GlobalReserve, single: total=39.06MiB, used=0.00B
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        54G   16G   38G  30% /
linux-763v:~> 

Question: Do those outputs look ok pls?

<<continued in follow-on post>>

It seemed to have done some good:

linux-763v:~> sudo btrfs filesystem show /                      
[sudo] password for root:                      
Label: none  uuid: 59c063db-fa0d-4e1e-baa2-df255f4262fb 
        Total devices 1 FS bytes used 14.69GiB                                     
        devid    1 size 50.00GiB used 20.96GiB path /dev/sda3 
linux-763v:~> 

…with this being the new status of the Snapshots:

linux-763v:~> sudo snapper list                                 
[sudo] password for root:  
Type   | #   | Pre # | Date                          | User | Cleanup | Description           | Userdata      
-------+-----+-------+-------------------------------+------+---------+-----------------------+--------------
single | 0   |       |                               | root |         | current               |               
single | 422 |       | Tue 04 Jul 2017 18:32:26 AEST | root | number  | rollback backup of #1 | important=yes
single | 423 |       | Tue 04 Jul 2017 18:32:27 AEST | root |         |                       |               
pre    | 426 |       | Tue 04 Jul 2017 18:47:57 AEST | root | number  | zypp(zypper)          | important=yes
post   | 427 | 426   | Tue 04 Jul 2017 19:01:28 AEST | root | number  |                       | important=yes
pre    | 481 |       | Mon 10 Jul 2017 16:34:42 AEST | root | number  | zypp(zypper)          | important=yes
post   | 482 | 481   | Mon 10 Jul 2017 17:03:20 AEST | root | number  |                       | important=yes
pre    | 540 |       | Mon 17 Jul 2017 18:08:41 AEST | root | number  | yast sw_single        |               
post   | 541 | 540   | Mon 17 Jul 2017 18:09:58 AEST | root | number  |                       |               
pre    | 542 |       | Mon 17 Jul 2017 18:34:33 AEST | root | number  | zypp(zypper)          | important=no  
post   | 543 | 542   | Mon 17 Jul 2017 18:34:40 AEST | root | number  |                       | important=no  
linux-763v:~>

I do hope that the things i did tonight, per all the above, were correct… but even so, i remain confused by the various “surprising” discoveries i made… hence i would really appreciate it if more experienced users here could answer the specific questions i’ve inserted above, pls.

I can not help you, but I can tell you that I do not trust Btrfs.
I only used it because openSUSE made it the default.
In my opinion, Btrfs should behave like ext4 by default and not make any snapshots, so that whoever turns them on has chosen that trouble for themselves… imposing it as the default is, in my view, wrong.

The software engine behind openSUSE’s and SUSE Linux Enterprise’s ZYpp is Red Hat’s RPM – which has a database – which sometimes, occasionally, not very often, needs to be rebuilt.

  • Further information related to the RPM database is available from the rpmdb man (8) pages.
  • Additional information is available from Carla Schroder’s “Linux Cookbook” published by O’Reilly: ISBN: 0-596-00640-3

Regardless of whether the RPM database needs to be rebuilt or not, there are tools to verify the consistency between the RPM database, the package dependencies, and the installed files and directory structures needed by the packages mentioned in the database:


 > zypper verify --details
Repository 'Packman Repository' is out-of-date. You can run 'zypper refresh' as root to update it.
Loading repository data...
Reading installed packages...

Dependencies of all installed packages are satisfied.
 >

 # rpm --verify --all

Please note that the RPM verify should be executed by the ‘root’ user, and its output is more complete than that provided by ZYpp.
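
Should the database itself ever need to be rebuilt, that is, AFAIK, done as the ‘root’ user with:

 # rpm --rebuilddb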

Search for the state of the suspect systemd units by means of the following (normal user) CLI commands:

  1. “systemctl list-unit-files” – the output is paged through “less” (or “more”) and can be searched with “/” . . .
  2. “systemctl status <suspect unit>”

Then, with a “root” user CLI: “systemctl enable <unit>”; “systemctl start <unit>”; “systemctl status <unit>”.
Take note of any messages (which may be errors) presented by the systemctl “start” and “status” commands.
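
For example, for the unit in question in this thread (substitute whichever unit is suspect):

 > systemctl list-unit-files | grep btrfsmaintenance
 > systemctl status btrfsmaintenance-refresh.service
 # systemctl enable btrfsmaintenance-refresh.service
 # systemctl start btrfsmaintenance-refresh.service
 # systemctl status btrfsmaintenance-refresh.service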

Normally no – except if the default really, REALLY, doesn’t fit the hardware and/or system configuration concerned . . .

Yes.
Personally, with a 1 TB SSHD, I tend to allocate 80 GB to the Btrfs “/” partition.

Many thanks - this was all a good help.

Hopefully, all this is ok…

linux-763v:~> sudo systemctl enable btrfsmaintenance-refresh.service
[sudo] password for root:  
Created symlink /etc/systemd/system/multi-user.target.wants/btrfsmaintenance-refresh.service → /usr/lib/systemd/system/btrfsmaintenance-refresh.service.


linux-763v:~> sudo systemctl start btrfsmaintenance-refresh.service
linux-763v:~> sudo systemctl status btrfsmaintenance-refresh.service
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
   Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Tue 2017-07-18 11:57:12 AEST; 27s ago
  Process: 30044 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh (code=exited, status=0/SUCCESS)
 Main PID: 30044 (code=exited, status=0/SUCCESS)


Jul 18 11:57:12 linux-763v systemd[1]: Starting Update cron periods from /etc/sysconfig/btrfsmaintenance...
Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-scrub.sh for monthly
Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-defrag.sh for none
Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-balance.sh for weekly
Jul 18 11:57:12 linux-763v btrfsmaintenance-refresh-cron.sh[30044]: Refresh script btrfs-trim.sh for none
Jul 18 11:57:12 linux-763v systemd[1]: Started Update cron periods from /etc/sysconfig/btrfsmaintenance.
linux-763v:~> 

Good.

Wow! 80 GB - crikey! Mind you, earlier today i created a new oS TW VM in my Maui Tower, specifically so that i could do some Btrfs settings experimentation before doing them “for real” on my TW Lappy. This time i decided to explore the Ruby Installer more thoroughly than i’d done before, & rather than simply selecting the Plasma Desktop directly, i selected Custom, which later in the process let me access a vast array of additional choices, from which as an experiment i enabled ALL of Plasma, Gnome, Xfce, Mate, & Enlightenment desktops… + was also able to fine-tune programs to be installed. How cool is this!!! I was super impressed & delighted, so much so that when i convert my Tower later from Maui to TW i think i’ll do the same thing. That in turn makes me suppose that i would need to make its root bigger than normal, not only due to the Btrfs [which i shall certainly use], but also for the [substantial??] extra room needed for those other DEs.

After more research in this forum i discovered that by editing /etc/sysconfig/btrfsmaintenance thus:

## Path:           System/File systems/btrfs
## Description:    Configuration for periodic fstrim
## Type:           string(none,daily,weekly,monthly)
## Default:        "none"
## ServiceRestart: btrfsmaintenance-refresh
#
# Frequency of periodic trim. Off by default so it does not collide with
# fstrim.timer . If you do not use the timer, turn it on here. The recommended
# period is 'weekly'.
#BTRFS_TRIM_PERIOD="none"
**BTRFS_TRIM_PERIOD="weekly"**


## Path:        System/File systems/btrfs
## Description: Configuration for periodic fstrim - mountpoints
## Type:        string
## Default:     "/"
#
# Which mountpoints/filesystems to trim periodically.
# (Colon separated paths)
# The special word/mountpoint "auto" will evaluate all mounted btrfs filesystems at runtime
BTRFS_TRIM_MOUNTPOINTS="/"

…& then re-running:

linux-763v:~> sudo systemctl status btrfsmaintenance-refresh.service
[sudo] password for root:  
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
   Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Tue 2017-07-18 20:35:46 AEST; 3min 58s ago
  Process: 1499 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh (code=exited, status=0/SUCCESS)
 Main PID: 1499 (code=exited, status=0/SUCCESS)

Jul 18 20:35:46 linux-763v systemd[1]: Starting Update cron periods from /etc/sysconfig/btrfsmaintenance...
Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-scrub.sh for monthly
Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-defrag.sh for none
Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-balance.sh for weekly
Jul 18 20:35:46 linux-763v btrfsmaintenance-refresh-cron.sh[1499]: **Refresh script btrfs-trim.sh for weekly**
Jul 18 20:35:46 linux-763v systemd[1]: Started Update cron periods from /etc/sysconfig/btrfsmaintenance.
linux-763v:~>

…so now it appears that cron will take care of automated weekly root trims for me.

My separate /home partition, unlike root, is ext4 not btrfs, so i was wondering if i should create a new weekly cron job exclusively to trim home? My starting-point was to look at what i did in Mint KDE, & do in Maui [all ext4 partitions] - /etc/cron.weekly/fstrim:

#!/bin/sh
# trim all mounted file systems which support it
/sbin/fstrim --all || true

So in the new case, being TW with root btrfs already taken care of, would this suffice?

#!/bin/sh
# trim ext4 home partition
/sbin/fstrim /home

When systemd reports SUCCESS, it means just that . . .

Given that, with a “/” Btrfs partition, “/var/”, “/tmp/” and “/srv/” are by default in that partition and are being managed by Btrfs . . .
If you want everything to be tickety-boo and hunky-dory with systems which need more than a little bit of <var, tmp, srv> space, 80 GB is not at all extravagant . . .

Hi
Nooooooo… look at the output of the mount command first… the --all option (read the man page for fstrim :wink: ) will take care of any mounted filesystem that supports discard, assuming you have an SSD and it’s not blacklisted (there are a few, e.g. Samsung), e.g.;


hdparm -I /dev/sda | grep TRIM

“btrfs-trim” calls “fstrim” which, AFAIK, is only relevant for SSD drives.

“btrfs-trim” only calls “fstrim” for the drives with Btrfs partitions.
Yes, you could create weekly cron jobs which call “fstrim” for each partition in turn – having more than one call to “fstrim” active at any one point in time could be “interesting” but is more than likely to be dangerous . . .
From a “root” user CLI, call “fstrim --all --verbose” to see which partitions and/or drives will be trimmed. For example, for a partition on a rotating drive on this system:


 # fstrim --verbose /home01
fstrim: /home01: the discard operation is not supported
 # 

Hi
Also what is the status of…


systemctl status fstrim.timer

Yes, but in fairness, my trepidation was because, as well as the reassuring word “success”, there is also:

Active: inactive (dead)

…which doesn’t sound entirely happy to me, given i’ve already Enabled it & rebooted.

Hi
Again, no need to ‘enable’ certain services; they are disabled because a systemd timer triggers them (the move to remove cron, I suspect). For example;


systemctl status fstrim.service
● fstrim.service - Discard unused blocks
   Loaded: loaded (/usr/lib/systemd/system/fstrim.service; static; vendor preset: disabled)
   Active: inactive (dead)

 systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Tue 2017-07-18 09:25:39 CDT; 2h 13min ago
     Docs: man:fstrim

Jul 18 09:25:39 mizz-piggy systemd[1]: Started Discard unused blocks once a week.

So the service is inactive/dead, but the timer is running to trigger the service when it’s needed… if you want to trim manually, you start the service, it will run and go dead again; this is normal…
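
You can also see when the timer will next fire with;

systemctl list-timers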

The systemd timers are much better than cron in that you can define the delay; for example, I monitor the perf data of my cpu frequency every 10 seconds for conky, which can’t be done with cron…
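
As a rough sketch only (the unit names and script path here are purely illustrative, not my actual files), such a 10-second timer pair would look something like;

# /etc/systemd/system/cpufreq-poll.service  (illustrative)
[Unit]
Description=Record CPU frequency for conky

[Service]
Type=oneshot
ExecStart=/usr/local/bin/cpufreq-poll.sh

# /etc/systemd/system/cpufreq-poll.timer
[Unit]
Description=Poll CPU frequency every 10 seconds

[Timer]
OnBootSec=1min
OnUnitActiveSec=10s

[Install]
WantedBy=timers.target

…then “systemctl enable cpufreq-poll.timer” and “systemctl start cpufreq-poll.timer”.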

Yes it’s an SSD - in my initial post in this thread i wrote:

This installation is just on a personal laptop, with ~250 GB SSD

More specifically it’s a SAMSUNG_SSD_PM810_2.5__256GB_S0N4NEAZB01960


linux-763v:~> sudo hdparm -I /dev/sda | grep TRIM
[sudo] password for root:  
           *    Data Set Management **TRIM** supported (limit unknown)
linux-763v:~> 

I’m sorry but i do not understand your criticism. I do already know that “the --all option” works, given that’s what [as i said] i used previously with Mint, & still do on my Maui Tower [Lappy is already migrated to TW, & i hope soon enough to do ditto to Tower]. Are you saying that me proposing just the /home option is wrong, or just unnecessary? I had supposed [but now i imagine i’m wrong] that, given root’s trim is taken care of [per “btrfsmaintenance-refresh-cron.sh[1499]: Refresh script btrfs-trim.sh for weekly”], retaining the --all might be problematic…?

If i put it another way, i suppose you have given me hints above, but i’m too dimwitted to understand the hint. Pls could you kindly be more specific?

I can’t fathom why you mention a rotating drive, & imply doubt re SSD… in my initial post of this thread i wrote:

This installation is just on a personal laptop, with ~250 GB SSD

Wrt:

“btrfs-trim” only calls “fstrim” for the drives with Btrfs partitions.

I know that, that was my specific point; my root is btrfs, & is now taken care of [per the codeboxes i provided]. Hence i was turning my focus to my ext4 home [also on my SSD].

linux-763v:~> **sudo fstrim --all --verbose**
[sudo] password for root:  
/usr/local: 35.6 GiB (38180610048 bytes) trimmed
linux-763v:~> 

Wrt:

having more than one call to “fstrim” active at any one point in time could be “interesting” but, is more than likely to be dangerous

OK, so what i’ve now learned from you & Malcolm is that my idea is inappropriate, so i won’t do it, but now i don’t understand what i should do for weekly trims of my SSD’s ext4 /home.

It’s:

linux-763v:~> systemctl status fstrim.timer
**●** fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
   Active: **active (waiting)** since Tue 2017-07-18 20:35:46 AEST; 6h ago
     Docs: man:fstrim

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
linux-763v:~> 

Golly, sounds a bit intimidating.

Ooooooooooh, cool, thanks!

And conky, that’s a whole other level of headache for me, still down the track a bit… I have it working nicely in Maui, but now presume i need to learn how to do it in oS. Later…

Hi
Interesting, so on my setups (I don’t use ext4) the btrfs and xfs partitions are trimmed fine. Your /home is encrypted?

So if you run;


fstrim --verbose /home

What is the output?

Again, just remember that there are issues with the Samsung 800 series SSDs and TRIM support in their firmware. Does your SSD have the latest firmware?

Yes, my /home [& swap] are encrypted.

Oh dear me, i cannot comprehend this output; it contradicts both my earlier reply to you, & also years of this same SSD happily accepting fstrim in Mint then Maui.

linux-763v:~> sudo fstrim --verbose /home
[sudo] password for root:  
fstrim: /home: the discard operation is not supported
linux-763v:~> 

I’ve never attempted to update the SSD firmware. The Lappy is a Dell XPS-15, many years old. Anyway, all discussion wrt me seeking to weekly cron-trim my /home is now moot & redundant… hey, wait a sec, not so fast… if this SSD is alleging it now does not support fstrim for my ext4 /home, how come root’s btrfs is ok with it [or is that my wrong interpretation too]?

Re-pasting my earlier reply to you https://forums.opensuse.org/showthread.php/525998-Btrfs-Root-ran-out-of-space-via-this-forum-i-fixed-it-are-my-settings-ok-pls?p=2830413#post2830413 :

linux-763v:~> sudo hdparm -I /dev/sda | grep TRIM
[sudo] password for root:  
           *    Data Set Management **TRIM** supported (limit unknown)
linux-763v:~> 

“fstrim” is only useful for SSD hardware – for HDD and SSHD hardware (physical rotating magnetic media) “fstrim” is neither needed nor applicable.
Apropos running “fstrim” on encrypted partitions, there’s some German-language information with respect to “TRIM” in the Ubuntu Wiki, which uses the same solution as that proposed in the following (English language) blog posts:
<https://blog.christophersmart.com/2016/05/12/trim-on-lvm-on-luks-on-ssd-revisited/>
<http://worldsmostsecret.blogspot.de/2012/04/how-to-activate-trim-on-luks-encrypted.html>
Basically, it seems that the “discard” option has to be set in ‘/etc/crypttab’ and, if LVM is in use, also in the related ‘/etc/fstab’ entries for each mount point.
You may have to add ‘rd.luks.options=discard’ to the GRUB2 kernel options line.
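
In other words, something along these lines (the volume name and UUID below are placeholders only; adapt them to the entries already present on the system, and regenerate grub.cfg afterwards with “grub2-mkconfig -o /boot/grub2/grub.cfg”):

# /etc/crypttab: add "discard" to the options column, e.g.:
cr_home   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   none   luks,discard

# /etc/default/grub: append the option to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="... rd.luks.options=discard"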

Again, i already know this, & twice now i have explicitly posted in this thread that i use an SSD. That’s why i am asking about trimming.

Aha, now that is really interesting – thank you! It might neatly answer my rhetorical question about why this is now a problem in TW/LUKS, when the same SSD was never a problem with Mint & Maui’s eCryptfs. I’ll look into it. Vielen Dank for the pointer.