Quite some years ago I landed in a yes/no discussion re. fstrim and manual maintenance of SSDs, mainly to avoid the much-feared limited write cycles. Greg KH (kernel maintainer) stated clearly that there is no need for it: first because the system should (and does) take care of it, second because, according to him, the fear of wearing out SSDs was completely unnecessary. On the latter he has meanwhile been proven right: long-term tests did not kill SSDs quickly, it took years of abusing them. On the first: from my very first SSD onwards (I was one of the first to replace slow laptop HDDs with, at the time expensive, SSDs) I have treated SSDs no differently than HDDs.
Don’t you think it would be a flaw in Linux if such tweaks were still needed to save the hardware? Nah, Linux is too good for that.
And yes, I know of all those tips-and-tricks sites re. SSD usage; I asked some questions but never got a reply that made me change my mind.
Thanks Knurpht. I have no technical basis on which to dispute your information, thus you’ll probably think the following rather silly…
More for the academic exercise now than anything else, and in order to satisfy my curiosity about whether I can solve the puzzle of making fstrim available in TW for a LUKS-encrypted ext4 partition, I’ll persist just a little longer with this experiment.
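[For reference, the usual way to let TRIM reach the SSD through a LUKS layer is to allow discards at the dm-crypt level; the sketch below assumes a crypttab-based setup like the cr_home volumes that appear later in this thread, and the volume name/UUID are only placeholders, not necessarily the exact method used here:]
# /etc/crypttab -- add the "discard" option so TRIM requests are passed
# through dm-crypt to the underlying SSD (name and UUID are placeholders):
#   cr_home   UUID=<uuid-of-the-encrypted-partition>   none   discard
# If the volume is unlocked from the initrd, rebuild it and reboot:
sudo mkinitrd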
OK, but that’s another story; this kind of experiment has taught me a lot in the past, when I still had some old hardware to do such things with/to :). Hope you have backups of everything dear or important.
So the final piece of the puzzle is how to automate this?
Does the fstrim.timer already do this for me, or do I have to initiate it somehow… or do I instead need to write a bash script to trim /home & place it into /etc/cron.weekly, & if so, what about dcurtisfra’s concern that having the btrfs root trim & the /home trim running simultaneously via cron.weekly might be dangerous?
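[For reference, a minimal way to check this, assuming the stock fstrim.timer/fstrim.service units that util-linux/systemd ship on TW:]
# Is the weekly trim timer enabled, and when does it fire next?
systemctl status fstrim.timer
systemctl list-timers fstrim.timer
# If it is not enabled yet, switch it on; the timer triggers fstrim.service,
# which trims every mounted filesystem that supports discard in one pass,
# so no extra cron script for /home should be needed:
sudo systemctl enable --now fstrim.timer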
Hi
What is the output from the lsblk command? The mount name is a bit
funky… /opt == /
–
Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE Leap 42.2|GNOME 3.20.2|4.4.74-18.20-default
Thank you… but once more, sorry, I’m struggling to understand [albeit by the end of my post I might finally have seen the light…?].
Yes, when I did the installation I certainly did organise Lappy’s partitions such that sda3 was root, with a btrfs filesystem.
But why is it showing up here as /var/crash & not simply / ?
Given that, as of a few days ago, my Tower is also running TW, I checked its lsblk, & was surprised/confused/cranky to see that it also reports not as root but as something else again:
linux-3l20:~> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 156M 0 part /boot/efi
├─**sda2 8:2 0 60G 0 part /var/spool**
└─sda3 8:3 0 160G 0 part
└─cr_home 254:1 0 160G 0 crypt /home
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 4G 0 part
│ └─cr_swap 254:0 0 4G 0 crypt [SWAP]
├─sdb2 8:18 0 36G 0 part /SeagateSpare
└─sdb3 8:19 0 1.8T 0 part /Seagate
sr0 11:0 1 1024M 0 rom
linux-3l20:~>
I note that in both Lappy & Tower now, if I run df -h, I do see the expected root association… along with a list of other stuff [including the specific item named in each PC’s respective lsblk].
Aha… [desperate leap of deductive guesswork follows]… do these results possibly mean that the other day/night the only thing needing to be trimmed within root was /opt, whereas today only /var/crash needed trimming, & if that logic is valid, then next week it might be something else again within root?
I have not yet set up trimming in Tower, but will do that today, then observe the outcome.
On Tue 25 Jul 2017 04:26:01 AM CDT, GooeyGirl wrote:
Thank you… but once more sorry i’m struggling to understand [albeit by
the -end- of my post i -might- finally have seen the light…?].
Oh, exactly as you predicted. -[scratches head…]-
<snip>
Aha… -[desperate leap of deductive guesswork follows]-… do these
results possibly mean that the other day/night, the -only- thing needing
to be trimmed within root was -/opt-, whereas today only -/var/crash-
needs trimming & if that logic is valid, then next week it might be
something else again within root]?
Hi
No, like I said, it’s a bug in the way fstrim (lsblk, and probably others)
reports the mountpoint (subvolume) it’s acting on (as in trimming).
For /dev/sda3, aka /, it’s just telling you the LAST /etc/fstab entry
for sda3.
Run the mount command and then lsblk, and your last sda3 entry will
be the same mountpoint.
Nothing to worry about; for example, the df command doesn’t work properly
with btrfs either…
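[A quick way to see this for yourself; these are standard util-linux commands, with the device name taken from the lsblk output above:]
mount | grep sda3       # every fstab/subvolume mount of /dev/sda3
findmnt -S /dev/sda3    # the same list, one line per mountpoint
lsblk /dev/sda3         # shows only a single MOUNTPOINT per device node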
–
Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE Leap 42.2|GNOME 3.20.2|4.4.74-18.20-default
Here are Tower’s results, having now apparently also successfully set up fstrim for its /home [although maybe I didn’t need to apply that special method, given that unlike on Lappy [ext4] I created my Tower’s new TW /home as xfs, & I recall you said earlier that yours trims fine].
linux-Tower:~> **sudo fstrim --verbose --all**
[sudo] password for root:
/home: 139.6 GiB (149880913920 bytes) trimmed
/boot/grub2/i386-pc: 48.4 GiB (51966222336 bytes) trimmed
linux-Tower:~> **lsblk**
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 156M 0 part /boot/efi
├─sda2 8:2 0 60G 0 part /boot/grub2/i386-pc
└─sda3 8:3 0 160G 0 part
└─cr_home 254:1 0 160G 0 crypt /home
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 4G 0 part
│ └─cr_swap 254:0 0 4G 0 crypt [SWAP]
├─sdb2 8:18 0 36G 0 part /SeagateSpare
└─sdb3 8:19 0 1.8T 0 part /Seagate
sr0 11:0 1 1024M 0 rom
linux-Tower:~>
Gosh, lsblk for root really is some fine random number generator, isn’t it, teehee.
EDIT: Ah, we posted at about the same time, so I’ve only just now seen your reply.
Well, you told me before, but in all honesty I did not understand what your “cosmetic bug” expression actually meant. Anyway, from now on I will satisfy myself with the approach that as long as I see “/”, irrespective of whatever follows after it, all I need to take away from it is that yes, root is being trimmed & now so is /home.
Just to revisit this – the green text – I have just realised, following a review of this whole thread, that the other day on Lappy & tonight also on Tower, I made this edit in /etc/sysconfig/btrfsmaintenance:
# Frequency of periodic trim. Off by default so it does not collide with
# fstrim.timer . If you do not use the timer, turn it on here. The recommended
# period is 'weekly'.
#BTRFS_TRIM_PERIOD="none"
**BTRFS_TRIM_PERIOD="weekly"**
So, given that the comment line warns of a possible collision, & given that you have confirmed to me that my timer is on & will automatically take care of the trimming, my edit (the bold line above) is wrong, isn’t it, & needs to be deleted…?
linux-Tower:~> **systemctl status fstrim.timer**
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Tue 2017-07-25 20:46:12 AEST; 1h 30min ago
Docs: man:fstrim
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
linux-Tower:~>
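[If you want to confirm the timer actually did something: it triggers fstrim.service, so that unit’s output lands in the journal. This assumes the stock units, and that the journal still holds the entries given the rotation warning above:]
sudo journalctl -u fstrim.service    # what was trimmed on the last run, if logged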
Thank you for letting me know. I’ll review those pages as time permits. Re data backups, I use BackInTime & have been most happy with it… but I’m always interested in finding better backup tools if available [for me the progression has been Areca Backup –> luckyBackup –> BiT…].
Thanks. I did [or undid] it on both PCs, but this is from Tower, after first editing /etc/sysconfig/btrfsmaintenance again to delete my superfluous edit as per the earlier post, & un-comment the “none” line:
linux-Tower:~> **systemctl start btrfsmaintenance-refresh.service**
linux-Tower:~> **systemctl status btrfsmaintenance-refresh.service**
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2017-07-25 23:49:33 AEST; 25s ago
Process: 14940 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh (code=exited, status=0/SUCCESS)
Main PID: 14940 (code=exited, status=0/SUCCESS)
linux-Tower:~> **sudo systemctl status btrfsmaintenance-refresh.service**
● btrfsmaintenance-refresh.service - Update cron periods from /etc/sysconfig/btrfsmaintenance
Loaded: loaded (/usr/lib/systemd/system/btrfsmaintenance-refresh.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Tue 2017-07-25 23:49:33 AEST; 1min 2s ago
Process: 14940 ExecStart=/usr/share/btrfsmaintenance/btrfsmaintenance-refresh-cron.sh (code=exited, status=0/SUCCESS)
Main PID: 14940 (code=exited, status=0/SUCCESS)
Jul 25 23:49:33 linux-Tower systemd[1]: Starting Update cron periods from /etc/sysconfig/btrfsmaintenance...
Jul 25 23:49:33 linux-Tower btrfsmaintenance-refresh-cron.sh[14940]: Refresh script btrfs-scrub.sh for monthly
Jul 25 23:49:33 linux-Tower btrfsmaintenance-refresh-cron.sh[14940]: Refresh script btrfs-defrag.sh for none
Jul 25 23:49:33 linux-Tower btrfsmaintenance-refresh-cron.sh[14940]: Refresh script btrfs-balance.sh for weekly
Jul 25 23:49:33 linux-Tower btrfsmaintenance-refresh-cron.sh[14940]: **Refresh script btrfs-trim.sh for none**
Jul 25 23:49:33 linux-Tower systemd[1]: Started Update cron periods from /etc/sysconfig/btrfsmaintenance.
linux-Tower:~>
After that I visually confirmed in Dolphin that /etc/cron.weekly now once again contains only btrfs-balance, & no longer btrfs-trim.
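[The same check from a terminal, assuming the btrfsmaintenance package drops its scripts into /etc/cron.weekly as it does on TW:]
ls -l /etc/cron.weekly/    # should now list a btrfs-balance script but no btrfs-trim one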
Re your remark –
should be all good to go now with your base system setup
…that’s what I thought as well. On Lappy it seems to be true, but I’ve just discovered an anomaly on Tower that I fixed a day ago but which is now wrong again. I’ll create a subsequent post in this thread for it, for cleanliness.
…but the middle of the story is confusing & contradictory. I had actually used identical file contents on Lappy & Tower, & as I showed before, Lappy had no hassles acknowledging that its swappiness really had reduced to 1. But as I also showed before, that identical file on Tower [which a couple of days ago also resulted in a post-reboot “1”] had by yesterday reverted to 60… with [to say it again] the identical file.
Nevertheless I followed your hint accurately today & deleted everything in Tower’s version of the file after the “0” of
vm.vfs_cache_pressure=50
, rebooted, & voilà, the result is again “1”. However [repeating again], the other day it was also good & then went bad, so I wonder what it will do tomorrow… For now I have not bothered to edit that file on Lappy, as it still generates “1”. Weird.
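[A way to cross-check this without rebooting; the drop-in filename below is only a placeholder, since the actual file used here isn’t shown in the thread:]
# Hypothetical location, e.g. /etc/sysctl.d/99-swappiness.conf containing:
#   vm.swappiness=1
#   vm.vfs_cache_pressure=50
sudo sysctl --system           # re-apply every sysctl config file immediately
sysctl vm.swappiness           # confirm the value currently in effect
sysctl vm.vfs_cache_pressure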