Fresh install of Tumbleweed KDE on an Intel 660p 1TB SSD. Is it necessary to enable trim on openSUSE? I have always done this on my other distro installs with:
sudo systemctl enable fstrim.timer
I’d like to maximize the lifespan of my ssd. Thanks
Check with
systemctl status fstrim.timer
Personally, I disable it and just run the command (fstrim -Av) once a week.
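If you want that manual weekly run to happen on a schedule anyway, a plain cron entry can stand in for the timer. A sketch only; the fstrim path and the log file are assumptions, so check `which fstrim` on your system first:

```shell
# Hypothetical root crontab fragment (edit with: sudo crontab -e).
# Runs fstrim on all fstab filesystems every Sunday at 03:00 and
# keeps the verbose output for later inspection.
0 3 * * 0 /usr/sbin/fstrim -Av >> /var/log/fstrim.log 2>&1
```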
systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Sun 2019-09-01 13:18:34 EDT; 1h 13min ago
Trigger: Mon 2019-09-02 00:00:00 EDT; 9h left
Docs: man:fstrim
Does the above indicate that it is enabled on my system? Thanks
Yep. Here’s mine:
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; disabled; vendor preset: enabled)
Active: inactive (dead)
Trigger: n/a
Docs: man:fstrim
I’ve been on SSDs for as long as they’ve existed, and never did any of the things from ‘the internet’, thanks to Greg KH’s “kernel and systemd should take care of this”. Many people hack on their systems, manually recreating maintenance that was already automated. To add: none of the SSDs died from wear. In fact, apart from 2 that had a broken controller, all of them are still alive.
Hi
Likewise, the only thing I do is switch the I/O scheduler to the blk-mq group (bfq, or mq-deadline for SSDs) by adding scsi_mod.use_blk_mq=1 to the grub kernel boot options.
Thanks! Very helpful. openSUSE continues to impress with its default settings.
Appreciate the insight. I’m realizing that I don’t have to do anything to my system as a new openSUSE user. Very efficient.
Always learn something new from you Malcolm! lol!
I’m one of those. I’d much rather start a data-drive maintenance process when I know I’m not going to be using that drive and don’t intend to shut down my computer. The automated approach interfered with my boot and shutdown on multiple occasions; the extra effort involved in the manual approach is insignificant.
Is this specific for BTRFS or also usable on systems with EXT4-only?
Still trying to figure out what killed my 2-year old Intel SSD with wear-out of 90-something…
Hi
AFAIK it should be usable; have a read here, at the bottom of the page:
https://doc.opensuse.org/documentation/leap/tuning/html/book.sle.tuning/cha.tuning.io.html
A Google search on ext4+bfq should turn up other sources…
You might want to go directly to Intel support about that, as it should still be under warranty. Got the latest firmware etc?
This does nothing on Tumbleweed. Current kernel switched to multi-queue completely.
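Easy to verify on your own machine: the kernel exposes the per-device scheduler in sysfs, with the active one shown in square brackets. A read-only sketch to list them:

```shell
# List the available I/O schedulers for each block device;
# the currently active one is shown in [brackets].
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue          # skip if no sysfs entry matched
    dev=${f#/sys/block/}             # e.g. "sda/queue/scheduler"
    printf '%s: %s\n' "${dev%%/*}" "$(cat "$f")"
done
```

On a multi-queue kernel the list typically contains mq-deadline, bfq, kyber and none, with no boot options needed.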
All my 64bit OS filesystems are on EXT4. This is from 15.1:
# systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Thu 2019-08-22 00:36:15 EDT; 1 weeks 4 days ago
Trigger: Mon 2019-09-09 00:00:00 EDT; 6 days left
Docs: man:fstrim
Yes, it is enabled. However, you may want to check what fstrim.service actually does:
erlangen:~ # journalctl -b -u fstrim.service
-- Logs begin at Fri 2019-08-16 13:07:00 CEST, end at Mon 2019-09-02 12:55:40 CEST. --
Sep 01 19:49:10 erlangen systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Sep 01 19:50:26 erlangen fstrim[877]: /ArchLinux: 23.3 GiB (25054887936 bytes) trimmed on /dev/sdb2
Sep 01 19:50:26 erlangen fstrim[877]: /boot/efi: 84 MiB (88059904 bytes) trimmed on /dev/nvme0n1p4
Sep 01 19:50:26 erlangen fstrim[877]: /home-SSD: 132.7 GiB (142448631808 bytes) trimmed on /dev/sdb4
Sep 01 19:50:26 erlangen fstrim[877]: /Tumbleweed-SSD: 11.9 GiB (12767350784 bytes) trimmed on /dev/sdb3
Sep 01 19:50:26 erlangen fstrim[877]: /: 12 GiB (12861919232 bytes) trimmed on /dev/nvme0n1p2
Sep 01 19:50:26 erlangen systemd[1]: fstrim.service: Succeeded.
Sep 01 19:50:26 erlangen systemd[1]: Started Discard unused blocks on filesystems from /etc/fstab.
Sep 02 07:42:11 erlangen systemd[1]: Starting Discard unused blocks on filesystems from /etc/fstab...
Sep 02 07:42:19 erlangen fstrim[4312]: /ArchLinux: 0 B (0 bytes) trimmed on /dev/sdb2
Sep 02 07:42:19 erlangen fstrim[4312]: /boot/efi: 84 MiB (88059904 bytes) trimmed on /dev/nvme0n1p4
Sep 02 07:42:19 erlangen fstrim[4312]: /home-SSD: 0 B (0 bytes) trimmed on /dev/sdb4
Sep 02 07:42:19 erlangen fstrim[4312]: /Tumbleweed-SSD: 0 B (0 bytes) trimmed on /dev/sdb3
Sep 02 07:42:19 erlangen fstrim[4312]: /: 1.3 GiB (1357172736 bytes) trimmed on /dev/nvme0n1p2
Sep 02 07:42:19 erlangen systemd[1]: fstrim.service: Succeeded.
Sep 02 07:42:19 erlangen systemd[1]: Started Discard unused blocks on filesystems from /etc/fstab.
erlangen:~ #
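Note the second run a few hours later: mostly 0 B, because ext4 remembers which block groups were already discarded since mount. To total what a run actually reclaimed, the byte counts can be summed from saved output. A sketch, using two sample lines copied from the journal above:

```shell
# Two sample lines copied from the fstrim.service journal excerpt:
log='/boot/efi: 84 MiB (88059904 bytes) trimmed on /dev/nvme0n1p4
/: 1.3 GiB (1357172736 bytes) trimmed on /dev/nvme0n1p2'

# Extract each "(N bytes)" figure that fstrim prints in verbose
# mode and sum them.
sum=$(printf '%s\n' "$log" \
      | sed -n 's/.*(\([0-9]*\) bytes).*/\1/p' \
      | awk '{ s += $1 } END { print s }')
echo "$sum bytes trimmed in total"   # 88059904 + 1357172736 = 1445232640
```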
Firmware I check about once a year. Was up-to-date…
I never hand over used (defective) HDDs/SSDs to ANYBODY. Never. And it’s usually a pain to get away with a signed piece of paper that you can’t hand over the drive due to blah, blah, blah, or to send in only the frame (HDD) without controller/disks… So I will have to find out myself. Is it possible that doing no fstrim for 2 years kills off an Intel 120 GB SSD?
I find the output of fstrim not very helpful (fstrim -v / and fstrim -v /home): sometimes it takes 2 sec and says it has trimmed 26.7 GB, sometimes it takes 30 sec and says it has trimmed 2.5 GB. This all makes only very limited sense to me…
Hi
I’m referring to sending an email to Intel support to ask about your issue; maybe they will have some info?
Also, for the ATTRIBUTE_NAME you’re seeing, is that the VALUE or the RAW_VALUE? Can you post the outputs you’re concerned about?
It’s the Intel SSD which recently went into read-only (had a TW thread here); it had been running only TW for less than 2 years (24/7) as a workstation (no server, no extensive reads/writes). The WEAR-OUT indicator is 83… I never ran fstrim (manually).
sudo smartctl -a /dev/sdi
smartctl 7.0 2019-05-21 r4917 [x86_64-linux-5.2.10-1-default] (SUSE RPM)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Intel 53x and Pro 1500/2500 Series SSDs
Device Model: INTEL SSDSC2BW120H6
Serial Number: CVTRxxxxxxxxxxxxxxx
LU WWN Device Id: 5 5xxxxxxx 14xxxxxxx
Firmware Version: RG21
User Capacity: 120,034,123,776 bytes [120 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 (minor revision not indicated)
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Mon Sep 2 19:50:09 2019 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 2930) seconds.
Offline data collection
capabilities: (0x7f) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Abort Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 58) minutes.
Conveyance self-test routine
recommended polling time: ( 4) minutes.
SCT capabilities: (0x0025) SCT Status supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0032 100 100 000 Old_age Always - 0
9 Power_On_Hours_and_Msec 0x0032 100 100 000 Old_age Always - 14730h+00m+00.000s
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 367
170 Available_Reservd_Space 0x0033 081 100 010 Pre-fail Always - 0
171 Program_Fail_Count 0x0032 100 100 000 Old_age Always - 0
172 Erase_Fail_Count 0x0032 100 100 000 Old_age Always - 0
174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 167
183 SATA_Downshift_Count 0x0032 100 100 000 Old_age Always - 11
184 End-to-End_Error 0x0033 100 100 090 Pre-fail Always - 0
187 Uncorrectable_Error_Cnt 0x0032 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0032 025 100 000 Old_age Always - 25 (Min/Max 13/39)
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 167
199 UDMA_CRC_Error_Count 0x0032 100 100 000 Old_age Always - 0
225 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 47642
226 Workld_Media_Wear_Indic 0x0032 100 100 000 Old_age Always - 65535
227 Workld_Host_Reads_Perc 0x0032 100 100 000 Old_age Always - 40
228 Workload_Minutes 0x0032 100 100 000 Old_age Always - 65535
232 Available_Reservd_Space 0x0033 081 100 010 Pre-fail Always - 0
233 Media_Wearout_Indicator 0x0032 083 100 000 Old_age Always - 0
241 Host_Writes_32MiB 0x0032 100 100 000 Old_age Always - 47642
242 Host_Reads_32MiB 0x0032 100 100 000 Old_age Always - 31544
249 NAND_Writes_1GiB 0x0032 100 100 000 Old_age Always - 32467
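For what it’s worth, the raw values above can be turned into rough lifetime figures with some shell arithmetic. A sketch only; it assumes the units embedded in the attribute names (32 MiB and 1 GiB per raw count) are accurate for this model:

```shell
host_units=47642    # attribute 225/241: Host_Writes_32MiB (raw value)
nand_gib=32467      # attribute 249: NAND_Writes_1GiB (raw value)

host_gib=$(( host_units * 32 / 1024 ))       # host writes in GiB
echo "Host writes: ${host_gib} GiB"          # ~1.45 TiB in ~2 years
echo "NAND writes: ${nand_gib} GiB"          # ~31.7 TiB
echo "Write amplification: ~$(( nand_gib / host_gib ))x"
```

If those units hold, the drive wrote roughly 20x more to NAND than the host sent it, which would be consistent with never trimming a small, fairly full drive; whether that is what pushed the controller into read-only mode is something only Intel can say.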
Hi
So the only issue is that it went read-only at some point. You have been running it for two years; based on that value, I would expect you will get at least another 8+ years, as it counts down the NAND media cycles. Like I said, before you get concerned, contact the Intel folks to explain what those attributes and values mean…
Hi and thanks for having a look. But read-only is irreversible for this kind of storage, right?
Will have a look at the Intel service…