SSD settings: noatime/TRIM and the btrfs ssd mount option

It works after I ran it manually?! There were no previous records, and I still wonder why the counter kept resetting.

Maybe it is not active.
Run systemctl start fstrim.service and systemctl enable fstrim.service as root (with su -).
With systemctl list-units you will see the running units.
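For what it's worth, on systemd distributions the weekly trim is driven by fstrim.timer, which in turn triggers fstrim.service, so the timer is the unit to enable. A minimal sketch, assuming a systemd-based system and a root shell:

```shell
# Enable and start the weekly trim timer in one step
# (the timer triggers fstrim.service on schedule):
systemctl enable --now fstrim.timer

# Confirm the timer is scheduled:
systemctl list-timers fstrim.timer

# Optional one-off trim of all mounted filesystems that support discard:
fstrim -av
```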

I did run those, but not as root. It did not complain though, and it did start the timer, so I'm confused.

Hi
So, as the root user, what is the output from;


systemctl list-timers

On my 42.3 system I see;


systemctl list-timers

NEXT                         LEFT        LAST                         PASSED    UNIT                         ACTIVATES
Fri 2018-04-27 17:00:00 CDT  15min left  Fri 2018-04-27 16:00:01 CDT  44min ago snapper-timeline.timer       snapper-timeline.service
Sat 2018-04-28 00:00:00 CDT  7h left     Fri 2018-04-27 00:00:02 CDT  16h ago   logrotate.timer              logrotate.service
Sat 2018-04-28 00:02:08 CDT  7h left     Fri 2018-04-27 01:45:35 CDT  14h ago   backup-sysconfig.timer       backup-sysconfig.service
Sat 2018-04-28 00:25:20 CDT  7h left     Fri 2018-04-27 00:42:26 CDT  16h ago   check-battery.timer          check-battery.service
Sat 2018-04-28 00:36:48 CDT  7h left     Fri 2018-04-27 01:47:03 CDT  14h ago   backup-rpmdb.timer           backup-rpmdb.service
Sat 2018-04-28 10:37:01 CDT  17h left    Fri 2018-04-27 10:37:01 CDT  6h ago    snapper-cleanup.timer        snapper-cleanup.service
Sat 2018-04-28 10:42:01 CDT  17h left    Fri 2018-04-27 10:42:01 CDT  6h ago    systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2018-04-30 00:00:00 CDT  2 days left Thu 2018-04-26 21:44:31 CDT  18h ago   btrfs-balance.timer          btrfs-balance.service
Mon 2018-04-30 00:00:00 CDT  2 days left Thu 2018-04-26 21:44:31 CDT  18h ago   fstrim.timer                 fstrim.service
Tue 2018-05-01 00:00:00 CDT  3 days left Thu 2018-04-26 21:44:31 CDT  18h ago   btrfs-scrub.timer            btrfs-scrub.service


**linux-f0hw:/home/georgi #** systemctl list-timers  
NEXT                          LEFT           LAST                          PASSED       UNIT         
Sat 2018-04-28 22:51:42 EEST  21h left       Fri 2018-04-27 22:51:42 EEST  2h 10min ago systemd-tmpf
Mon 2018-04-30 00:00:00 EEST  1 day 22h left Mon 2018-04-23 04:09:56 EEST  4 days ago   fstrim.timer

**2 timers listed.**


but then it also shows


georgi@linux-f0hw:~> systemctl status fstrim.timer  
**●** fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
   Active: **active (waiting)** since Fri 2018-04-27 22:36:25 EEST; 2h 27min ago
     Docs: man:fstrim


@georgi7
I see that it works now (the timers indicate that).
I think you had problems because you did not run the commands with “su -” or “sudo”.
All the best.

So yes, it does seem to work once a week now. However, after the fstrim operation I see this:


georgi@linux-f0hw:~> iostat -m
Linux 4.4.126-48-default (linux-f0hw)   04/30/2018      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          11.69    0.00    1.50    1.37    0.00   85.44

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdc               0.00         0.00         0.00          8          0
sdb             166.54         0.01         1.21        592      56079
sda               1.66         0.06         2.00       2557      92640


sda is the SSD drive.

Why is there around 90 GB written to the SSD after fstrim? It makes no sense. I hope it's a misreading, because otherwise it's pretty bad.

Hi
That’s a total across the whole disk… what about /dev/sda2 (your /)?


cat /proc/diskstats
iostat -mh /dev/sda2


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
Tumbleweed - 20180425 | GNOME Shell 3.28.1 | 4.16.3-1-default
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

… and a cumulative total since the system was booted.

In any case, I do not see how TRIM would be related to the amount of data written, or what exactly this output is supposed to show.

The stat is gone since reboot, but it's definitely caused by trim, since I noticed it before when the service was run manually. Trim reports something like this:


georgi@linux-f0hw:~> sudo journalctl -u fstrim
[sudo] password for root:  
-- Logs begin at Sun 2018-04-22 04:13:40 EEST, end at Mon 2018-04-30 16:35:06 EEST. --
Apr 27 22:43:49 linux-f0hw systemd[1]: Starting Discard unused blocks...
Apr 27 22:43:59 linux-f0hw fstrim[4414]: /boot/grub2/i386-pc: 89.2 GiB (95820156928 bytes) trimmed
Apr 27 22:43:59 linux-f0hw systemd[1]: Started Discard unused blocks.
Apr 27 23:03:48 linux-f0hw systemd[1]: Starting Discard unused blocks...
Apr 27 23:03:49 linux-f0hw fstrim[4874]: /boot/grub2/i386-pc: 88.9 GiB (95473078272 bytes) trimmed
Apr 27 23:03:49 linux-f0hw systemd[1]: Started Discard unused blocks.
**-- Reboot --**
Apr 30 00:00:01 linux-f0hw systemd[1]: Starting Discard unused blocks...
Apr 30 00:00:05 linux-f0hw fstrim[12918]: /boot/grub2/x86_64-efi: 89.5 GiB (96102027264 bytes) trim
Apr 30 00:00:05 linux-f0hw systemd[1]: Started Discard unused blocks.


That's about 89 GB on average, and root usage is about 13 GB out of a 100 GB partition, which means those 89 GB are awfully close to the amount of free space on that partition. During normal usage of my PC, even an extended 10-12 h session, between 1-2 GB get written to sda (unless there are some updates available, which might bump that a little higher). A few days back, when we ran the trim service manually twice in a row, it measured about 190 GB written to sda. sda contains two things: a 100 GB root partition and 4 GB of swap, while /tmp is set up in RAM as tmpfs and /home is on sdb. There is just nothing else that could cause that kind of write, and the numbers reported by trim roughly match those values. If those numbers were true, it would mean my normal PC usage causes 20% of the wear on the SSD and fstrim causes the other 80%, which would be ridiculous.
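As a rough sanity check on the sizes above (a sketch using the figures from this thread): fstrim discards free blocks, so the size it reports should track the partition's free space rather than anything actually written.

```shell
# Figures from this thread: fstrim reported 95820156928 bytes trimmed on a
# 100 GB root partition with ~13 GB in use.
trimmed_bytes=95820156928
trimmed_gib=$(( trimmed_bytes / 1024 / 1024 / 1024 ))
free_gb=$(( 100 - 13 ))
echo "trimmed: ${trimmed_gib} GiB, free space: ~${free_gb} GB"
# prints: trimmed: 89 GiB, free space: ~87 GB
```

The two numbers land within a couple of GB of each other, which matches the suspicion that the "writes" are really the trimmed free space.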

Hi
What does the output from smartctl -a /dev/sda say about disk lifetime
read/writes?

My systems write ~5 GB to the disk a day… well under the SSD spec of 20 GB a day… I have not had an SSD wear out yet…

Use the -p option to see each partition;


iostat -mhp /dev/sda



Would you clarify how you calculate this? Is it based on LBAs written divided by power-on hours? If so, how do you find the LBA size? Here I see:

# smartctl -a /dev/sda
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.4.126-48-default] (SUSE RPM)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 850 EVO 500GB
Serial Number:    S3PTNB0J902531B
LU WWN Device Id: 5 002538 d423a24a8
Firmware Version: EMT03B6Q
User Capacity:    500.107.862.016 bytes [500 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Apr 30 15:22:34 2018 -03
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...
SMART Attributes Data Structure revision number: 1                                                                  
Vendor Specific SMART Attributes with Thresholds:                                                                   
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE                    
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0                            
  9 **Power_On_Hours**          0x0032   099   099   000    Old_age   Always       -      ** 2410**                         
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       60                           
177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       1                            
179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    Pre-fail  Always       -       0                            
181 Program_Fail_Cnt_Total  0x0032   100   100   010    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   010    Old_age   Always       -       0
183 Runtime_Bad_Block       0x0013   100   100   010    Pre-fail  Always       -       0
187 Uncorrectable_Error_Cnt 0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0032   065   056   000    Old_age   Always       -       35
195 ECC_Error_Rate          0x001a   200   200   000    Old_age   Always       -       0
199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0
235 POR_Recovery_Count      0x0012   099   099   000    Old_age   Always       -       7
241 **Total_LBAs_Written**      0x0032   099   099   000    Old_age   Always       -       **1875995429**

SMART Error Log Version: 1
No Errors Logged
...

Thanks!

Hi
Yes, so the calculation for your SSD is;

((1875995429 x 512) / 1024 / 1024 / 1024) / 2410 = 0.371 GiB/hr x 24 ≈ 8.9 GiB per day
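The same arithmetic as a one-liner, in case anyone wants to plug in their own numbers (a sketch; the raw values below are the ones from the smartctl output above, and the 512-byte logical sector size comes from its Sector Size line):

```shell
# total bytes = Total_LBAs_Written (attr 241) x logical sector size (512),
# then divide by Power_On_Hours (attr 9) and scale to a day:
awk 'BEGIN {
    lbas  = 1875995429   # raw value of attribute 241
    hours = 2410         # raw value of attribute 9
    gib   = lbas * 512 / 1024 / 1024 / 1024
    printf "%.1f GiB total, %.1f GiB/day\n", gib, gib / hours * 24
}'
# prints: 894.5 GiB total, 8.9 GiB/day
```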

You need to check the manufacturer specs if it’s an unknown attribute…

There are online calculators, e.g.;



Ah, so what Samsung/SMART calls an LBA (logical block address) equals the logical sector size, 512 bytes, in this case.

Thank you Malcolm!

I don't see lifetime data here


smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.4.126-48-default] (SUSE RPM)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     TS128GSSD230S
Serial Number:    010642B5E22794500152
Firmware Version: Q0518F3S
User Capacity:    128,035,676,160 bytes [128 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 (minor revision not indicated)
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Tue May  1 04:53:10 2018 EEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever  
                                        been run.
Total time to complete Offline  
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x71) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0002) Does not save SMART data before
                                        entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine  
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  10) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0000   100   100   000    Old_age   Offline      -       0
  5 Reallocated_Sector_Ct   0x0000   100   100   000    Old_age   Offline      -       0
  9 Power_On_Hours          0x0000   100   100   000    Old_age   Offline      -       174
 12 Power_Cycle_Count       0x0000   100   100   000    Old_age   Offline      -       21
160 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
161 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       35
163 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       10
164 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       166
165 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       9
166 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
167 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
148 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       601
149 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       40
150 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
151 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       17
159 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
168 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       1500
169 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       100
177 Wear_Leveling_Count     0x0000   100   100   050    Old_age   Offline      -       0
181 Program_Fail_Cnt_Total  0x0000   100   100   000    Old_age   Offline      -       0
182 Erase_Fail_Count_Total  0x0000   100   100   000    Old_age   Offline      -       0
192 Power-Off_Retract_Count 0x0000   100   100   000    Old_age   Offline      -       1
194 Temperature_Celsius     0x0000   100   100   000    Old_age   Offline      -       40
195 Hardware_ECC_Recovered  0x0000   100   100   000    Old_age   Offline      -       18
196 Reallocated_Event_Count 0x0000   100   100   016    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0000   100   100   050    Old_age   Offline      -       0
232 Available_Reservd_Space 0x0000   100   100   000    Old_age   Offline      -       100
241 Total_LBAs_Written      0x0000   100   100   000    Old_age   Offline      -       980
242 Total_LBAs_Read         0x0000   100   100   000    Old_age   Offline      -       2738
245 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       1494

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
    6        0    65535  Read_scanning was never started
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


Swap gets maybe 1 MB written, so basically everything written to sda that you see in the previous reports has been written to the root partition.

Hi
I tune swap usage down in /etc/sysctl.conf so the system prefers RAM…


# Keep swap usage to a minimum (this does not disable swap)
vm.swappiness = 1
vm.vfs_cache_pressure = 50
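To check what is currently in effect (the /proc paths below are the standard Linux sysctl interface; settings in /etc/sysctl.conf persist across boots):

```shell
# Current in-kernel values, readable by any user:
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure

# To apply /etc/sysctl.conf changes without rebooting, as root:
#   sysctl -p
```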

It’s attribute 241 Total_LBAs_Written for your device too, but you need to check with the manufacturer, because a raw value of 980 isn’t plausible as plain sectors (it may be correct but need a multiplier)…?
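A quick back-of-the-envelope check shows why the raw value of 980 cannot be plain 512-byte sectors (a sketch; the actual unit for attribute 241 on this drive is vendor-specific and would need to be confirmed with Transcend):

```shell
# If attribute 241 counted 512-byte sectors, the drive would have written
# less than half a MiB in its whole life, which is implausible. Some
# controllers use larger units for this attribute (e.g. 1 GiB per count).
awk 'BEGIN {
    raw = 980
    printf "as 512-byte sectors: %.2f MiB\n", raw * 512 / 1024 / 1024
    printf "as 1 GiB units:      %d GiB\n", raw
}'
```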

Thanks Malcolm. Good to know.
I have a Samsung 960 EVO. With an NVMe drive things are simpler.

smartctl -a /dev/nvme0n1
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.16.4-1-default] (SUSE RPM)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Samsung SSD 960 EVO 250GB
Serial Number:                      S3ESNX0JB30991T
Firmware Version:                   3B7QCXE7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 250,059,350,016 [250 GB]
Unallocated NVM Capacity:           0
Controller ID:                      2
Number of Namespaces:               1
Namespace 1 Size/Capacity:          250,059,350,016 [250 GB]
Namespace 1 Utilization:            38,281,891,840 [38.2 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 5b71b1216c
Local Time is:                      Tue May  1 07:29:01 2018 EEST
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0007):   Security Format Frmw_DL
Optional NVM Commands (0x001f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat
Maximum Data Transfer Size:         512 Pages
Warning  Comp. Temp. Threshold:     77 Celsius
Critical Comp. Temp. Threshold:     79 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.04W       -        -    0  0  0  0        0       0
 1 +     5.09W       -        -    1  1  1  1        0       0
 2 +     4.08W       -        -    2  2  2  2        0       0
 3 -   0.0400W       -        -    3  3  3  3      210    1500
 4 -   0.0050W       -        -    4  4  4  4     2200    6000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0x1)
Critical Warning:                   0x00
Temperature:                        31 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    0%
**Data Units Read:                    1,279,185 [654 GB]**
**Data Units Written:                 1,838,532 [941 GB]**
Host Read Commands:                 21,443,472
Host Write Commands:                19,915,474
Controller Busy Time:               69
Power Cycles:                       271
Power On Hours:                     97
Unsafe Shutdowns:                   28
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               31 Celsius
Temperature Sensor 2:               40 Celsius

Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged

PS: There is an error. For sure I did not have only 97 power-on hours. :slight_smile:

Does that look like a lot, or … ?!

I have only 4 GB of RAM, so I will keep swap on as an option, but I have swappiness set very low.

Hi
You would need to ask the SSD manufacturer; send an email to their support with the smartctl output. Perhaps check for a firmware update as well…

I did investigate the iostat command a little bit. It turns out it does not guess: it reads per-device counters from /proc/diskstats. From what I can tell, on older kernels (like the 4.4 one here) discard requests issued by fstrim are accounted in the same counters as ordinary writes, since separate discard accounting was only added to the kernel later. That would explain why the "data written" figure after a trim is so unrealistic and matches the trimmed free space rather than anything actually written to the flash.
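For reference, here is where those counters come from. A minimal sketch parsing one made-up /proc/diskstats sample line (field layout per the kernel's Documentation/iostats.txt; field 10 is sectors written, and on older kernels discarded sectors apparently end up in that same field, which would explain the inflated numbers):

```shell
# Field 10 of a /proc/diskstats line is sectors written; multiply by 512
# to get bytes. The sample values below are made up for illustration.
echo "   8       0 sda 4321 12 99904 800 9876 34 194560 1500 0 2100 2300" |
awk '{ printf "%s: %.1f MiB written\n", $3, $10 * 512 / 1024 / 1024 }'
# prints: sda: 95.0 MiB written
```

On a real system you would pipe `cat /proc/diskstats` into the same awk filter instead of the echoed sample line.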