mrmazda
January 18, 2021, 7:00pm
#1
Silicon Power Ace A55 2.5" 256GB SATA III 3D TLC Internal Solid State Drive (SSD) SU256GBSS3A55S25NB
https://www.enostech.com/silicon-power-ace-a55-256gb-ssd-review/
ara88:~ # fdisk -l
Disk /dev/sda: 238.49 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SPCCSolidStateDi
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5B330B05-5C88-4DF5-8359-67FF3E52CDC4
Device Start End Sectors Size Type
/dev/sda1 2048 657407 655360 320M EFI System
/dev/sda2 657408 4245503 3588096 1.7G Linux swap
/dev/sda3 4245504 5064703 819200 400M Linux filesystem
/dev/sda4 5064704 13256703 8192000 3.9G Linux filesystem
/dev/sda5 13256704 26363903 13107200 6.3G Linux filesystem
/dev/sda6 26363904 52168703 25804800 12.3G Linux filesystem
/dev/sda7 52168704 68552703 16384000 7.8G Linux filesystem **# Tumbleweed**
/dev/sda8 68552704 84936703 16384000 7.8G Linux filesystem
/dev/sda9 84936704 101320703 16384000 7.8G Linux filesystem
/dev/sda10 101320704 117704703 16384000 7.8G Linux filesystem
/dev/sda11 117704704 134088703 16384000 7.8G Linux filesystem
/dev/sda12 134088704 150472703 16384000 7.8G Linux filesystem
/dev/sda13 150472704 166856703 16384000 7.8G Linux filesystem
/dev/sda14 166856704 183240703 16384000 7.8G Linux filesystem
/dev/sda15 183240704 199624703 16384000 7.8G Linux filesystem
/dev/sda16 199624704 216008703 16384000 7.8G Linux filesystem
/dev/sda17 216008704 232392703 16384000 7.8G Linux filesystem
/dev/sda18 232392704 248776703 16384000 7.8G Linux filesystem
ara88:~ # fstrim -av
/pub: 6.2 GiB (6683521024 bytes) trimmed on /dev/sda6
/home: 5.4 GiB (5838065664 bytes) trimmed on /dev/sda5
/boot/efi: 308.2 MiB (323158016 bytes) trimmed on /dev/sda1
/: 3.2 GiB (3449491456 bytes) trimmed on /dev/sda7
ara88:~ # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda7 7.7G 4.5G 2.9G 62% / **# Tumbleweed**
/dev/sda1 320M 12M 309M 4% /boot/efi
/dev/sda5 6.2G 718M 5.4G 12% /home
/dev/sda6 13G 6.0G 6.1G 50% /pub
ara88:~ # hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 1486 MB in 3.00 seconds = 494.81 MB/sec
ara88:~ # hdparm -t **/dev/sda7 # stw**
/dev/sda7:
Timing buffered disk reads: 262 MB in 3.01 seconds = **86.97 MB/sec**
ara88:~ # hdparm -t /dev/sda6
/dev/sda6:
Timing buffered disk reads: 982 MB in 3.00 seconds = 327.15 MB/sec
ara88:~ # hdparm -t /dev/sda5
/dev/sda5:
Timing buffered disk reads: 1004 MB in 3.00 seconds = 334.57 MB/sec
ara88:~ # hdparm -t **/dev/sda7 # stw**
/dev/sda7:
Timing buffered disk reads: 260 MB in 3.01 seconds = **86.42 MB/sec**
ara88:~ # hdparm -t /dev/sda8 # s15.0
/dev/sda8:
Timing buffered disk reads: 846 MB in 3.00 seconds = 281.56 MB/sec
ara88:~ # hdparm -t /dev/sda9 # s15.1
/dev/sda9:
Timing buffered disk reads: 750 MB in 3.01 seconds = 249.38 MB/sec
ara88:~ # hdparm -t /dev/sda10 # Debian 10
/dev/sda10:
Timing buffered disk reads: 560 MB in 3.00 seconds = 186.36 MB/sec
ara88:~ # hdparm -t /dev/sda11 # s15.2
/dev/sda11:
Timing buffered disk reads: 340 MB in 3.01 seconds = 112.89 MB/sec
ara88:~ # hdparm -t /dev/sda12 # Buntu 20.04
/dev/sda12:
Timing buffered disk reads: 692 MB in 3.00 seconds = 230.41 MB/sec
ara88:~ # hdparm -t /dev/sda13 # Mint 20.1
/dev/sda13:
Timing buffered disk reads: 232 MB in 3.01 seconds = 77.10 MB/sec
ara88:~ # hdparm -t /dev/sda14 # Debian 11
/dev/sda14:
Timing buffered disk reads: 460 MB in 3.01 seconds = 152.97 MB/sec
ara88:~ # hdparm -t /dev/sda15 # F33
/dev/sda15:
Timing buffered disk reads: 950 MB in 3.00 seconds = 316.40 MB/sec
ara88:~ # hdparm -t /dev/sda16 # F34
/dev/sda16:
Timing buffered disk reads: 824 MB in 3.00 seconds = 274.47 MB/sec
ara88:~ # hdparm -t /dev/sda17 # Buntu 18.04
/dev/sda17:
Timing buffered disk reads: 690 MB in 3.00 seconds = 229.73 MB/sec
ara88:~ # hdparm -t /dev/sda18 # s15.3
/dev/sda18:
Timing buffered disk reads: 672 MB in 3.00 seconds = 223.91 MB/sec
All partitions tested are EXT4.
Smartctl reports 140 power-on hours and no failure attributes. Is this much variation by partition normal? Should I be seeking an RMA? Is there a better way to test SSD speed?
mrmazda:
Silicon Power Ace A55 2.5" 256GB SATA III 3D TLC Internal Solid State Drive (SSD) SU256GBSS3A55S25NB
https://www.enostech.com/silicon-power-ace-a55-256gb-ssd-review/
All partitions tested are EXT4.
Smartctl reports 140 power-on hours and no failure attributes. Is this much variation by partition normal? Should I be seeking an RMA? Is there a better way to test SSD speed?
Hi
The fio command may shed better info: https://www.arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/ A Google search for “test hard disk speed with fio” turns up other links to tests…
Then of course there is the documentation: https://fio.readthedocs.io/en/latest/fio_doc.html I’ve used fio.bash in the past… (See my old blog post for the link: https://forums.opensuse.org/entry.php/159-Setting-up-bcache-on-openSUSE-13-2 ).
Looking at the 7-year-old post –
hdparm reported 2211 MB/s for cached reads and 194 MB/s for buffered reads.
fio reported 182999 KB/s (183 MB/s) for sequential reads and 159028 KB/s (159 MB/s) for random reads.
Therefore, fio seems to be reporting the equivalent of buffered reads …
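For anyone wanting to try it, a minimal read-only fio invocation against a raw device might look like the following sketch (the job names and target device are illustrative; --readonly guards against accidental writes):
# fio --name=seqread --filename=/dev/sda --readonly --direct=1 --rw=read --bs=1M --size=1g
# fio --name=randread --filename=/dev/sda --readonly --direct=1 --rw=randread --bs=4k --size=1g
The first job approximates hdparm’s sequential buffered reads; the second measures random 4k reads, which hdparm cannot.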
On this system, with an ext4 root partition on the SSD, the results are as follows:
# lsblk --fs
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1 vfat 2539-7D44 492,2M 2% /boot/efi
├─sda2 ext4 c59a64bf-b464-4ea2-bf3a-d3fd9dded03f 67,9G 31% /
└─sda3 swap e96bee61-d116-4700-a761-c72153babfde [SWAP]
#
# inxi --admin --disk
Drives:
ID-1: /dev/sda vendor: Intenso model: SSD Sata III size: 111.79 GiB block size: physical: 512 B logical: 512 B
sata: 3.2 speed: 6.0 Gb/s serial: AA000000000000035990 rev: 2A0 scheme: GPT
SMART: yes state: enabled health: PASSED on: 54d 6h cycles: 227 read: 9.6 MiB written: 1.4 MiB
#
# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 1572 MB in 3.00 seconds = 523.68 MB/sec
#
# hdparm -t /dev/sda2
/dev/sda2:
Timing buffered disk reads: 1542 MB in 3.00 seconds = 513.97 MB/sec
#
Just for comparison – older Lenovo Laptop with a Seagate FireCuda SSHD and Btrfs system partition –
# lsblk --fs
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1 vfat 85FD-C13A 148,2M 5% /boot/efi
├─sda2 swap c0a219eb-5ea7-470b-9297-3e0d70964554 [SWAP]
├─sda3 btrfs fa6a0367-a191-447e-8298-35761792b861 54,5G 31% /
└─sda4 xfs 31ee5d31-773e-4a8f-8862-876413323c28 651,2G 23% /home
sr0
#
# inxi --admin --disk
Drives: Local Storage: total: 931.51 GiB used: 216.96 GiB (23.3%)
ID-1: /dev/sda vendor: Seagate model: ST1000LX015-1U7172 family: FireCuda 2.5 size: 931.51 GiB
block size: physical: 4096 B logical: 512 B sata: 3.1 speed: 6.0 Gb/s rotation: 5400 rpm serial: WES3V1CF
rev: SDM1 temp: 24 C scheme: GPT
SMART: yes state: enabled health: PASSED on: 50d 16h cycles: 732 read: 6.25 TiB written: 3.19 TiB
Pre-Fail: attribute: Spin_Retry_Count value: 100 worst: 100 threshold: 97
#
# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 382 MB in 3.00 seconds = 127.13 MB/sec
# hdparm -t /dev/sda3
/dev/sda3:
Timing buffered disk reads: 392 MB in 3.01 seconds = 130.07 MB/sec
# hdparm -t /dev/sda4
/dev/sda4:
Timing buffered disk reads: 350 MB in 3.01 seconds = 116.24 MB/sec
#
BTW, the cached reads are about the same as those for an SSD –
# hdparm -T /dev/sda
/dev/sda:
Timing cached reads: 4704 MB in 2.00 seconds = 2353.31 MB/sec
# hdparm -T /dev/sda3
/dev/sda3:
Timing cached reads: 4656 MB in 2.00 seconds = 2329.43 MB/sec
# hdparm -T /dev/sda4
/dev/sda4:
Timing cached reads: 4796 MB in 2.00 seconds = 2398.82 MB/sec
#
@mrmazda :
Given the consistency of my results (after I removed and then re-installed the Laptop’s SSHD – the Desktop is only a few months old), what happens if your SSD’s connector is reseated?
mrmazda:
Silicon Power Ace A55 2.5" 256GB SATA III 3D TLC Internal Solid State Drive (SSD) SU256GBSS3A55S25NB
https://www.enostech.com/silicon-power-ace-a55-256gb-ssd-review/
All partitions tested are EXT4.
Smartctl reports 140 power-on hours and no failure attributes. Is this much variation by partition normal? Should I be seeking an RMA? Is there a better way to test SSD speed?
Results are abysmal. Typical values for an 850 EVO 250GB are:
**3400G:~ #** hdparm -t /dev/sda
sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sda8 sda9
**3400G:~ #** hdparm -t /dev/sda1
/dev/sda1:
Timing buffered disk reads: 100 MB in 0.19 seconds = 525.19 MB/sec
**3400G:~ #** hdparm -t /dev/sda2
/dev/sda2:
Timing buffered disk reads: 1576 MB in 3.00 seconds = 525.23 MB/sec
**3400G:~ #** hdparm -t /dev/sda9
/dev/sda9:
Timing buffered disk reads: 1598 MB in 3.00 seconds = 532.44 MB/sec
**3400G:~ #**
hdparm tests the interface, not the filesystem. You may want to test the drive on different hardware before seeking an RMA.
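To separate position-on-disk effects from anything partition-related, recent hdparm versions can also time reads at an arbitrary offset into the raw device (the --offset argument is in GiB); a sketch:
# hdparm -t /dev/sda
# hdparm -t --offset 100 /dev/sda
# hdparm -t --offset 200 /dev/sda
That way the same LBA regions can be compared with no filesystem or partition table in the picture.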
dcurtisfra:
BTW, the cached reads are about the same as those for an SSD –
# hdparm -T /dev/sda
/dev/sda:
Timing cached reads: 4704 MB in 2.00 seconds = 2353.31 MB/sec
# hdparm -T /dev/sda3
/dev/sda3:
Timing cached reads: 4656 MB in 2.00 seconds = 2329.43 MB/sec
# hdparm -T /dev/sda4
/dev/sda4:
Timing cached reads: 4796 MB in 2.00 seconds = 2398.82 MB/sec
#
Cached reads test memory speed. Your laptop has lame RAM and a lame CPU bus. The 6700K dates from Q3’15, but sports some 34.1 GB/s of memory bandwidth, resulting in:
**erlangen:~ #** hdparm -T /dev/sdc
/dev/sdc:
Timing cached reads: 38398 MB in 1.99 seconds = 19335.91 MB/sec
**erlangen:~ #**
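As a rough cross-check of that claim, memory bandwidth can also be measured directly rather than inferred from hdparm -T, for example with sysbench if it happens to be installed (parameters are illustrative):
# sysbench memory --memory-block-size=1M --memory-total-size=10G run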
mrmazda
January 20, 2021, 6:44am
#7
I was able to reach through the case hole left by the missing floppy drives to work the connector on the SSD. Swapping the cable and its other end will have to wait until I have the space, time, and a known-good cable to dig the PC out from its ensconcement and try more. This SSD was new in June, but only put in service in mid-October. I ran some more tests after the reseat today:
2021-01-18 runs | 2021-01-20 runs
ara88:~ # hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 1486 MB in 3.00 seconds = 494.81 MB/sec 494 494 499 497 496
ara88:~ # hdparm -t /dev/sda1
/dev/sda1:
Timing buffered disk reads: 508 484 488 482 483
ara88:~ # hdparm -t /dev/sda3
/dev/sda3:
Timing buffered disk reads: 290 289
ara88:~ # hdparm -t /dev/sda4
/dev/sda4:
Timing buffered disk reads: 301 322
ara88:~ # hdparm -t /dev/sda6
/dev/sda6:
Timing buffered disk reads: 982 MB in 3.00 seconds = 327.15 MB/sec 339 350
ara88:~ # hdparm -t **/dev/sda7 # stw**
/dev/sda7:
Timing buffered disk reads: 262 MB in 3.01 seconds = **86.97 MB/sec** 64 62 61 62 61
ara88:~ # hdparm -t **/dev/sda7 # stw**
/dev/sda7:
Timing buffered disk reads: 260 MB in 3.01 seconds = **86.42 MB/sec**
ara88:~ # hdparm -t /dev/sda8 # s15.0
/dev/sda8:
Timing buffered disk reads: 846 MB in 3.00 seconds = 281.56 MB/sec 270 309
ara88:~ # hdparm -t /dev/sda9 # s15.1
/dev/sda9:
Timing buffered disk reads: 750 MB in 3.01 seconds = 249.38 MB/sec 258 273
ara88:~ # hdparm -t /dev/sda10 # Debian 10
/dev/sda10:
Timing buffered disk reads: 560 MB in 3.00 seconds = 186.36 MB/sec 190 199
ara88:~ # hdparm -t /dev/sda11 # s15.2
/dev/sda11:
Timing buffered disk reads: 340 MB in 3.01 seconds = 112.89 MB/sec 121 149
ara88:~ # hdparm -t /dev/sda12 # Buntu 20.04
/dev/sda12:
Timing buffered disk reads: 692 MB in 3.00 seconds = 230.41 MB/sec 229 252
ara88:~ # hdparm -t /dev/sda13 # Mint 20.1
/dev/sda13:
Timing buffered disk reads: 232 MB in 3.01 seconds = 77.10 MB/sec 85 75 79 79 75
ara88:~ # hdparm -t /dev/sda14 # Debian 11
/dev/sda14:
Timing buffered disk reads: 460 MB in 3.01 seconds = 152.97 MB/sec 159 157
ara88:~ # hdparm -t /dev/sda15 # F33
/dev/sda15:
Timing buffered disk reads: 950 MB in 3.00 seconds = 316.40 MB/sec 319 332
ara88:~ # hdparm -t /dev/sda16 # F34
/dev/sda16:
Timing buffered disk reads: 824 MB in 3.00 seconds = 274.47 MB/sec 272 266
ara88:~ # hdparm -t /dev/sda17 # Buntu 18.04
/dev/sda17:
Timing buffered disk reads: 690 MB in 3.00 seconds = 229.73 MB/sec 233 175
ara88:~ # hdparm -t /dev/sda18 # s15.3
/dev/sda18:
Timing buffered disk reads: 672 MB in 3.00 seconds = 223.91 MB/sec 222 221
All partitions tested are EXT4, except sda1, which is the VFAT ESP.
I see variance among partitions as a real problem; variance from run to run, overall, less so. But as I mostly booted only TW between my OP and now, it looks like the more use, the more slowdown, and quickly so. Also noteworthy: running fstrim without the unneeded partitions mounted took seriously longer than the other runs. I ran more hdparm -t on sda7 after the fstrim: 62 63 62 62 61. On sda: 496 501 497 495 496.
Edit: I did not know about posts 5 & 6 until after I posted this.
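For reference, fstrim can only trim mounted filesystems, so a one-off trim of a normally unmounted partition means mounting it temporarily. The device and mountpoint here are placeholders:
# mount /dev/sdXN /mnt && fstrim -v /mnt && umount /mnt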
@mrmazda :
Looking at some Tumbleweed boot times posted by karlmistelberger, it seems that an NVMe drive plugged into a PCI Express slot exceeds the I/O performance achievable over a SATA cable by a large margin …
Hi
This Tumbleweed setup was installed in late 2019… (the NVMe drive is in a PCIe x4 adapter card)
systemd-analyze
Startup finished in 2.240s (kernel) + 1.847s (initrd) + 2.681s (userspace) = 6.769s
graphical.target reached after 2.672s in userspace
System: Kernel: 5.10.7-1-default x86_64 bits: 64 compiler: N/A Desktop: Gnome 3.38.2 wm: gnome-shell dm: GDM
Distro: openSUSE Tumbleweed 20210118
CPU: Topology: Quad Core model: Intel Xeon E3-1245 V2 bits: 64 type: MT MCP arch: Ivy Bridge rev: 9 L1 cache: 256 KiB
L2 cache: 8192 KiB L3 cache: 8000 KiB
flags: avx lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 54273
Speed: 1597 MHz min/max: 1600/3800 MHz Core speeds (MHz): 1: 1597 2: 1596 3: 1597 4: 1999 5: 1599 6: 1654 7: 1597
8: 1597
lsblk --fs
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda
├─sda1 vfat efi D042-5F86 252.1M 3% /boot/efi
├─sda2 btrfs boot a1dbf17b-ac1f-411d-8104-13a6c1249a13 494.2M 24% /boot
├─sda3 xfs stuff 0b48f61b-84bd-427a-86bc-b4f02ebe4be9 164.6G 28% /stuff
└─sda4 swap fceaf89a-28c9-4b66-a61b-dd8ec9814c4d [SWAP]
nvme0n1
├─nvme0n1p1 btrfs tumbleweed f284e754-2ad3-447c-a734-b1b2a0209563 11G 70% /
└─nvme0n1p2 xfs data c72d0613-29ec-4b04-a2f7-a0280f453c84 130G 33% /data
inxi --admin --disk
Drives: Local Storage: total: 465.77 GiB used: 156.38 GiB (33.6%)
ID-1: /dev/nvme0n1 vendor: Western Digital model: WDS250G1B0C-00S6U0 size: 232.89 GiB block size: physical: 512 B
logical: 512 B speed: 15.8 Gb/s lanes: 2 serial: 191113442407 rev: 201000WD temp: 36 C scheme: GPT
SMART: yes health: PASSED on: 1y 121d 14h cycles: 223 read-units: 4,787,945 [2.45 TB]
written-units: 13,759,246 [7.04 TB]
ID-2: /dev/sda vendor: Western Digital model: WDS250G2B0B-00YS70 family: WD Blue / Red / Green SSDs
size: 232.89 GiB block size: physical: 512 B logical: 512 B sata: 3.3 speed: 6.0 Gb/s serial: 1812A3801583
rev: 30WD temp: 34 C scheme: GPT
SMART: yes state: enabled health: PASSED on: 1y 67d 1h cycles: 136
hdparm -t /dev/sda1
/dev/sda1:
Timing buffered disk reads: 260 MB in 0.50 seconds = 521.19 MB/sec
hdparm -t /dev/sda2
/dev/sda2:
Timing buffered disk reads: 768 MB in 1.44 seconds = 531.66 MB/sec
hdparm -t /dev/sda3
/dev/sda3:
Timing buffered disk reads: 1604 MB in 3.00 seconds = 534.46 MB/sec
hdparm -t /dev/nvme0n1p1
/dev/nvme0n1p1:
Timing buffered disk reads: 1702 MB in 3.02 seconds = 564.47 MB/sec
hdparm -t /dev/nvme0n1p2
/dev/nvme0n1p2:
Timing buffered disk reads: 3334 MB in 3.00 seconds = 1110.87 MB/sec
hdparm -T /dev/sda
/dev/sda:
Timing cached reads: 26006 MB in 1.99 seconds = 13067.88 MB/sec
hdparm -T /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 25400 MB in 1.99 seconds = 12762.95 MB/sec
mrmazda
January 21, 2021, 8:27am
#10
I don’t like yours any more than I like mine in the OP here. Another machine here with NVMe is a similarly puzzling disappointment:
# inxi -ISCy
System:
Host: ab250 Kernel: 5.8.14-1-default x86_64 bits: 64
Desktop: Trinity R14.0.8 Distro: openSUSE Tumbleweed 20201014
CPU:
Info: Dual Core model: Intel Pentium G4600 bits: 64 type: MT MCP
L2 cache: 3 MiB
Speed: 800 MHz min/max: 800/3600 MHz Core speeds (MHz): 1: 800 2: 800 3: 800
4: 802
Info:...Shell: Bash **inxi: 3.2.01**
# inxi -day
Drives:
Local Storage: total: 119.24 GiB used: 29.73 GiB (24.9%)
ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: ZTC model: PCIEG3-128G
size: 119.24 GiB block size: physical: 512 B logical: 512 B speed: 31.6 Gb/s
lanes: 4 serial: 979021901256 rev: R0629A0 temp: 37.9 C
SMART: yes health: PASSED on: 1 hrs cycles: 189 read-units: 882,909 [452 GB]
written-units: 466,531 [238 GB]
Optical-1: /dev/sr0 vendor: Optiarc model: DVD RW AD-7200S rev: 1.06
dev-links: cdrom,cdrw,dvd,dvdrw
Features: speed: 48 multisession: yes audio: yes dvd: yes
rw: cd-r,cd-rw,dvd-r,dvd-ram state: running
# lsblk --fs
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sr0
nvme0n1
├─nvme0n1p1 vfat ZM2P01ESP 20A0-2A08 309.1M 3% /boot/efi
├─nvme0n1p2 swap zm2p02swap b680fedf-0413-4b4e-9f05-e2236583ab55
├─nvme0n1p3 ext2 zm2p03res 37701fcc-9448-418d-9ba1-bc231c921acd 105.6M 73% /disks/res
├─nvme0n1p4 ext4 zm2p04usrlcl 840c447a-f699-4f83-aadb-d86fa59401e2 2.2G 42% /usr/local
├─nvme0n1p5 ext4 zm2p05home b03449e0-f033-4a36-a3d0-0c522bf4671a 5.5G 9% /home
├─nvme0n1p6 ext4 zm2p06pub 6c88d146-e0de-438c-af47-e42a9f4e48ff 6.1G 49% /pub
├─nvme0n1p7 ext4 zm2p07stw 48c748eb-dcdc-4d41-a145-97ccd8465147 3G 56% /
├─nvme0n1p8 ext4 zm2p08s150 fad1f58b-9e41-4f78-992c-06ce64577af8 992.3M 82% /disks/s150
├─nvme0n1p9 ext4 zm2p09s151 efe751cb-be1d-4cda-bef6-35e09dde6f16 1.5G 75% /disks/s151
├─nvme0n1p10 ext4 zm2p10deb10 057b7d14-bffc-466f-a631-057e6404a38f
├─nvme0n1p11 ext4 zm2p11s152 506b8ec1-9707-4b9c-b495-198e971b18e2 2.3G 65% /disks/s152
├─nvme0n1p12 ext4 zm2p12ub2004 96634c08-f9b0-4b0b-a84e-a2ac735ac0aa
├─nvme0n1p13 ext4 zm2p13mint19 a26c4c6a-2492-478f-a558-bb4b8c487ad6
├─nvme0n1p14 ext4 zm2p14deb11 7b2046d1-558b-4a56-8315-32119eb6c1c4
├─nvme0n1p15 ext4 zm2p15f33 f5dba4cc-525f-48c1-becd-586a48e8b805
├─nvme0n1p16 ext4 zm2p16s152 ab2bec6c-6a0a-46bc-8765-6270ccbbc6b2
├─nvme0n1p17 ext4 zm2p17ub1804 0a9f756f-8f78-4992-8e91-58185f3b36e0
└─nvme0n1p18 ext4 zm2p18f34 5f04914d-d495-40fd-9f06-3f3eab72a1dd
# hdparm -t /dev/nvme0n1
/dev/nvme0n1:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3214 MB in 3.00 seconds = 1071.21 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p1
/dev/nvme0n1p1:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 320 MB in 0.13 seconds = 2402.76 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p3
/dev/nvme0n1p3:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 400 MB in 0.36 seconds = 1109.99 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p4
/dev/nvme0n1p4:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 4000 MB in 2.62 seconds = 1525.33 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p5
/dev/nvme0n1p5:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 6400 MB in 2.62 seconds = 2439.66 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p6
/dev/nvme0n1p6:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3020 MB in 3.00 seconds = 1006.04 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p7
/dev/nvme0n1p7:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 2330 MB in 3.00 seconds = 776.52 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p8
/dev/nvme0n1p8:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 2734 MB in 3.00 seconds = 910.69 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p9
/dev/nvme0n1p9:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 2652 MB in 3.00 seconds = 883.48 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p10
/dev/nvme0n1p10:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3546 MB in 3.00 seconds = 1181.94 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p11
/dev/nvme0n1p11:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 2178 MB in 3.00 seconds = 725.06 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p12
/dev/nvme0n1p12:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3740 MB in 3.00 seconds = 1246.51 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p13
/dev/nvme0n1p13:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 2494 MB in 3.00 seconds = 830.89 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p14
/dev/nvme0n1p14:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3772 MB in 3.00 seconds = 1256.67 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p15
/dev/nvme0n1p15:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 2842 MB in 3.00 seconds = 947.09 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p16
/dev/nvme0n1p16:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3222 MB in 3.00 seconds = 1073.95 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p17
/dev/nvme0n1p17:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3690 MB in 3.00 seconds = 1229.79 MB/sec
ab250:~ # hdparm -t /dev/nvme0n1p18
/dev/nvme0n1p18:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 3698 MB in 3.00 seconds = 1232.16 MB/sec
ab250:~ # hdparm -T /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 17714 MB in 1.99 seconds = 8886.99 MB/sec
ab250:~ # hdparm -T /dev/nvme0n1p7
/dev/nvme0n1p7:
Timing cached reads: 18106 MB in 1.99 seconds = 9084.70 MB/sec
SSDs experience some degradation. The new 950 PRO 512GB started with 2300+ MB/s. Heavily used partitions are down now:
**erlangen:~ #** hdparm -t /dev/nvme0n1
nvme0n1 nvme0n1p1 nvme0n1p2 nvme0n1p3 nvme0n1p4
**erlangen:~ #** hdparm -t /dev/nvme0n1p1
/dev/nvme0n1p1:
Timing buffered disk reads: 100 MB in 0.06 seconds = 1606.14 MB/sec
**erlangen:~ #** hdparm -t /dev/nvme0n1p2
/dev/nvme0n1p2:
Timing buffered disk reads: 4990 MB in 3.00 seconds = 1663.12 MB/sec
**erlangen:~ #** hdparm -t /dev/nvme0n1p3
/dev/nvme0n1p3:
Timing buffered disk reads: 5262 MB in 3.00 seconds = 1753.59 MB/sec
**erlangen:~ #** hdparm -t /dev/nvme0n1p4
/dev/nvme0n1p4:
Timing buffered disk reads: 6140 MB in 3.00 seconds = 2045.90 MB/sec
**erlangen:~ #** hdparm -T /dev/nvme0n1p4
/dev/nvme0n1p4:
Timing cached reads: 36316 MB in 1.99 seconds = 18281.52 MB/sec
**erlangen:~ #**
This could be fixed by the built-in secure-erase feature, should it worsen over time. Is something rotten on your machine?
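Should it come to that, a sketch of an NVMe secure erase with nvme-cli, assuming the package is installed; this irrevocably wipes the whole namespace, so treat it as a last resort only:
# nvme format /dev/nvme0n1 --ses=1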
mrmazda:
I don’t like yours any more than I like mine in the OP here. Another machine here with NVMe is a similarly puzzling disappointment:
# inxi -ISCy
System: …
# inxi -day
Drives:
Local Storage: total: 119.24 GiB used: 29.73 GiB (24.9%)
ID-1: /dev/nvme0n1 maj-min: 259:0 vendor: ZTC model: PCIEG3-128G
size: 119.24 GiB block size: physical: 512 B logical: 512 B speed: 31.6 Gb/s
lanes: 4 serial: 979021901256 rev: R0629A0 temp: 37.9 C
Hi
My device is only running at half speed in the slot I have available, 15.8 Gb/s vs. yours at 31.6, so it’s expected to be slower… but still good enough for me…
Svyatko
January 21, 2021, 4:20pm
#13
mrmazda
January 22, 2021, 7:20am
#14
How old is old? This is a Kaby Lake CPU released Q1 2017, the newest CPU I have:
# inxi -ISCy
System:
Host: gb250 Kernel: 5.9.14-1-default x86_64 bits: 64
Desktop: Trinity R14.0.9 Distro: openSUSE Tumbleweed 20201231
CPU:
Info: Dual Core model: Intel Core **i3-7100T** bits: 64 type: MT MCP L2 cache: 3 MiB
Speed: 801 MHz min/max: 800/3400 MHz Core speeds (MHz): 1: 801 2: 801 3: 800 4: 800
Info:
Processes: 210 Uptime: N/A Memory: 15.52 GiB used: 594.7 MiB (3.7%)
Shell: Bash **inxi: 3.2.02**
# inxi -day
Drives: # rotating rust omitted, not tested
Local Storage: total: raw: ... usable: ... used: ... (4.5%)
ID-1: /dev/nvme0n1 maj-min: 259:4 vendor: **Mushkin** model: MKNSSDPL120GB-D8
**size: 111.79 GiB** block size: physical: 512 B logical: 512 B speed: 31.6 Gb/s
lanes: 4 serial: MK1805141003E5423 rev: SVN105 temp: 30.9 C
SMART: yes health: PASSED on: 12d 8h cycles: 239
read-units: 870,191 [445 GB] written-units: 1,725,206 [883 GB]
# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 111.81 GiB, 120034123776 bytes, 234441648 sectors
Disk model: MKNSSDPL120GB-D8
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5E153119-128A-4DF5-81AC-5B6AFB848982
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 657407 655360 320M EFI System
/dev/nvme0n1p2 657408 3682303 3024896 1.5G Linux swap
/dev/nvme0n1p3 3682304 4501503 819200 400M Linux filesystem
/dev/nvme0n1p4 4501504 12693503 8192000 3.9G Linux filesystem
/dev/nvme0n1p5 12693504 25800703 13107200 6.3G Linux filesystem
/dev/nvme0n1p6 25800704 51605503 25804800 12.3G Linux filesystem
/dev/nvme0n1p7 51605504 67989503 16384000 7.8G Linux filesystem # **TW root**
/dev/nvme0n1p8 67989504 84373503 16384000 7.8G Linux filesystem
/dev/nvme0n1p9 84373504 100757503 16384000 7.8G Linux filesystem
/dev/nvme0n1p10 100757504 117141503 16384000 7.8G Linux filesystem
/dev/nvme0n1p11 117141504 133525503 16384000 7.8G Linux filesystem
/dev/nvme0n1p12 133525504 149909503 16384000 7.8G Linux filesystem
/dev/nvme0n1p13 149909504 166293503 16384000 7.8G Linux filesystem
/dev/nvme0n1p14 166293504 182677503 16384000 7.8G Linux filesystem
/dev/nvme0n1p15 182677504 199061503 16384000 7.8G Linux filesystem
/dev/nvme0n1p16 199061504 215445503 16384000 7.8G Linux filesystem
## mitigations=auto | mitigations=none
# hdparm -t /dev/nvme0n1
/dev/nvme0n1:
Timing buffered disk reads: 2460 MB in 3.00 seconds = 819.89 MB/sec 819
# hdparm -t /dev/nvme0n1p1
/dev/nvme0n1p1:
Timing buffered disk reads: 320 MB in 0.16 seconds = 1983.40 MB/sec 1959
# hdparm -t /dev/nvme0n1p3
/dev/nvme0n1p3:
Timing buffered disk reads: 400 MB in 0.42 seconds = 947.03 MB/sec 948
# hdparm -t /dev/nvme0n1p4
/dev/nvme0n1p4:
Timing buffered disk reads: 2080 MB in 3.00 seconds = 693.23 MB/sec 699
# hdparm -t /dev/nvme0n1p5
/dev/nvme0n1p5:
Timing buffered disk reads: 5192 MB in 3.00 seconds = 1730.38 MB/sec 1740
# hdparm -t /dev/nvme0n1p6
/dev/nvme0n1p6:
Timing buffered disk reads: 1832 MB in 3.00 seconds = 609.68 MB/sec 610
# hdparm -t /dev/nvme0n1p7
/dev/nvme0n1p7: **# TW root**
Timing buffered disk reads: 1862 MB in 3.00 seconds = **620.09 MB/sec** **620**
# hdparm -t /dev/nvme0n1p8
/dev/nvme0n1p8:
Timing buffered disk reads: 1924 MB in 3.00 seconds = 641.00 MB/sec 641
# hdparm -t /dev/nvme0n1p9
/dev/nvme0n1p9:
Timing buffered disk reads: 1432 MB in 3.00 seconds = 476.69 MB/sec 476
# hdparm -t /dev/nvme0n1p10
/dev/nvme0n1p10:
Timing buffered disk reads: 2368 MB in 3.00 seconds = 788.87 MB/sec 790
# hdparm -t /dev/nvme0n1p11
/dev/nvme0n1p11:
Timing buffered disk reads: 1538 MB in 3.00 seconds = 512.36 MB/sec 517
# hdparm -t /dev/nvme0n1p12
/dev/nvme0n1p12:
Timing buffered disk reads: 2786 MB in 3.00 seconds = 927.98 MB/sec 927
# hdparm -t /dev/nvme0n1p13
/dev/nvme0n1p13:
Timing buffered disk reads: 2922 MB in 3.00 seconds = 972.91 MB/sec 965
# hdparm -t /dev/nvme0n1p14
/dev/nvme0n1p14:
Timing buffered disk reads: 2514 MB in 3.00 seconds = 837.80 MB/sec 844
# hdparm -t /dev/nvme0n1p15
/dev/nvme0n1p15:
Timing buffered disk reads: 2772 MB in 3.00 seconds = 923.70 MB/sec 926
# hdparm -t /dev/nvme0n1p16
/dev/nvme0n1p16:
Timing buffered disk reads: 3004 MB in 3.00 seconds = 1001.26 MB/sec 999
# hdparm -T /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 16020 MB in 2.00 seconds = 8026.57 MB/sec 8304
No statistical difference with/without mitigations, but the results are similarly erratic among partitions, like my other drives in this thread so far, both 2.5" SATA SSD and M.2 NVMe.
Hmmmm … Maybe I’ll replace my (7-year-old) Laptop’s {2nd} SSHD with an HDD rather than a 1-terabyte SSD … >:)
Hi
One thought: what I/O scheduler is in use… it should be none these days for NVMe devices…
mrmazda
February 13, 2021, 9:00am
#17
Scheduler?
Another disappointer:
# fdisk -l
Disk /dev/nvme0n1: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Disk model: ZTC-PCIEG3-128G
# hdparm -t /dev/nvme0n1p5
/dev/nvme0n1p5:
Timing buffered disk reads: 6400 MB in 2.74 seconds = 2336.11 MB/sec
# hdparm -t /dev/nvme0n1p6
/dev/nvme0n1p6:
Timing buffered disk reads: 3022 MB in 3.00 seconds = 1006.78 MB/sec
# hdparm -t /dev/nvme0n1p7
/dev/nvme0n1p7:
Timing buffered disk reads: 2278 MB in 3.00 seconds = 759.12 MB/sec
# hdparm -t /dev/nvme0n1p8
/dev/nvme0n1p8:
Timing buffered disk reads: 2818 MB in 3.00 seconds = 938.71 MB/sec
# hdparm -t /dev/nvme0n1p9
/dev/nvme0n1p9:
Timing buffered disk reads: 2344 MB in 3.00 seconds = 780.97 MB/sec
# hdparm -t /dev/nvme0n1
/dev/nvme0n1:
Timing buffered disk reads: 3374 MB in 3.00 seconds = 1124.64 MB/sec
# hdparm -t /dev/nvme0n1p10
/dev/nvme0n1p10:
Timing buffered disk reads: 3500 MB in 3.00 seconds = 1166.01 MB/sec
# hdparm -t /dev/nvme0n1p11
/dev/nvme0n1p11:
Timing buffered disk reads: 2252 MB in 3.00 seconds = 750.44 MB/sec
# hdparm -t /dev/nvme0n1p12
/dev/nvme0n1p12:
Timing buffered disk reads: 3764 MB in 3.00 seconds = 1254.60 MB/sec
# hdparm -t /dev/nvme0n1p13
/dev/nvme0n1p13:
Timing buffered disk reads: 2492 MB in 3.00 seconds = 830.19 MB/sec
# hdparm -t /dev/nvme0n1p14
/dev/nvme0n1p14:
Timing buffered disk reads: 3756 MB in 3.00 seconds = 1251.76 MB/sec
# hdparm -t /dev/nvme0n1p15
/dev/nvme0n1p15:
Timing buffered disk reads: 2848 MB in 3.00 seconds = 948.74 MB/sec
# hdparm -t /dev/nvme0n1p16
/dev/nvme0n1p16:
Timing buffered disk reads: 3202 MB in 3.00 seconds = 1066.93 MB/sec
hdparm -t /dev/nvme0n1p17
/dev/nvme0n1p17:
Timing buffered disk reads: 3696 MB in 3.00 seconds = 1231.42 MB/sec
# hdparm -t /dev/nvme0n1p18
/dev/nvme0n1p18:
Timing buffered disk reads: 3724 MB in 3.00 seconds = 1240.84 MB/sec
# hdparm -t /dev/nvme0n1p1
/dev/nvme0n1p1:
Timing buffered disk reads: 320 MB in 0.12 seconds = 2744.19 MB/sec
# hdparm -t /dev/nvme0n1p2
/dev/nvme0n1p2:
Timing buffered disk reads: 1752 MB in 1.84 seconds = 951.63 MB/sec
# hdparm -t /dev/nvme0n1p3
/dev/nvme0n1p3:
Timing buffered disk reads: 400 MB in 0.36 seconds = 1101.07 MB/sec
# hdparm -t /dev/nvme0n1p4
/dev/nvme0n1p4:
Timing buffered disk reads: 4000 MB in 2.51 seconds = 1593.50 MB/sec
Svyatko
February 13, 2021, 1:20pm
#18
Check temperatures.
With NVMe you may need additional cooling: heatsinks and/or a fan.
The ZTC-PCIEG3-128G is a cheap disk with a Silicon Motion SM2263XT controller and no DRAM buffer, using HMB (Host Memory Buffer) instead.
Maybe the observed behaviour is normal.
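The controller temperature can be read directly, for example with nvme-cli if it is installed:
# nvme smart-log /dev/nvme0n1 | grep -i temperature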
mrmazda:
Scheduler?
Hi
The disk I/O Scheduler…
cat /sys/block/nvme0n1/queue/scheduler
[none] mq-deadline kyber bfq
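For experimenting, the scheduler can be switched at runtime, per device, as root; the change does not persist across reboots:
# echo none > /sys/block/nvme0n1/queue/scheduler
# cat /sys/block/nvme0n1/queue/scheduler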
larryr
February 13, 2021, 5:39pm
#20
cat /sys/block/nvme0n1/queue/scheduler
[none] mq-deadline kyber bfq
Is it safe to assume that the one in brackets is the one in use?
Like the [none] in your example.