Seagate Barracuda vs WD Blue: Which is best for a Linux home partition?

I’ve had a 2TB Seagate hard drive for roughly a decade. Despite having no read/write errors to this day, it’s getting old and I’m nervous about using it for data storage… especially since every once in a while I think I hear a click (it could just be a fan though). I’m going to upgrade to something newer, especially since it’s filling up and I need a 4TB drive now. I’m torn between two choices which, judging by reviews, are very evenly matched and hard to call… I’d appreciate your input.

Option 1: Seagate BarraCuda 4TB, 5400rpm, 256MB cache

Option 2: WD Blue 4TB, 5400rpm, 64MB cache

I’m most interested in reliability: I don’t care which is fastest, only which is most likely to last without breaking. Write performance in particular is secondary; the most size-intensive things I use are some games, so read speeds matter more. At this point I only know that the Barracuda uses SMR technology instead of CMR, which results in lower write performance… I’m not sure about the Blue.

I plan to use it with openSUSE Tumbleweed x64, mounted from fstab as a data drive supplementing my home partition: there will be a single ext4 partition on it (which I’ll likely format from the YaST2 Partitioner), and data will be transferred using rsync. I’m asking on the openSUSE / Linux forums too, as I’d like to know which is expected to work best under this OS and with Linux in general.
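Just for reference, here’s roughly what I have in mind for the setup; the device name, label and mount point below are placeholders, and I’ll probably let the YaST2 Partitioner write the real entry for me:

# create a single ext4 filesystem on the new drive (sdX1 is a placeholder)
mkfs.ext4 -L data /dev/sdX1

# /etc/fstab entry mounting it as a data drive next to /home
LABEL=data   /data   ext4   defaults   0   2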

Note: Please don’t suggest “for X more you could get the Y enterprise version which is Z better”. I already looked at the available options, given the stores I can buy from and the budget I must stay under… the two listed above are the only versions I can get. The only similar option available in one store is another version of the Blue, same specs except it says “256MB cache”… let me know if that one has any advantage instead.

Don’t count on me; I’m notoriously bad at talking about hardware, so feel free to skip or ignore what I say. But this struck me:

It really puzzles me: why go to the trouble of cutting the cake into just one piece?

I went through the same head-scratching procedure a month or so ago …

  • The main difference between the two is the cache size – Barracuda 256 MB, Blue 64 MB …
  • Both come with a limited 2 year warranty …
  • Read and Write I/O performance is about the same – despite the difference in cache size …

To be honest, I had difficulty choosing between one or the other – SMART numbers – “smartctl” …

  • My 500 GB Seagate Barracuda has 37759 Head Flying Hours and, 28721 Power On Hours and, is exhibiting non-zero Read Error Rate and non-zero Hardware ECC recovered errors – which is why I’ve retired the thing …
  • A 1 TB WD Blue on this system has 18105 Power On Hours and Power-Off Retract Count of 70 – with no ECC errors – I chose a 4 TB “Blue” …

I am using both and both work fine.

3400G:~ # fdisk -l /dev/sdc
Disk /dev/sdc: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM001-1CH1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 609FED7F-FC4F-43A7-9D25-44783253D69C

Device          Start        End    Sectors  Size Type
/dev/sdc1        2048      34815      32768   16M Microsoft reserved
/dev/sdc2    33761280 3770382335 3736621056  1.8T Linux filesystem
/dev/sdc3  3770382336 3770587135     204800  100M EFI System
/dev/sdc4  3770587136 3905975094  135387959 64.6G Microsoft basic data
/dev/sdc5  3905976320 3907026943    1050624  513M Windows recovery environment
3400G:~ # smartctl -i /dev/sdc
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.9.8-2-default] (SUSE RPM)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.14 (AF)
Device Model:     ST2000DM001-1CH164
Serial Number:    W3408N5F
LU WWN Device Id: 5 000c50 03d25fc49
Firmware Version: CC29
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Nov 23 19:56:21 2020 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

3400G:~ # smartctl -A /dev/sdc
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.9.8-2-default] (SUSE RPM)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       186540912
  3 Spin_Up_Time            0x0003   095   094   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   099   099   020    Old_age   Always       -       1984
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   075   060   030    Pre-fail  Always       -       29653910
  9 Power_On_Hours          0x0032   089   089   000    Old_age   Always       -       10051
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   020    Old_age   Always       -       1707
183 Runtime_Bad_Block       0x0032   099   099   000    Old_age   Always       -       1
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   098   000    Old_age   Always       -       1 31 32
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   069   054   045    Old_age   Always       -       31 (0 2 31 21 0)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       401
193 Load_Cycle_Count        0x0032   090   090   000    Old_age   Always       -       21040
194 Temperature_Celsius     0x0022   031   046   000    Old_age   Always       -       31 (128 0 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       6272h+57m+05.080s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       12684254846
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       11655324453

3400G:~ # 
erlangen:~ # fdisk -l /dev/sda    
Disk /dev/sda: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors 
Disk model: WDC WD40EZRX-22S 
Units: sectors of 1 * 512 = 512 bytes 
Sector size (logical/physical): 512 bytes / 4096 bytes 
I/O size (minimum/optimal): 4096 bytes / 4096 bytes 
Disklabel type: gpt 
Disk identifier: 27C8C52A-8091-403C-ADF1-E9C791667D40 

Device          Start        End    Sectors  Size Type 
/dev/sda1    67119104  134223871   67104768   32G Linux filesystem 
/dev/sda2       16384   67119103   67102720   32G Linux filesystem 
/dev/sda3  7757789184 7814037134   56247951 26.8G Linux filesystem 
/dev/sda4   134223872 7757789183 7623565312  3.6T Linux filesystem 

Partition table entries are not in disk order. 
erlangen:~ # smartctl -i /dev/sda 
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.9.8-2-default] (SUSE RPM) 
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org 

=== START OF INFORMATION SECTION === 
Model Family:     Western Digital Green 
Device Model:     WDC WD40EZRX-22SPEB0 
Serial Number:    WD-WCC4E2FYXSNV 
LU WWN Device Id: 5 0014ee 262d2e71e 
Firmware Version: 80.00A80 
User Capacity:    4,000,787,030,016 bytes [4.00 TB] 
Sector Sizes:     512 bytes logical, 4096 bytes physical 
Rotation Rate:    5400 rpm 
Device is:        In smartctl database [for details use: -P show] 
ATA Version is:   ACS-2 (minor revision not indicated) 
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s) 
Local Time is:    Mon Nov 23 19:59:18 2020 CET 
SMART support is: Available - device has SMART capability. 
SMART support is: Enabled 

erlangen:~ # smartctl -A /dev/sda    
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.9.8-2-default] (SUSE RPM) 
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org 

=== START OF READ SMART DATA SECTION === 
SMART Attributes Data Structure revision number: 16 
Vendor Specific SMART Attributes with Thresholds: 
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE 
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0 
  3 Spin_Up_Time            0x0027   180   175   021    Pre-fail  Always       -       7958 
  4 Start_Stop_Count        0x0032   095   095   000    Old_age   Always       -       5931 
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0 
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0 
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       11469 
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0 
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0 
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       1874 
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       51 
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       5910 
194 Temperature_Celsius     0x0022   123   110   000    Old_age   Always       -       29 
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0 
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0 
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0 
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0 
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       1 

erlangen:~ #

Thanks for the input! Still undecided and leaning both ways to be honest. I’ll likely make a decision tonight… it’s possible I’ll go with the Seagate given my experience with them has always been okay.

In terms of reliability I understand both are about the same for the most part: I shouldn’t expect one to fail more than the other, even if many reviews I read painted WD as the more reliable brand. From my personal experience, Seagate is actually the more stable one; I had a Western Digital HDD 15 years ago and remember it failing and having to be replaced… my current Seagate, however, has never suffered any data loss in 10 years of use and still works 100% fine as far as I can tell, which I find remarkable.

The other point is the technology behind the two models: the Seagate has 256 MB of cache, but it needs that because it’s an SMR drive, which I understand is slower at writing… the WD has only 64 MB of cache, but it’s CMR. I understand this only affects write speeds; I don’t plan to write to it constantly or expect phenomenal speeds, so I don’t really care… the only intensive writing will be when I use rsync to migrate all my data from the old drive, where my only expectation is that it finishes within one night, so no more than 8 hours (almost 1.7 TB to copy, but from one SATA3 HDD to another).
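For the migration itself I’m thinking of something along these lines (paths are placeholders, and I’d run it a second time afterwards to pick up anything that changed):

# -a preserves permissions, ownership and timestamps; -H keeps hard links; -A/-X copy ACLs and extended attributes
# trailing slashes copy the contents of old-disk into new-disk instead of nesting a folder inside it
rsync -aHAX --info=progress2 /mnt/old-disk/ /mnt/new-disk/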

Otherwise I’m well aware that any drive can fail, and I always keep backups of the important things. With so much data on a drive it’s hard to back it all up though; some of it would be lost, not to mention the time spent migrating back and forth if a replacement is ever needed. My new PC will also go easier on the drive, as it uses an SSD as the home partition for application data… this HDD is only for archiving big stuff reliably, stuff that doesn’t need lightning transfer rates but must be kept safe (it’s like a second home partition).

For a slightly higher price ($10 CAD) I would go with a Seagate Ironwolf. See the following article comparing the Barracuda and Ironwolf. Basically you’re getting a drive rated for 24/7 operation with a better warranty (I have dealt with warranty issues several times over the years with Seagate and it has been a smooth process) and a longer rated lifespan, while retaining comparable performance characteristics.

https://www.vueville.com/home-security/cctv/nvr/seagate-ironwolf-vs-barracuda-hard-drives-compared/

I bought three 4TB Barracudas in 2018. One failed within 3 months; the original two plus the replacement have been in a NAS without any issues.

Yet another update: I’m putting my decision on hold for the night after finding out that WD Blue drives have what’s called the “head parking issue”. Essentially the head moves to an idle position after 8 seconds of inactivity, and that constant movement is believed to wear the drive out to the point where some claim it only lasts 6 months! You need a DOS tool to hack the drive and raise the parameter to 300 seconds… and even that reportedly no longer works on the Blue drives, only on the Green ones. Yeah… no.

I’m looking at a Toshiba drive that might actually be a good idea to go with. What are your experiences with the HDWE140UZSVA?

As recommended, I’m looking at the 4TB Ironwolf version from Seagate. It’s about 25% more expensive than the Barracuda in the stores I can buy from, making it a very tough choice to even consider (mother will not be happy). But just so I know, in case I can have it as an option, same questions: is it non-SMR, does it have any wear-inducing issues like that head parking problem, and will it work well as a single ext4 partition?

Edit: I’m seeing reviews that suggest the Ironwolf model has the same aggressive head parking issue that’s likely to wear it out. Is there really no hope of buying a reliable HDD in this day and age? I’ll wait for your thoughts on this.

This gives the CMR / SMR status for Seagate drives:
https://www.seagate.com/ca/en/internal-hard-drives/cmr-smr-list/

This is the manual for the Pro version. Page 14 shows the default head parking timer is 2 minutes; I believe it’s 10 minutes for the non-Pro. Both can be changed.
https://www.seagate.com/www-content/product-content/ironwolf/en-us/docs/100835984b.pdf

Thanks. 2 minutes is probably fine, not to mention 10: the WD Blue has a ridiculously low 8 seconds, which everyone seems to agree is by design so that unsuspecting people buy them, have them break in a few months, and buy another… thank goodness I look carefully at these things; it would be good if everyone knew about it or had the time to check.
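In case it helps anyone checking their own drives: from what I’ve read, you can watch the Load_Cycle_Count attribute with smartctl and, on drives that honor it, raise the APM level with hdparm so the heads park less aggressively. This is only a sketch; the device name is a placeholder, not every drive accepts the setting, and it may not survive a power cycle:

smartctl -A /dev/sdX | grep -i load_cycle   # if this climbs quickly, the heads are parking constantly
hdparm -B /dev/sdX                          # show the current APM level
hdparm -B 254 /dev/sdX                      # 128-254 do not permit spin-down; 255 disables APM entirely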

I might go with an Ironwolf then, especially since I trust Seagate more at the end of the day. Apart from some folks mentioning they can run hot, and that some versions lack the middle mounting holes so they can be tricky to install in certain cases… they do seem okay. I don’t want to risk any crash, so reliability comes first… the Ironwolf also spins a bit faster, so hopefully that means better performance as a bonus. They’re intended for NAS use rather than desktop PCs, but if they work the same way I don’t see an issue with that.

Head parking is no longer an issue. I switched it off on my WDC WD40EZRX-22S.

Stay with PMR and avoid SMR

hd-idle puts the disk to sleep:

3400G:~ # systemctl cat hd-idle.service 
# /usr/lib/systemd/system/hd-idle.service
[Unit]
Description=hd-idle disk spindown service

[Service]
EnvironmentFile=-/etc/default/hd-idle
ExecStart=/usr/sbin/hd-idle -n $HD_IDLE_OPTS

[Install]
WantedBy=local-fs.target
3400G:~ # 

3400G:~ # cat /etc/default/hd-idle
# hd-idle command line options
# Options are:
#  -a <name>               Set device name of disks for subsequent idle-time
#                          parameters (-i). This parameter is optional in the
#                          sense that there's a default entry for all disks
#                          which are not named otherwise by using this
#                          parameter. This can also be a symlink
#                          (e.g. /dev/disk/by-uuid/...)
#  -i <idle_time>          Idle time in seconds.
#
# Options not exactly useful here:
#  -t <disk>               Spin-down the specfified disk immediately and exit.
#  -d                      Debug mode. This will prevent hd-idle from
#                          becoming a daemon and print debugging info to
#                          stdout/stderr
#  -h                      Print usage information.
#  -l <logfile>            Name of logfile (written only after a disk has spun
#                          up). Please note that this option might cause the
#                          disk which holds the logfile to spin up just because
#                          another disk had some activity. This option should
#                          not be used on systems with more than one disk
#                          except for tuning purposes. On single-disk systems,
#                          this option should not cause any additional spinups.
#
# spin down all disks after 180 seconds
#HD_IDLE_OPTS="-i 180"
#
# only spin down /dev/sdb after 180 seconds
HD_IDLE_OPTS="-i 0 -a sdc -i 300"
3400G:~ # 

3400G:~ # journalctl -b -u hd-idle.service -o short-monotonic 
-- Logs begin at Sat 2020-11-21 21:31:01 CET, end at Tue 2020-11-24 05:18:56 CET. --
    4.784032] 3400G systemd[1]: Started hd-idle disk spindown service.
    4.786142] 3400G hd-idle[545]: hd-idle starting in nodaemon mode
    4.786142] 3400G hd-idle[545]:   disk: sdc timeout: 300
    4.786142] 3400G hd-idle[545]:   default timeout: 0
  484.793432] 3400G hd-idle[545]: spindown: sdc
 2915.411886] 3400G hd-idle[545]: spinup: sdc
 3335.417186] 3400G hd-idle[545]: spindown: sdc
3400G:~ # 
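If you want to try the same setup, the rough steps would be as follows (assuming hd-idle is available in your repos; adjust the package name if not):

zypper in hd-idle                        # install the package providing the service shown above
systemctl enable --now hd-idle.service   # start it now and enable it at boot
systemctl status hd-idle.service         # confirm it picked up the options from /etc/default/hd-idle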

Thanks for all the tips and advice! I went ahead and ordered the 4TB Seagate Ironwolf (ST4000VN008). It was the best option, as I’ve only had positive experiences with Seagate so far, and this NAS-oriented drive is meant to be resilient. The desktop lines (Barracuda for Seagate and Blue for WD) seem to be designed to break or behave badly just so you buy new ones… sad to see desktop users treated as second tier as the general rule, though for a high-end PC I think a NAS-oriented drive is a better fit anyway.

Here’s a bonus (horror) story from yesterday in the meantime: just as I ordered the new HDD, I noticed my PC had become very slow and would barely work or open directories any more. Upon restarting it spent a long time in POST and then refused to boot; fsck was trying to repair the hard drive I’m replacing, with complaints about bad sectors everywhere. Initially I thought a bug in a distro update had caused a broken process to slow things down… then I jumped to thinking my old drive had suddenly failed right before I could replace it. Only then did I notice the HDD was making clicking noises every few seconds, which at first I mistook for it simply being busy. Before this began I had also noticed that some images I saved from Firefox were corrupted, but again assumed it was a bug in Firefox… the computer had also restarted on its own before I saw that, which I found very odd but didn’t pay much mind to at first.

Thank goodness, however, my old HDD is perfectly fine. The cause… was the measly SATA3 cable failing, at least as far as I can deduce (I didn’t plug another HDD into it to check and risk breaking something). Once I replaced the cable everything was back to normal… in fact the system seems a bit faster now, meaning something had been going wrong for ages without causing any obvious issues! I’m glad I caught this with the old drive and didn’t plug the arriving one into the bad cable. It’s partly my fault, as I was using a cable with a sharply bent, strangled neck; that’s most likely what damaged it… it had been there a long time and I simply had no reason to pay attention to it before.

Moral of the story: don’t twist or pull on your cables, or leave them in positions where they can get damaged, SATA ones included. Also, Linux handled the problem very nicely: once I plugged the drive back in on a working cable, fsck did a quick check on the partition during the next startup (no issues found, luckily) and everything recovered. The drive also deserves credit for surviving this, given the bad cable had it clicking non-stop.
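For anyone hitting similar symptoms, here are two quick checks that would have pointed me at the cable much sooner (device name is a placeholder):

smartctl -A /dev/sdX | grep -i crc              # a rising UDMA_CRC_Error_Count usually points to a cable or connector problem
journalctl -k | grep -iE 'ata[0-9]|i/o error'   # kernel log: look for SATA link resets and failed commands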

Beware of defective hardware: see “Onboard LAN disabled?” in the Hardware section of the openSUSE Forums. Testing a broken USB HDD caused onboard LAN to stop working; even removing the battery on the main board and clearing CMOS would not bring it back. The HDD failed on the evening the 2-year warranty ended (22:30), and I submitted an RMA in time.

I recommend you thumb through the various Backblaze articles from over the years…
I don’t remember when they published their first article (maybe 8 years ago?), but it was so wildly popular that they’ve been writing annual reports ever since.
As a major vendor in a co-lo, they go through hundreds of drives all the time, and they needed to determine reliability rates for all the drive vendors and models they use… and then figured the world would like to know the same information.

Some trends were obvious in the beginning and became consistent over time…
Keep in mind that although the drives a colo uses are usually a higher grade than the consumer drives the general public buys, the trends are still fairly reliable, because much of the time the same technology goes into all drives… As each manufacturing lot is produced, samples are taken and, based on those results, the drives are sold either as consumer or enterprise units.

In general, HGST (Hitachi) is head and shoulders above everyone else by quite a margin. It can be worth paying the premium these drives command, even used, because they can be expected to last years longer than other vendors’ drives.

Interestingly, although WD and Hitachi merged into one company (under the WD name), the manufacturing and engineering for both remained completely separate. You’d like to think a few secrets leaked from the HGST side to WD now and then, but unfortunately WD drives, while often decent, consistently don’t match HGST’s stats.

Toshiba purchased a factory from HGST a while back, and for that one year, drives from that factory under the Toshiba name had the same reliability as HGST… but I haven’t heard how recent Toshiba stats look.

Seagate, unfortunately, has had a pretty tough time compared to the other vendors. Although there have been times when their stats improved, they’ve pretty consistently been in last place. My only personal experience with a Seagate was OK… it developed plenty of bad blocks quickly but remained stable after that. Somewhere, I’m still using it… after watching the bad block count stay the same for 3 years, I’ve stopped watching it like a hawk.

And of course, look out for news about bad factory runs. I haven’t heard of any recently, but maybe 6 years ago WD had a couple of bad runs. Everything else was OK, but if you’re buying grey market or used, you have to be very aware of drives that should have been recalled but weren’t.

Here’s Backblaze’s 2020 report

https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2020/

TSU

While I wait for the new drive to arrive, I’d like to ask one last important question: once I format it (ext4), what command should I use to do an in-depth scan and verify there are no bad sectors and the drive arrived in good order? I’m assuming fsck will do the trick, but what are the parameters for a full scan… or is there another Linux tool / command better suited? Hopefully it won’t take more than a few hours to do a thorough test, though for 4 TB I imagine it might be a while.

I would hope that no defects are found with your new storage device. Consider testing with badblocks if you feel the need…

https://wiki.archlinux.org/index.php/badblocks

When you want to test for defective blocks/tracks, you do that first, before you start using the disk (that is, before partitioning it and creating file systems on it or its partitions). fsck has nothing to do with that: it checks the integrity of the file system, not of the disk hardware.
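As a rough sketch of what that could look like (sdX is a placeholder; the -w mode overwrites the whole drive, so only use it while the disk is still empty):

badblocks -sv /dev/sdX      # read-only scan: safe, shows progress, but slower to expose weak sectors
badblocks -wsv /dev/sdX     # write-mode test: destructive, writes and verifies test patterns on every sector
smartctl -t long /dev/sdX   # alternatively, start the drive's own extended self-test
smartctl -a /dev/sdX        # check the self-test result and the attribute table afterwards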

Thanks! It shouldn’t be an emergency at the end of the day; I imagine the drive won’t arrive broken, and even if it did I’d likely notice when I first plug it in. badblocks seems like what I’d want if I go for a test… I can’t seem to find it anywhere in the repos, however.

https://software.opensuse.org/search?utf8=✓&baseproject=ALL&q=badblocks

[EDIT] Sorry, it’s part of e2fsprogs; I read that too in a hurry. The command exists, so I already have it available.
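For anyone else looking for it, this is how I confirmed it is already there (assuming a default Tumbleweed install):

rpm -q e2fsprogs    # badblocks ships as part of the e2fsprogs package
which badblocks     # typically /usr/sbin/badblocks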

I may be late, but here is my experience with HDDs:
6/6 2.5" Firecuda 2TB defective
3/6 2.5" Barracuda 2TB defective
0/4 2.5" WD Blue 2TB defective

I would never recommend Seagate. I mean WD is typically more expensive but I learned my lesson(s) with Firecudas/Barracudas.