How to zero-fill (aka null) an external NVMe drive?

Linux Tumbleweed. I want to zero-fill (null) one of my external 2TB USB-C connected NVMe drives. How do I do this? I don’t want to use a live USB; I want to do it with my laptop booted normally so I can get on with my other work while the zero fill runs in the background. You can recommend an app or a terminal command.

I’m an average home computer user: no server, no NAS, no RAID, no home network. Just a single laptop running openSUSE Tumbleweed.

I want to do this because the NVMe runs very slowly when I run a Clonezilla clone to it. A clone used to take 2 hours; now it takes 10. Someone told me it may be because NVMe drives need to be zero-filled (nulled) once in a while.

No. There is no need, and there has never been a need, to fill a drive with zeros. It is sometimes done to privately erase things, and writing zeros is simply the easy option; overwriting with random data would serve the same purpose.

If you want to start over, overwrite the file system you want to clean up with a new file system; that can be done in the YaST Partitioner.


They told me that an SSD memory cell needs to be zeroed before it can be written to, and hence nulling an NVMe drive before a backup will speed things up. Does this sound right?

I’m a newbie so I have no idea. I’m not an SSD geek.

On a related note, some people suggested that I enable TRIM on my external NVMe drives, or confirm that TRIM is already enabled. Is there an easy way to do that? I’ve researched it and it looks kind of complicated.

@invalid_user_name who are ‘They’? I’ve never needed to do that… it sounds more like an I/O and caching issue on the local system; you would need to run iotop to see.

For TRIM, it depends on the NVMe device; some are blacklisted… The fstrim timer and service should be running; check the status with systemctl status fstrim.service.
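If it helps, here’s a rough way to check both things from a script. The systemd unit names are standard on openSUSE, but the device name sdb is only an example; substitute whatever lsblk shows for your enclosure:

```shell
# Show the periodic-TRIM units (fstrim.timer fires fstrim.service on a
# schedule); skipped quietly if systemctl isn't available here.
command -v systemctl >/dev/null 2>&1 && systemctl status fstrim.timer --no-pager || true

# A block device advertises discard/TRIM support when its
# discard_max_bytes value in sysfs is non-zero.
dev=sdb   # example name -- use yours from lsblk
f="/sys/block/$dev/queue/discard_max_bytes"
if [ -r "$f" ] && [ "$(cat "$f")" -gt 0 ]; then
    msg="$dev: discard supported"
else
    msg="$dev: discard not supported (or device not present)"
fi
echo "$msg"
```

Either way the script tells you something useful: supported, not supported, or the device simply isn’t plugged in.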


If you’re selling the SSD, or if you’re going to throw it away, then yes, it makes sense to zero out its content.

Otherwise, it’s a waste of time and effort. And you’re not going to realize any performance benefits doing so - there are no moving parts.

SSD performance is dependent upon make and model, interface, and use case.


Yes, that’s true. But I’m trying to figure out why my NVMe has gone from 2 hrs to 10 hrs for a Clonezilla clone. During this time the data set being backed up has actually gotten smaller, so something unusual is going on to cause such a big slowdown. Any ideas? The external NVMe is 2TB and the drive being cloned has only about 1.1TB of data total.

I’ll research this. Haven’t heard of this before.

Would I need to modify this command so it applies to the external drive?

@invalid_user_name if it’s in /etc/fstab, then no. Otherwise, as the root user, just run fstrim -v /dev/..., where ... is whatever the device shows up as in the output of lsblk when it’s plugged in.
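One wrinkle worth flagging: fstrim actually operates on a mounted filesystem path rather than the raw /dev node, so it helps to assemble the command against the mount point first. A tiny sketch (the mount point below is purely an example):

```shell
# Build the fstrim command for a given mount point; echoing it first is
# a cheap dry run before executing it for real as root.
trim_cmd() {
    printf 'fstrim -v %s' "$1"
}

cmd=$(trim_cmd /run/media/advait/backup)   # example mount point
echo "$cmd"    # review it, then run:  sudo $cmd
```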

So, just some observations: you’re using Clonezilla; what do you hope to achieve with this? If it takes X hours to clone, then X hours to restore, versus how long a re-install takes, I’m sure a fresh install is way quicker.

Have you tested that the image works, as in wipe the drive (or insert a new one for testing) and see how long it all takes?

There are also tools like AutoYaST to automate an install…


I wouldn’t bet this is about the full system “/”:

I was just about to say that delays can also come from the source, not just from the target drive. However, I don’t think it’s a wise idea to try to clone a running OS. If that’s what’s happening, I’m pretty sure there are a lot of possible causes for delay or failure.
If it’s rather a data backup, why not use an incremental method such as unison or rsync?


I ran the fstrim -v /dev/… command and found out that the enclosure for my external USB-C connected NVMe drive does not support fstrim. So I’ll start hunting for a new enclosure with TRIM support, and this thread will be paused until that’s done. I’ll report back.

@malcolmlewis Thanks for the tip about the fstrim command.

@invalid_user_name well it could be the NVMe device, which is?


Sure, here are the details. As soon as I get back from lunch I’ll google whether this drive supports TRIM and blkdiscard.

advait@localhost:~> sudo smartctl -a /dev/sda
[sudo] password for root: 
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.11-1-default] (SUSE RPM)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       KINGSTON SNVS2000GB
Serial Number:                      50026B7684EBFED4
Firmware Version:                   S8442101
PCI Vendor/Subsystem ID:            0x2646
IEEE OUI Identifier:                0x0026b7
Controller ID:                      1
NVMe Version:                       1.3
Number of Namespaces:               1
Namespace 1 Size/Capacity:          2,000,398,934,016 [2.00 TB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            0026b7 684ebfed45
Local Time is:                      Thu Jan 25 11:53:36 2024 IST
Firmware Updates (0x12):            1 Slot, no Reset required
Optional Admin Commands (0x0016):   Format Frmw_DL Self_Test
Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x03):         S/H_per_NS Cmd_Eff_Lg
Maximum Data Transfer Size:         64 Pages
Warning  Comp. Temp. Threshold:     85 Celsius
Critical Comp. Temp. Threshold:     90 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     6.00W       -        -    0  0  0  0        0       0
 1 +     3.00W       -        -    1  1  1  1        0       0
 2 +     1.50W       -        -    2  2  2  2        0       0
 3 -   0.0250W       -        -    3  3  3  3     8000    3000
 4 -   0.0040W       -        -    4  4  4  4    25000   25000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        32 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    8%
Data Units Read:                    22,550,196 [11.5 TB]
Data Units Written:                 219,311,282 [112 TB]
Host Read Commands:                 179,917,210
Host Write Commands:                1,584,669,856
Controller Busy Time:               5,959
Power Cycles:                       118
Power On Hours:                     162
Unsafe Shutdowns:                   78
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0

Warning: NVMe Get Log truncated to 0x200 bytes, 0x200 bytes zero filled
Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

Warning: NVMe Get Log truncated to 0x200 bytes, 0x034 bytes zero filled
Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
No Self-tests Logged


Since it is an NVMe SSD you should use blkdiscard to erase it. If you want to write zeros to the blocks, just add the option -z (but the SSD must support the operation, and yours seems to, given the “Wr_Zero” NVM command in your smartctl output).

Have a look at the man page of “blkdiscard” or google for more details.
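A cautious sketch of that (/dev/sdX is a placeholder; confirm the real name with lsblk first, since blkdiscard irreversibly destroys everything on the target):

```shell
# Guarded wrapper: refuses to touch the device unless the caller
# passes an explicit YES, because blkdiscard is irreversible.
wipe_disk() {
    dev=$1
    confirm=$2
    if [ "$confirm" != "YES" ]; then
        echo "refusing to wipe $dev (pass YES as the second argument)"
        return 1
    fi
    # -z issues NVMe Write Zeroes (a true zero fill); drop it for a
    # plain, much faster discard.
    blkdiscard -z "$dev"
}

wipe_disk /dev/sdX || true   # placeholder device; only prints the refusal
```

Run as root with the real device name and YES only once you are certain which disk it is.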


Looks like formatting the external NVMe drive solved the slow job time. I formatted the drive to ext4, then ran a clone job that took 1 hr 40 min. Before the format the job was taking about 11 hrs, so the issue looks resolved.

Two things about this backup method:

  1. Constantly reformatting an SSD is likely to increase the wear on it.
  2. Between the time you format the drive and the time the cloning is done, you have no backup, so you’ve created a window of data-loss risk (unless you’re keeping more than one backup).

I would encourage you to look into rsync; it’s very easy to use and can be scheduled with a cron job. I use rsync -avz [source]/ [destination] (replacing [source] and [destination] with the source and destination paths, respectively). That creates a backup without removing deleted files, so it’s more ‘cumulative’, and if the target drives were SSDs it wouldn’t cause as much wear, because after the first run it only copies new and changed files.
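A minimal sketch of that workflow wrapped in a function; the paths in the example invocation are made up:

```shell
# One-way incremental backup: after the first run, rsync transfers
# only new and changed files, and never deletes from the destination.
backup() {
    src=$1
    dest=$2
    # trailing slash on the source = copy its contents, not the folder itself
    rsync -av "$src/" "$dest"
}

# example invocation (illustrative paths only):
# backup "$HOME/Documents" /run/media/advait/backup/Documents
```

Dropping -z here is deliberate: compression buys little over a local USB link and just burns CPU.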


I completely agree on the use of rsync, as @hendersj details some positive points. We use rsync to back up all our home machines. After the initial backup, future backups go pretty quickly. rsync has been around forever, is reliable, and is feature-rich (just read through the man page to see).

If you read the “Limitations” section on the Clonezilla website, those, to me, are definite negatives. I’d use Clonezilla for system deployment, but not as a personal home backup tool. Need just one file, or twenty, from the backup? Sorry, no go.


Good point. My plan is to track the time it takes to make a clone, and only when that time gets too slow will I do a format. The clone time was fine for about 2 years; only after 2 years of cloning every other day did it get really slow. So 2 years between formats should be OK in terms of wear on the NVMe.

@invalid_user_name That depends on the drive’s capabilities, as in total data written, which according to your output is being consumed at a remarkable rate: 162 power-on hours and some 112 TB written (~700 GB an hour). You need to look at the manufacturer’s specs: for the 2000 GB model, 480 TBW. Total Bytes Written (TBW) is derived from the JEDEC Client Workload (JESD219A).
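For what it’s worth, the arithmetic checks out: smartctl’s NVMe “Data Units” are 1000 × 512 = 512,000 bytes each, so the figures from the output above work out as:

```shell
units=219311282            # Data Units Written, from the smartctl output
hours=162                  # Power On Hours, from the smartctl output
bytes=$((units * 512000))  # one NVMe data unit = 512,000 bytes
tb=$((bytes / 1000000000000))
gb_per_hour=$((bytes / 1000000000 / hours))
echo "written: ${tb} TB, rate: ~${gb_per_hour} GB/hour"
# prints: written: 112 TB, rate: ~693 GB/hour
```

So roughly a quarter of the 480 TBW endurance rating is already used, at an unusually heavy write rate for a home backup drive.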
