Linux Tumbleweed. I want to zero fill (a.k.a. null) one of my external 2 TB USB-C connected NVMe drives. How do I do this? I don't want to use a live USB. I want to do it with my laptop booted up, so I can do my other work while the zero fill runs in the background. You can recommend an app or a terminal command.
I'm an average home computer user: no server, no NAS, no RAID, no home network. Only a single laptop running openSUSE Tumbleweed.
I want to do this because the NVMe runs very slowly when I run a Clonezilla clone to it. It used to take 2 hours; now it takes 10 hours. Someone told me it may be because NVMe drives need to be zero filled (nulled) once in a while.
No. There is no need, and there never has been a need, to fill any drive with zeros. It is sometimes done to privately erase things, but writing zeros is just the easy option; overwriting with random data would also make sense.
If you want to start over, overwrite the file system you want to clean up with a new file system; that can be done in the YaST Partitioner.
They told me that an SSD memory cell needs to be zeroed before it can be written to, hence nulling an NVMe drive before a backup will speed things up. Does this sound right?
I’m a newbie so I have no idea. I’m not an ssd geek.
On a related note, some people suggested that I enable TRIM on my external NVMe drives, or confirm that TRIM is already enabled. Do you know an easy way to do that? I've researched it and it looks kind of complicated.
@invalid_user_name who are 'they'? I've never needed to do that… it sounds more like an I/O and caching issue on the local system; you would need to run iotop to see.
For TRIM, it depends on the NVMe device; some are blacklisted… The fstrim timer and service should be running; check the status with systemctl status fstrim.timer and systemctl status fstrim.service.
Yes, that's true. But I'm trying to figure out why my NVMe has gone from 2 hours to 10 hours for a Clonezilla clone, even though the data set being backed up has gotten smaller over that time. Something unusual is going on to cause such a big slowdown. Any ideas? The external NVMe is 2 TB, and the drive being cloned holds only about 1.1 TB of data in total.
@invalid_user_name if it's in /etc/fstab, then no. Otherwise, as the root user, just run fstrim -v /dev/... whatever ... is for the device, from the output of lsblk when it is plugged in.
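Pulling those suggestions together, a minimal check sequence might look like the sketch below. Note that fstrim actually operates on a mounted filesystem, so it takes a mount point rather than a raw /dev node; the /run/media/$USER/backup path is a hypothetical mount point you would replace with whatever lsblk shows for your drive.

```shell
#!/bin/sh
# Hypothetical mount point for the external drive; adjust to what lsblk shows.
MOUNTPOINT=/run/media/$USER/backup

# 1. Is the periodic trim timer enabled? (it triggers fstrim.service on a schedule)
systemctl is-enabled fstrim.timer 2>/dev/null || echo "fstrim.timer not available here"

# 2. Does the device advertise discard (TRIM) support?
#    A non-zero DISC-GRAN / DISC-MAX column means yes.
lsblk -o NAME,TRAN,DISC-GRAN,DISC-MAX,MOUNTPOINT

# 3. Trim the mounted filesystem once, verbosely (needs root; note that fstrim
#    takes a mount point, not a /dev node).
fstrim -v "$MOUNTPOINT" 2>/dev/null || echo "fstrim skipped: not mounted or no TRIM support"
```

If step 2 shows zeros in the discard columns for the external drive, the enclosure's USB bridge is most likely not passing TRIM commands through.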
So, just some observations: you're using Clonezilla; what do you hope to achieve with this? If it takes X hours to clone, then X hours to restore, versus how long a re-install takes, I'm sure a fresh install is way quicker.
Have you tested that the image works, as in wipe the drive (or insert a new one for testing) and see how long it all takes?
There are also tools like AutoYaST to automate an install…
I wouldn't bet this is about the full system "/".
I was just about to say that delays can also come from the source, not just from the target drive. However, I don't think it's wise to try to clone a running OS; if that's the case, I'm pretty sure there are a lot of possible causes for delay or failure.
If it's rather a data backup, why not use an incremental method such as unison or rsync?
I ran the fstrim -v /dev/… command and found out that the enclosure for my external USB-C connected NVMe drive does not support fstrim. So I'll start hunting for a new enclosure with TRIM support, and this thread will be paused until that's done. I'll report back.
@malcolmlewis Thanks for the tip about the fstrim command.
Since it is an NVMe SSD, you should use blkdiscard to erase it. If you want to write zeros to the blocks, just add the option -z (the SSD must support the operation, and yours seems to, given the "Wr_Zero" NVM command in your output).
Have a look at the man page of blkdiscard, or search the web for more details.
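As a sketch of that advice (the /dev/sdX device node is a placeholder; double-check it against lsblk before running, because blkdiscard destroys all data on the target):

```shell
#!/bin/sh
# Placeholder device node; verify against lsblk output first -- destructive!
DEV=/dev/sdX

if [ -b "$DEV" ]; then
    # Discard (TRIM) every block: fast, and tells the SSD all its cells are free.
    blkdiscard "$DEV"
    # Or, to zero-fill instead (requires Write Zeroes support on the drive):
    # blkdiscard -z "$DEV"
else
    echo "$DEV is not a block device; edit DEV before running"
fi
```

The plain discard is usually what you want: it frees every cell in seconds, whereas -z physically writes zeros and takes far longer.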
Looks like formatting the external NVMe drive solved the slow job time. I formatted the drive to ext4, then ran a clone job that took 1 hr 40 min. Before the format, the job was taking about 11 hours. Looks like the issue is resolved.
Constantly formatting an SSD is likely to increase the wear on it.
Also, between the time you format the drive and the time the cloning is done, you have no backup, so you've created a window of data-loss risk (unless you keep more than one backup).
I would encourage you to look into rsync: it's very easy to use and can be scheduled with a cron job. I use rsync -avz [source]/ [destination] (replacing [source] and [destination] with the source and destination paths, respectively). That creates a backup without removing deleted files (so it's more 'cumulative'), and if the target drives were SSDs it wouldn't cause as much wear, because after the first run it only copies new and changed files.
I completely agree on the use of rsync, as @hendersj details some positive points. We use rsync to back up all our home machines. After the initial backup, future backups go pretty quickly. rsync has been around forever, is reliable, and is feature-rich (just read through the man page to see).
If you read the "Limitations" section on the Clonezilla website, those, to me, are definite negatives. I'd use Clonezilla for system deployment, but not as a personal home backup tool. Do you need just one file, or twenty, from the backup? Sorry, no go.
Good point. My plan is to track the time it takes to make a clone, and only when that time gets too slow will I do a format. The clone time was fine for about 2 years; only after 2 years of cloning every other day did it get really slow. So 2 years between formats should be OK in terms of wear on the NVMe.
@invalid_user_name That depends on the drive's capabilities, as in data written, which according to your output is likely way exceeded… 162 hours and some 112 TB written (~700 GB an hour). You need to look at the manufacturer's specs: 2000 GB – 480 TBW. Total Bytes Written (TBW) is derived from the JEDEC Client Workload (JESD219A).
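For anyone wanting to check their own drive's wear: figures like those above come from the NVMe SMART log, readable with the nvme-cli package (assuming the USB enclosure passes NVMe admin commands through, which many don't). Per the NVMe spec, the "Data Units Written" field counts units of 512,000 bytes; the device node and the sample reading below are hypothetical:

```shell
#!/bin/sh
# Read the SMART log (needs root and nvme-cli; /dev/nvme0 is a placeholder):
#   nvme smart-log /dev/nvme0

# "Data Units Written" is reported in units of 512,000 bytes (1000 x 512 B).
# Converting a hypothetical reading of 218,750,000 units to terabytes written:
UNITS=218750000
TB_WRITTEN=$(( UNITS * 512000 / 1000000000000 ))
echo "${TB_WRITTEN} TB written"   # prints "112 TB written"
```

Comparing that figure against the manufacturer's rated TBW tells you roughly how much of the drive's specified write endurance has been consumed.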