Requirement:
To transfer 1.8 TB of data from my 2 TB USB 3.0 external HDD (NTFS) to my new 4 TB external USB 3.0 HDD (ext4)
Estimated time:
In a perfect world using USB 3.0: 45 minutes
In the real world using a Western Digital 2TB My Book: 5 hours minimum
Using Dolphin, Midnight Commander or any other file manager on top of KDE: 2 days or more
I never actually finished that process; after 12 hours I had only transferred 25% of the 1.8 TB. After researching the Web I found this is a common problem with Linux that has no real solution. Microsoft outperforms Linux on this task, but I formatted my new drive as ext4.
Problem:
I/O crashes on the USB bus continually cause the applications transferring the files to reset/hold/wait for bus access. Some of these processes are usb-storage, mount.ntfs, rsync, and kio_file. When an I/O crash occurs you can see it in System Monitor as “disk sleep”, which simply means that the program is pausing until the USB bus is available; during that time no files are being transferred. This is not a problem when transferring files that are 100 MB or more, but if you are trying to copy thousands of 1 MB picture files, it is a big problem.
I found no solution to this problem. Changing HDD parameters like Power Management and Look Ahead has no effect; I suspect the HDD manufacturers have already determined the best settings for most bus systems. I suspect the problem is a FIFO buffer that is too small. With modern computers a 100 MB FIFO buffer can easily be set up in RAM to transfer all those small files in one big chunk, avoiding the USB bus I/O crash problem caused by rapid “sharing” of the bus between multiple programs. I have seen this before when I was designing high-speed data transfer systems for analysing IEEE 1394 bus protocol stacks.
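If you want to watch those stalls from a terminal instead of System Monitor, something like this should list the processes stuck in uninterruptible “disk sleep” (state D); the exact columns are just my preference:

# show processes currently in uninterruptible "disk sleep" (state D);
# wchan shows the kernel function each one is waiting in
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'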
A Simple Work Around:
Don’t use a file manager on top of KDE; use rsync instead. Install Grsync using YaST. It is a simple, straightforward GUI for rsync and has many options. There are still USB I/O bus crashes, but it seems to resolve them much faster than the high-level file managers.
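If you prefer a plain terminal over Grsync, the equivalent rsync call is roughly the following; the mount points are only placeholders for wherever your two drives show up, so adjust them:

# -a keeps timestamps etc., -h prints human-readable sizes,
# --progress shows per-file progress; both paths below are placeholders
rsync -avh --progress /run/media/user/OLD_2TB/ /run/media/user/NEW_4TB/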
P.S.
I am certain there are many ways to skin this cat, and I encourage everyone to post their solutions to this problem here so other users don’t waste their time when transferring huge amounts of data.
I hope other SUSE Linux users find this posting helpful. Cheers
I’ve experienced problems using Windows too, so whether or not Windows is better at throttling to prevent buffer overflow, it’s still not perfect.
As you have described, it’s a not uncommon problem when copying files.
And maybe the solution is less obvious: if the problem is the ordinary file move or copy operation, then use some other method, and there are several.
Disk block utilities like rsync, which you’ve identified.
Other disk-block-based apps might be more efficient. For instance, if you can create or use a partition exclusively for disk transfers, then any disk/partition cloning app, like Clonezilla, will do what you want.
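As a rough sketch of that block-level idea with plain dd instead of Clonezilla (the device names /dev/sdX1 and /dev/sdY1 are placeholders; triple-check them first, because dd will overwrite whatever you point it at):

# clone one partition onto another, block by block; bs=4M just reduces syscall overhead
dd if=/dev/sdX1 of=/dev/sdY1 bs=4M
# from another terminal, send SIGUSR1 to make dd print how far it has gotten
kill -USR1 $(pidof dd)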
There are apps that are specifically designed to transfer enormous amounts of data, like torrents. Large files are re-described as small, digestible “chunks” of data which can be transferred without a problem. Setting up your own P2P is very simple; practically all torrent apps today are also able to create torrent files (the files that describe the chunks).
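For the torrent route, creating the descriptor file is a one-liner with most clients; transmission-create is just one example, and the tracker URL and data path are placeholders:

# describe /path/to/data as digestible chunks in a .torrent file;
# -p marks the torrent private so it stays off public DHT/PEX
transmission-create -p -o backup.torrent -t udp://tracker.example.com:80 /path/to/data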
If you’re transferring over a network connection, there are additional issues you may want to address depending on your network usage (congestion), the size of your TCP/IP buffers, and more.
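For the network case, the TCP buffer sizes live in sysctl; these are just the stock knobs to look at, not recommended values:

# autotuning limits for TCP receive/send buffers (min, default, max, in bytes)
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# hard caps that the values above cannot exceed
sysctl net.core.rmem_max net.core.wmem_max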
Remember that Windows file systems are not native to Linux. They use reverse-engineered algorithms and are thus not as efficient as the real thing. Also, USB sticks use flash memory, which has its own quirks.
Still not as good as real Windows with Windows-based file systems. Windows does not handle Linux file systems at all unless you add a program to do so, and that is most inefficient.
Transfers from one USB device to another USB device have many potential problems; add in a non-Linux file system and maybe a network, and you’ve got a real mess :P
For pure speed it may be best to transfer to a local Linux file system first, then back out to the second device.
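Something like this would be the two-stage version with rsync; the staging directory is made up, and you obviously need enough free space on the internal disk for it:

# stage 1: pull everything off the NTFS USB drive onto a local Linux filesystem
rsync -avh /run/media/user/OLD_2TB/ /home/user/staging/
# stage 2: push the local copy out to the ext4 USB drive
rsync -avh /home/user/staging/ /run/media/user/NEW_4TB/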
Using luckybackup (rsync) I backed up about a gig of data in about 5 min (USB 2) from my home directory, but it took about another 5 min for the cores to settle down, apparently still writing to the USB devices. There are issues with flash memory and deleted blocks on USB sticks; this may require CPU time to deal with.
On 2015-07-20 17:36, tsu2 wrote:
> Also, AFAIK current Linux support for NTFS is as a kernel service with
> pretty good performance, not like when it was running in FUSE some years
> ago.
No, ntfs-3g still uses fuse. See:
minas-tirith:~ # mount | grep win
/dev/sda2 on /windows/C type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
minas-tirith:~ #
That’s on 13.1, but I doubt 13.2 is different :-?
And especially writing to it is CPU-intensive in Linux. It should not be an issue with current CPUs, but the load is still much higher than with other filesystems.
On 2015-07-20 17:26, tsu2 wrote:
>
> I’ve also experienced problems using Windows too so whether Windows is
> better at throttling to prevent buffer overflow or not, it’s not not
> perfect.
>
> As you have described, it’s a not uncommon problem when copying files.
> And maybe the solution might be less obvious, if the problem is the
> ordinary file move or copy operation, then use some other method, and
> there are several.
But he is using USB3. Isn’t its support still flaky in Linux?
Via USB2, I have not seen issues, other than USB2 being slow and flash being even slower.
You’re right.
Somewhere I thought I read that the ntfs kernel module was rejuvenated but now I see no evidence of that.
So, it’s still ntfs-3g, which is a FUSE fs.