Slow USB 3 hard drive performance

Hi,

I have a USB 3 hard drive connected to a USB 3 port of a computer running 13.1. I ran a benchmark with Bonnie++ and it looks slow (screenshot is here).

I also did a simple copy and paste from the external USB hard drive to the internal hard drive; it took about 2 min to copy an 8 GB file. My expectation is at least 100 MB/s for USB 3.

Any suggestions?

/S

I find it can depend on how compatible the USB-3.0 devices are with each other. I have a number of USB-3.0 devices. I have an old PC with a USB-3.0 PCI-e card that never obtained USB-3.0 speeds under any OS with any external USB-3.0 device. I tried two different USB-3.0 PCI-e cards and it made no difference; neither gave USB-3.0 speed (the motherboard is an Asus P6T Deluxe V2).

I also have a Toshiba Ultrabook (Z930) and a desktop PC with a Gigabyte Z87X-D3H motherboard; both of these obtain varying USB-3.0 speeds with USB memory sticks and USB external drives. USB devices I have used are a Lexar USB-3.0 memory stick, a Kingston HyperX USB memory stick, a Sandisk Xtreme USB-3.0 memory stick, and various Toshiba and Fantec external hard drives with USB-3.0 interfaces. The Lexar only gets fast USB-2.0 speeds (~35 MB/sec), but the other devices typically reach ~80 to 100 MB/sec transfer speeds. The Sandisk Xtreme USB-3.0 gave me the fastest speeds, slightly exceeding ~100 MB/sec. This was using the Dolphin file manager, and possibly other apps for transferring files would have fared better.

I have no suggestions to help you improve the speed you have seen.

My guess is that not all USB-3.0 hardware devices are compatible with each other, and that the USB-3.0 hardware implementation varies. … But that’s a guess. I don’t really know.

On 2014-03-24 18:36, chinese ys wrote:

> I also did a simple copy and paste from the external USB hard drive to
> the internal hard drive; it took about 2 min to copy an 8 GB file.

That’s 68 MiB/s, which is not that bad. Are you sure the real hard disk
inside the external enclosure can do more? Maybe it is a 5400 rpm unit.
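
For reference, that figure comes from treating the 8 GB as 8 GiB and the
2 minutes as 120 seconds:

echo '8 * 1024 / 120' | bc -l    # about 68 MiB/s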


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

Because of buffering, and because two drives are usually involved, the speed of copying onto a hard drive is not straightforward to measure.

Here’s how it can be measured for the easiest case: sustained writing.

Select a large file, say 6 GB. If you do not have one, create one with cat. The files involved shall not be effectively zippable, since the hard drive will compress the data before writing, and you will not get the “true” writing speed. While one may deal with easy-to-zip files, most large files in practice are unzippable (meaning zipping them does not decrease the size noticeably), and it is more informative to measure the writing speed for those files.

So, start with unzippable files a, b, c, d, my*.dat, or use /dev/urandom:

dd if=/dev/urandom of=./ra.dat bs=1M count=500
cat a b c d > e
cat ra.dat e my.dat my2.dat ..... > f
zip -r g.zip f

Aim for a g.zip size of about 80% of your RAM. Let us say you have 8 GB of RAM, so let g.zip be 6 GB.
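
If you are not sure how much RAM the machine has, check it first, for example:

free -m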

The next step is to load the file into RAM, so that the source drive does not affect the speed measurement.

Create a RAM-backed filesystem (tmpfs):

mount -t tmpfs none /mnt -o size=6.5g

Copy the file into /mnt:

cp g.zip /mnt/

Now we are ready to measure the writing speed. To the time the cp takes, we must add the buffer-flushing time, because that is also part of the data-moving operation. We start with a sync to flush all other buffers first.

sync
time cp /mnt/g.zip /external/; time sync

Add up the two times to get the copying time.
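
If you prefer to let the shell do the arithmetic, here is a minimal sketch that times the copy plus the final sync and prints the result (it assumes the same /mnt/g.zip and /external/ paths used above):

SRC=/mnt/g.zip
DST=/external/
SIZE_MIB=$(( $(stat -c %s "$SRC") / 1024 / 1024 ))
sync                                   # flush anything else pending first
START=$(date +%s.%N)
cp "$SRC" "$DST"
sync                                   # include the buffer flush in the timing
END=$(date +%s.%N)
echo "scale=1; $SIZE_MIB / ($END - $START)" | bc    # approximate write speed in MiB/s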

When finished, don’t forget to unmount /mnt to free the RAM.
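
That is simply:

umount /mnt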

It is much more difficult to do the measurement for smaller files; better to use a specialized application. The problem with an application is that one doesn’t generally know how it works.

On 2014-03-27 09:36, ZStefan wrote:

> The files involved shall not be effectively zippable, since the hard
> drive will compress the data before writing,

What?

I’ve never seen such thing.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

Before the bit stream is written onto a disk (flash memory or magnetic platter), it usually undergoes about five types of transformations. One of them is zipping or a similar compression done on a small scale (a small data frame). Another might be encryption.

This is done in firmware and is independent of compressed filesystems.

On 2014-03-27 14:16, ZStefan wrote:
>
> robin_listas;2633353 Wrote:
>> On 2014-03-27 09:36, ZStefan wrote:
>>
>>
>>> The files involved shall not be effectively zippable, since the hard
>>> drive will compress the data before writing,
>>
>> What?
>>
>
> Before the bit stream is written onto a disk (flash memory or magnetic
> platter), it undergoes about five types of transformations. One of
> them is zipping or similar compression done on a small scale (small
> data frame).
>
> This is done in firmware and is independent of compressed filesystems.

To what purpose?

Once the operating system decides to send an LBA block of 512 bytes, it
has to be stored there, using exactly one sector. It does not matter
whether it uses just 300 bytes of that sector after compression, or all
512, as you still address it as a single LBA sector. You cannot join
several sectors and eventually save one sector. And the disk hardware
still needs to read that entire LBA sector even if it is not filled
completely; this is done in a single operation, as the entire sector
flies under the head.

I don’t believe they do any compression.

Do you have a reference paper from a manufacturer that is doing this?


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

Zip can compress a file of any size, but a file of random characters will not compress. Compression takes advantage of repetition, and random data by definition has no repetition.

I couldn’t find a paper on this. I have read it, or heard it told at
a conference of hard drive recovery professionals.

By the way, I remember that the main topic of the conference was
“SSDs will kill the HD recovery business, because there is no recovery
of data from failed SSDs”.

This is not entirely true:

For fun, I zipped a file of data generated by /dev/urandom. The file was reduced in size by 0.015%.
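
For anyone who wants to try it, something along these lines will do (the file names are just examples):

dd if=/dev/urandom of=rand.dat bs=1M count=100    # 100 MB of random data
zip rand.zip rand.dat
ls -l rand.dat rand.zip    # compare the sizes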

Should have said no repeating sequences, i.e. high entropy.

" “This is not entirely true:”
“random data by definition has no repetition.” "

Just joking…