On 02/24/2014 12:56 PM, gogalthorp wrote:
> Not on SSD
> Your HD looks ok so it is not that. ie no bad sectors
> It really looks like a HD problem but the HD looks OK. I’m stumped
This one has me stumped too.
Even if we cannot figure out what is causing the long wait times, we may still
be able to cut the total overall time.
The standard tools such as cp, mc, etc. use a relatively small block size when
reading/writing files. When both files are on the same HD, this causes lots of
overhead due to head repositioning and waiting for the correct sector to arrive
at the heads. The caching process helps, but it is overwhelmed when large files
are involved. Increasing the block size to 1 MiB, or bigger, will reduce the
number of operations by a lot. When you copy a file, try the following:
time dd if=<input_file_path> of=<output_file_name> bs=1M
The “time” at the beginning is optional, but it will list the elapsed, user, and
system time for the operation. That will show you the difference that changing
“bs” makes. I would try 1M, 2M, and 4M.
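To run all three sizes in one go, a small loop like this works; "testfile" and
"testfile.copy" are placeholder paths, so substitute your own file:

```shell
# Sketch: time dd with several block sizes on the same input file.
# "testfile" and "testfile.copy" are placeholders; use your own paths.
for bs in 1M 2M 4M; do
    echo "=== block size: $bs ==="
    time dd if=testfile of=testfile.copy bs="$bs"
    rm -f testfile.copy   # remove the copy so each run starts fresh
done
```

Remember that after the first run the input file will likely be cached, so the
later runs will look faster than they really are (see the caching note below).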
For a 400 MB file, I get the following:
finger@larrylap:~> sudo time dd if=junk.iso of=/root/junk.iso bs=1M
391+1 records in
391+1 records out
410421248 bytes (410 MB) copied, 12.4206 s, 33.0 MB/s
0.00user 4.38system 0:12.43elapsed 35%CPU (0avgtext+0avgdata 1860maxresident)k
801648inputs+801608outputs (1major+511minor)pagefaults 0swaps
When you try the different block sizes, you do have to be careful that neither
the input nor the output file is already cached. Cached data will show up as
differing input and output counts in dd's statistics. Test with different
files, on different days, or after a reboot. The page fault and swap counts
will also be interesting.
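Instead of rebooting, on Linux you can flush the page cache by hand before each
timing run (this needs root, which is why the write is wrapped in sh -c):

```shell
# Flush cached file data so the next dd run reads from disk, not RAM.
# "sync" first so dirty pages are written out before the cache is dropped.
sync
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
```

Writing 3 drops the page cache plus dentries and inodes; it is safe, since
only clean, reclaimable cache entries are discarded.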
Finally, be very careful with the dd command. It is very easy to destroy a file
system with it, for example by swapping the if= and of= arguments or pointing
of= at a raw device.