Possible HD problem

openSUSE 13.1 - Text mode

I just finished a long drive-to-drive (IDE) copy using “cp”. On the next boot the receiving drive did not identify itself correctly in the BIOS. Several more boots produced similar results. The ribbon data cable looked suspicious, so I replaced it, after which the drive behaved normally and checked out fine with “smartctl”. Should I worry? Perhaps a better question is: does “cp” error-check during operation?
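
For reference, the check I ran with “smartctl” was along these lines (the device name is only an example, yours may differ):

  smartctl -a /dev/sdb            # full SMART health report for the drive
  smartctl -t short /dev/sdb      # start a short self-test
  smartctl -l selftest /dev/sdb   # read the self-test log afterwards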

Thanks in advance.

As far as I know, cp has always worked on one of the basic Unix design principles: a file is a stream of bytes. Thus cp basically reads all the bytes from its input file and writes them to its output file. Of course there is buffering involved, but that is what it does. I am not aware of any bells and whistles like checksumming added to the concept, and I do not find anything about it in the man page, but maybe the info pages tell more.

Again, it would be a bit contrary to the Unix concept of offering many tools for small tasks that can be combined at will. When you want to check afterwards, do it yourself: the diff tool is available, or compute md5sum checksums of both files and compare them. The Unix idea is that when you want to run such a sequence of commands on a regular basis in the future, you create a script.
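
For instance, a manual check after a copy could look roughly like this (the file names are only placeholders):

  cp /data/big.img /backup/big.img
  md5sum /data/big.img /backup/big.img    # the two checksums should match
  diff /data/big.img /backup/big.img      # prints nothing when the copies are identical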

On 2014-07-29 22:06, hcvv wrote:

> As far as I know, cp has always worked on one of the basic Unix design
> principles: a file is a stream of bytes. Thus cp basically reads all
> the bytes from its input file and writes them to its output file. Of
> course there is buffering involved, but that is what it does. I am not
> aware of any bells and whistles like checksumming added to the concept,
> and I do not find anything about it in the man page, but maybe the info
> pages tell more.

It could be an option, and it would be useful, IMO.

MS-DOS had it, as a system-wide setting. All file write operations were
verified if this switch was enabled (in config.sys, IIRC).

On Linux, I use "rsync -c … " for that purpose.
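
For example, a dry run like the following (paths only illustrative) lists any file whose checksum differs between source and copy, without actually transferring anything:

  rsync -c -n -av /data/ /backup/    # --checksum --dry-run: report files that would be re-copied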

Just today I found a big file that is many megabytes of zeros instead
of the expected data. Now I have to go back and find out whether all
the copies are the same, or whether the error occurred before I got the
file. With forced verification I would have no doubts.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

For my regular backups I have been using “rsync” ever since “hcvv” recommended it some time ago, but for this rather large simple copy I thought that having to checksum every file would make the process incredibly long, especially as it was on an older machine with one drive being on a USB port.

As I monitored the progress with periodic “df” invocations on both drives, I noticed that at one point progress had slowed down considerably since the last check, and that is what made me suspect a problem. I thought it might be helpful if, say, repeated identical seeks on a drive (or reads, for that matter) could raise a warning flag.
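
What I did amounts to something like this (the mount points are just examples):

  watch -n 60 df -h /mnt/source /mnt/backup    # re-run df every minute to follow the copy's progress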

I remembered the old MS-DOS copy verification you mentioned, and that is what prompted my question. Of course, in those days file size was measured in kilobytes. Last year I finally decommissioned a Seagate 225, a 20 megabyte drive (note MEGABYTES), that had been in operation since the mid-1980s without any problems. Of course, it was running in a 6809 processor box with 512 KB of memory. Sorry, my nostalgia creeps in!

On 2014-07-30 15:46, ionmich wrote:
>
> For my regular backups I have been using “rsync” ever since “hcvv”
> recommended it some time ago, but for this rather large simple copy I
> thought that having to checksum every file would make the process
> incredibly long, especially as it was on an older machine with one drive
> being on a USB port.

I have seen talk on the XFS filesystem mailing list about keeping a
checksum of files as part of the metadata. I think they are thinking
along those lines: write verification.

I might have misunderstood them, though.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))