External SATA drive not recognized

OpenSUSE 11.2 server, Gigabyte GA-MA790X-UD4P
sda for system, 3 ext4 partitions, working fine.
sdb Promise RAID1 for data, 1 ext4 partition, working fine.
sdc is an eSATA docking station for data backup, 1 encrypted ext4 partition -- **here lies the problem**.

This configuration had been functional for months, until I decided to add two more external drives (sdc) to rotate through backups. I had difficulty with encryption on the first new drive and eventually decided to start over. Using the GUI YaST Expert Partitioner, I deleted the single partition. That began a real nightmare…

Since deleting the partition, the system detects drives inserted in the docking station, but does not report them (including a different fully functional drive and a brand new unused drive). I have tested all drives on other computers and they function perfectly. I have rebooted the system several times while troubleshooting this issue.

I could not recreate the partition on the server (since it does not recognize the drive), so I used GParted on another computer - it all went without a hitch, formatted ext4. But when I placed the drive back in the dock, it was still detected but not recognized.
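If it helps the diagnosis, I can post the output of commands along these lines taken after re-docking the drive; as far as I know partprobe comes with parted and blockdev with util-linux, and /dev/sdc is simply whatever name the kernel assigns to the dock:

cat /proc/partitions          # does the kernel list sdc and sdc1 at all?
partprobe /dev/sdc            # ask the kernel to reread the partition table
blockdev --rereadpt /dev/sdc  # alternative way to force a reread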

**Details:**

**BIOS** lists the eSATA drive

Entering the YaST Expert Partitioner, this error message appears:

The partitioning on disk /dev/sdc is not readable by
the partitioning tool parted, which is used to change the
partition table.

You can use the partitions on disk /dev/sdc as they are.
You can format them and assign mount points to them, but you
cannot add, edit, resize, or remove partitions from that
disk with this tool.

**YaST Partitioner** shows drives: sda, sda1, sda2, sda3, sdb, sdb1.
sdc is not listed.

# fdisk sdc results in: Unable to open sdc

# dmesg | tail reports:
[48442.370779] sd 0:0:0:0: [sdc] Unhandled error code
[48442.370793] sd 0:0:0:0: [sdc] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[48442.370813] end_request: I/O error, dev sdc, sector 1465149160
*** repeats ***

# hwinfo | grep sdc results in:

block.device = '/dev/sdc1'
linux.sysfs_path = '/sys/devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc/sdc1'
block.device = '/dev/sdc'
linux.sysfs_path = '/sys/devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc'
8 32 732574584 sdc
8 33 732572001 sdc1
sdc
sdc1
[0:0:0:0] disk /dev/sdc
block: name = sdc, path = /class/block/sdc
>> block.5: /dev/sdc
>> block.5.1: /dev/sdc geo
dev = /dev/sdc, fd = 4
/dev/sdc: ioctl(geo) ok
/dev/sdc: ioctl(block size) ok
/dev/sdc: ioctl(disk size) ok
>> block.5.2: /dev/sdc serial
block: name = sdc1, path = /class/block/sdc1
>> int.4.3: /dev/sdc
read_block0: read error(/dev/sdc, 0, 512): errno 5
P: /devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc
N: sdc
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc
E: DEVNAME=/dev/sdc
P: /devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc/sdc1
N: sdc1
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc/sdc1
E: DEVNAME=/dev/sdc1
/devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc
name: /dev/sdc
/devices/pci0000:00/0000:00:04.0/0000:05:00.0/host0/target0:0:0/0:0:0:0/block/sdc/sdc1
name: /dev/sdc1
<6>[51502.295211] sd 0:0:0:0: [sdc] Unhandled error code
<6>[51502.295226] sd 0:0:0:0: [sdc] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
<3>[51502.295246] end_request: I/O error, dev sdc, sector 1465148528
[Deleted repetitive lines]
SysFS ID: /class/block/sdc
Device File: /dev/sdc
Device Files: /dev/sdc, /dev/block/8:32, /dev/disk/by-path/pci-0000:05:00.0-scsi-0:0:0:0
SysFS ID: /class/block/sdc/sdc1
Device File: /dev/sdc1
Device Files: /dev/sdc1, /dev/block/8:33, /dev/disk/by-path/pci-0000:05:00.0-scsi-0:0:0:0-part1

So how did deleting a partition cause this issue, and how do I correct the problem? It is possible that my difficulties encrypting the first new drive are related (it's not my first time doing that successfully). It seems the problem is in the kernel or its configuration. I have invested many hours in forums and on Google and tried dozens of possible fixes. I'm beginning to suspect system corruption or a bug; however, all other system functions are working perfectly.

Any thoughts or suggestions will be *enthusiastically* welcomed.

Hm, it is all a bit confusing. Before we carry on, can you please use the CODE tags around computer text? You tried to make things more readable by making some text bold (B), but CODE is much better. And then simply paste the whole text, thus not:
# fdisk sdc results in: Unable to open sdc
but:

boven:~ # fdisk sdc

Unable to open sdc
boven:~ #

Which immediately shows one of the things that confused me, because:

boven:~ # fdisk sda

Unable to open sda
boven:~ # fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x1549f232

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         262     2104483+  82  Linux swap / Solaris
/dev/sda2             263        2873    20972857+  83  Linux
/dev/sda3            2874       15000    97410127+  83  Linux
/dev/sda4   *       15001       38913   192081172+   f  W95 Ext'd (LBA)
/dev/sda5           15001       17611    20972826   83  Linux
boven:~ #

shows that just saying *fdisk sdc* is not correct (when you want to know if fdisk sees */dev/sdc*). And it would be nice to have the *fdisk -l* output.
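So, to actually have fdisk look at the dock drive, the full device path must be given, for example (output left out here; the name of course depends on what the kernel assigned):

boven:~ # fdisk -l /dev/sdc

or simply *fdisk -l* without arguments to list every disk the kernel knows about.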

You say: the system detects the insertion. How do you know?
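For example, the output of something like this, taken directly after putting a drive in the dock, would show whether the kernel really sees the disk and whether it sees any partitions on it (please post it inside CODE tags):

boven:~ # dmesg | tail
boven:~ # cat /proc/partitions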

Also, if I understand correctly that all the disks you put into the dock show problems, you should not exclude a hardware problem.

I'm keeping my fingers crossed that I may be able to contribute something here, but I hope someone else chips in, even if it is to tell me I am talking ****.

My guess is that the symptoms you describe are a legacy of your deleting the partition on the first "removable" drive on a second PC. I think that the residue on your first PC from the sdc assignment is probably messing things up, since you can't now deassign sdc and you can't remount it. I recommend that you make use of your backups to get rid of the residue. If you haven't made backups, I'm sorry, but I do not know enough to tell you where to look for all of the offending stuff. In a similar situation I would have to re-install to fix the problem (not quite true, because I back up regularly).

Having been a bit negative so far I’ll now try to be more helpful.

If you examine the contents of the directory /dev/disk you will find four directories: by-id, by-label, by-path and by-uuid. Their contents demonstrate that for each connected drive and partition the OS records a whole host of information. If you consider the uuid information (uuid stands for universally unique identifier), you might expect trouble if you try to automount a drive at startup (after using the Partitioner to permanently assign the first drive in the docking station as sdc) with a different drive in the docking station: when the OS tries to mount the drive assigned as sdc, it will be unable to find the drive with the uuid recorded when you first assigned sdc in the Partitioner. So with a (pseudo) "removable drive" system, I would recommend that you forget about permanently assigning such drives as sd-whatever. Allow the OS to reassign "removable" drives each time they are discovered. You will always be able to find any connected but unmounted drives using Nautilus (Gnome) or Dolphin (KDE). All you do to mount the drive is left-click the icon for the unmounted drive and then enter the root password in the resulting dialogue box (but be patient, since the OS will check the drive etc. before you get to see the contents).

What I suggest you do in the Partitioner when partitioning/formatting any new drive in the docking station is to choose Do not mount partition in the Mounting Options section.

One application that you might find tremendously helpful in this situation is "Gnome Disk Utility", because it has the unique ability to relabel Ext4 formatted drives (you can't do this in the Partitioner). That way you can give your "removable" drives useful names such as "Backups1", which will probably make mounting from the command line easier if you ever need to do it.
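To make this a bit more concrete, here is roughly what it looks like from a terminal; the device name and label are only examples, and I believe blkid and e2label (part of e2fsprogs) are on a default install, but do check first:

ls -l /dev/disk/by-uuid/        # one symlink per connected partition, named after its uuid
blkid /dev/sdc1                 # print the uuid and any label of a partition
e2label /dev/sdc1 Backups1      # set the label of an Ext4 filesystem from the command line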

I hope I’ve remembered everything here in one go, but if I have it will be a first.

Terry.

*** Problem solved ***

Hvv: I think you are saying that, to eliminate confusion, I should cut and paste the text output instead of using a shorthand format. I thought it better to simplify and reduce the size of the post, but I see how my idea of "better" may be confusing. I will use your suggestion for future posts.

Terry: It's hard to make sense of so much detail, and I appreciate your valuable input. This is what happened: I installed Gnome Disk Utility and had some trouble getting it running, so I rebooted. When the system came back up, the external drive began making its familiar handshake sound as SUSE probed the drive. I checked fdisk and /dev/sdc1 was there!! I mounted it and immediately began the backup process. It's happily drinking up data as I write this. How do you explain that?? Whatever the cause, it appears that installing the disk utility corrected it, along with the humidity level in Asia and the alignment of the planets. So I'm up and running after 7 days of frustration.

I am researching grub and persistent drive assignments. I don't fully understand how to implement them so that grub doesn't get confused by devices or CD/DVD media present during boot, so for now I boot without external drives or media active and later hit the power button when I'm ready to back up; SUSE then detects and reports the device as active and I mount it manually. That will have to do until I can clear my desk and my brain to figure it all out. For the time being, I am hoping that manually mounting the external drives to a common directory will suffice.
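For the record, the manual mount I have in mind is nothing more elaborate than this (the mount point and label are only examples, and the label idea comes from Terry's suggestion above):

mkdir -p /mnt/backup               # one common mount point, created once
mount LABEL=Backups1 /mnt/backup   # mount whichever labelled backup drive is in the dock
umount /mnt/backup                 # when the backup run has finished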

Thank you both for the help.

Gene

Nice you have sorted it out!

I can understand that people (you are not alone) post only what they think is important from the computer output. But the fact that they ask for help is often because they are looking in the wrong direction, and not showing everything forces the helper to follow the same wrong direction. Also, the CODE tags preserve the layout of the computer output and thus make it more readable.

I do not know if this will increase your knowledge about partitions, etc, but you may want to read SDB:Basics of partitions, filesystems, mount points - openSUSE

Unless you booted with the docking drive switched off (in those circumstances the OS just seems to ignore a permanent sd-whatever drive assignment that it cannot carry out; yes, I have observed this myself), then no, I cannot explain your stroke of luck either.

Some background: I made similar but not identical mistakes with Antec MX-1 SATA hdd housings (eSATA/USB 2.0 interface). If I recall correctly, in my case I had used the Expert Partitioner to permanently mount an eSATA drive with the Fstab options set to Mount in /etc/fstab by Volume label (I had previously labelled this and other drives using Acronis Disk Director). The consequence was that when I substituted different drives, each one mounted under the same permanently assigned volume label rather than its actual individual label. This experience led me to the non-permanent (equates with interchangeable) drive policy that I have recommended to you.
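For anyone following along, a mount-by-volume-label entry in /etc/fstab looks roughly like this (the label, mount point and options are purely illustrative), which may make the behaviour I describe above easier to picture:

# illustrative mount-by-label entry; label, mount point and options are only examples
LABEL=Backups1   /backup   ext4   defaults   0 2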

I’m glad you have found a solution to your immediate problem whatever the reason.

Terry.