Install new drive w/o messing system up

The drives I am speaking of [internal SATA II, recently changed from IDE Enhanced to AHCI in the BIOS] are only storage drives, but they are set up to be mounted automatically at boot time through YaST2 [and have lines in fstab].

NOTE: I have not yet tried to remove one and reboot in AHCI mode.

Maybe I have set them up improperly, because every time I disconnect one [even after commenting out the line in fstab] and then reboot my OS [openSUSE 11.0], I get sent to a black screen that asks me for the system password and drops me to a [repairfilesystem]# prompt, and I can't start the OS. If I try to repair the OS it does not work, and I end up having to reinstall. Before it takes me to the repairfilesystem prompt, it shows errors on either the / and/or /home partitions.

NOTE: I do not plan on disconnecting these drives often, but I will have to remove one in the near future [it is a backup] and replace it with a new drive that is a replacement for a failed one. I will want to copy the data from the backup drive to the new drive, then shut down, remove the backup drive, and reboot again.

There must be something I am doing wrong. [Google and other searches turn up no similar problems.] So I guess I am looking for a step-by-step procedure for doing this, to maybe see where I am going wrong.

They are not ‘auto mounted’ by YaST because:

  1. Auto mounting is something different (NFS comes in here);
  2. they are mounted at boot time by the system because they are in /etc/fstab (YaST only put those entries there (or you did)).

Better show us the corresponding lines from your /etc/fstab, so we can see where they are supposed to be mounted.

Sorry I took so long getting back. Thank you for the reply.
Here are the contents of fstab; the two MusicShare/MusicStore lines at the end are the storage drives.

/dev/disk/by-id/scsi-SATA_WDC_WD800AAJS-0_WD-WMAP99087155-part1 swap swap defaults 0 0
/dev/disk/by-id/scsi-SATA_WDC_WD800AAJS-0_WD-WMAP99087155-part2 / ext3 acl,user_xattr 1 1
/dev/disk/by-id/scsi-SATA_WDC_WD800AAJS-0_WD-WMAP99087155-part3 /home ext3 acl,user_xattr 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
/dev/disk/by-id/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1 /home/dabud/MusicShare ext3 defaults 1 2
/dev/disk/by-id/scsi-SATA_WDC_WD1600AAJS-_WD-WMAP9E656449-part1 /home/dabud/MusicStore ext3 defaults 1 2
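As a concrete sketch of the "comment it out before pulling the drive" step discussed in this thread: prefixing a line with # makes the boot-time mount skip it. The demo below works on a scratch copy in /tmp so it is harmless to run; on the real system you would edit /etc/fstab itself as root (the device strings are the ones from the listing above).

```shell
# Demo on a scratch copy of the two storage-drive lines from the fstab above;
# edit the real /etc/fstab (as root) the same way before removing the drive.
cat > /tmp/fstab.demo <<'EOF'
/dev/disk/by-id/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1 /home/dabud/MusicShare ext3 defaults 1 2
/dev/disk/by-id/scsi-SATA_WDC_WD1600AAJS-_WD-WMAP9E656449-part1 /home/dabud/MusicStore ext3 defaults 1 2
EOF
# Prefix the MusicShare line with '#' so it is skipped at boot:
sed -i '/MusicShare/ s/^/#/' /tmp/fstab.demo
grep MusicShare /tmp/fstab.demo
```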

You have them mounting inside your /home (/home/dabud/MusicShare).
I'm not sure that would be my choice (or even whether it could be the problem).

Personally I keep storage drives in the / (tree),
so a mount point would be e.g. /MusicShare.
You would need to create that directory first. Some users create folders in /media instead,
so it would be /media/MusicShare.

Worth a try.
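A minimal sketch of that suggestion, with the mount point name taken from the thread. On the real system you would run the mkdir as root against /media itself; the scratch prefix below only keeps the demo side-effect free.

```shell
# Create the mount point suggested above. An empty PREFIX would target the
# real /media; the scratch prefix keeps this demo harmless to run anywhere.
PREFIX=/tmp/demo-root
mkdir -p "$PREFIX/media/MusicShare"
# The corresponding fstab line (device string from the thread) would become:
#   /dev/disk/by-id/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1 /media/MusicShare ext3 defaults 1 2
ls -d "$PREFIX/media/MusicShare"
```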

It makes no difference where in the tree it is mounted unless it is a system mount, and I don't think MusicShare is one.

I have my share mounted in my home folder; why would I want it at /?

Also, removing a drive wouldn't suddenly show a problem with mount points. I'd have a look at the UUIDs before/after removing the HDD.
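One way to do that before/after comparison, as a sketch (assuming /dev/disk/by-uuid is populated by udev, which it is on any reasonably recent Linux):

```shell
# Snapshot the uuid -> /dev/sd* links, remove the drive, snapshot again,
# and diff: any renamed or vanished link stands out immediately.
ls -l /dev/disk/by-uuid 2>/dev/null > /tmp/uuids.before
# ... shut down, pull the drive, boot again, then:
ls -l /dev/disk/by-uuid 2>/dev/null > /tmp/uuids.after
diff /tmp/uuids.before /tmp/uuids.after && echo "no change in uuid links"
```

Run back-to-back with no hardware change, the diff is empty and the trailing message prints; after pulling a drive, the vanished link shows up in the diff output.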

The more I think about it, I reckon it's related to initrd and the fs modules. Though I don't understand why it worked before and doesn't now, unless / changed discs.

Unless I'm mistaken, dropping to the emergency console happens after the kernel loads but before the mounts; it may even be before the kernel. Someone else will need to clarify.

Typo: not fs, I meant disc modules.

I am also a bit late; I was away a few days.

Your fstab shows them as normal disk partitions, to be mounted on boot.
Of course it is correct to have them mounted at the place of your liking.
Now from the Linux point of view, when you comment the entry out in /etc/fstab (by placing a # at the beginning of the line) and you then stop the system, remove the drive and reboot, no mount will be attempted (of course) and I do not see any reason why problems should arise. This is nothing new to you, as you thought the same.

As FeatherMonkey already pointed out, this is rather strange and at a strange point. The only thing I can think of now is the BIOS throwing in something that makes the kernel think something is utterly wrong. Disabling/removing the removed disk in the BIOS before the reboot might help in that case.

Now about those errors on your /home partition: is it possible to have a better description of what they are? I understand it is not possible to cut/paste them here, so you will have to write them down and type them.

Thanks hcvv, I missed the erroring. I think this is related to menu.lst. I reckon it's looking at something it isn't expecting: home instead of root, or a small typo.

I had something similar due to a typo; it kept dropping me to the rescue filesystem. I caught the typo and solved it.

Get to the GRUB CLI (press Esc, I think; I'm sure it gives you the choice on screen). It uses the same syntax as menu.lst.

root (hd0,0)
kernel /vmlinuz root=/dev/sd**
initrd /initrd

Now I think root= is erroring for some reason, which is why I would resort to the GRUB CLI and some trial and error. Now personally I'm not too keen on the by-* names and resort to /dev/, but I doubt this will suit you. So you would need to choose from by-uuid/id/path/label, perhaps by-label.

Whilst trying to solve it, I suspect /dev/ will be the easiest; then fix menu.lst afterwards. If you have tree:

tree /dev/disk

otherwise ls.

I'm not sure, though, what tools you have (at the rescue prompt); IIRC you have a limited set without doing mounts. So if trying to use by-*, it may be easiest to get the info first using a live disc, maybe even the SUSE rescue system. I would resort to /dev/ myself, then fix it properly after booting.
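For completeness, the non-tree way to dump every by-* scheme udev created (run from the installed system or a live disc, since the rescue prompt may lack tree):

```shell
# Recursively list every naming scheme under /dev/disk (by-id, by-uuid,
# by-path, ...); the fallback message covers systems where udev has not
# populated that directory.
ls -lR /dev/disk 2>/dev/null || echo "no /dev/disk on this system"
```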

Well, let us wait for LaQuirrELL. He has some things to work out now.

In the meantime, I do not understand why you are not seeing the advantages of the /dev/disk/by-* schemes. In this case, e.g. when he removes both disks and plugs them in again after a few weeks, but uses different connectors (by not knowing exactly where the disks were connected earlier), the by-* names ensure that the disks are mounted on the right mount points. Else they will be on each other's mount point (or not mounted at all).

These by-* names are nothing to worry about. They are only symlinks to the /dev/sd* special files, and these symlinks are created during boot. So when
/dev/disk/by-id/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1 links to /dev/sdc, then after you put the cable in another place inside the system and reboot,
/dev/disk/by-id/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1 may now link to /dev/sdd.
Mounting by-id will still mount the partition on the right mount point. In the old days you had to edit /etc/fstab to change sdc into sdd. So which is easier/safer?
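The symlink mechanics can be illustrated without the real hardware. The target ../../sdc below is an assumption standing in for whichever node udev happened to pick at boot; only the shape of the link matters.

```shell
# Recreate the shape of a /dev/disk/by-id entry in /tmp: a stable name
# that is merely a symlink to the (possibly changing) /dev/sd* node.
mkdir -p /tmp/by-id.demo
ln -sf ../../sdc /tmp/by-id.demo/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1
readlink /tmp/by-id.demo/scsi-SATA_WDC_WD3200AAKS-_WD-WMAV22446300-part1
```

After a reboot that renumbers the drives, only the link target changes; the by-id name in fstab stays valid, which is the whole point.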

I absolutely agree that in this situation /dev/disk/by-* is better. As for me, HDDs rarely change, and when stumbling through the GRUB CLI, remembering /dev/sd* means I only have 2 characters to get wrong. Those other strings are huge; one typo and it can't find the disk. All thumbs here, I'm no typist.

Then IIRC I thought it wasn't too easy to get the strings from /dev/disk/by-* in the GRUB CLI, but it's been a while since I tried. Though I also have no reason to, and /dev/sd* is just so much easier to remember.

Of course, working with GRUB on the CLI the /dev/sd* method is much, much easier. But when you are doing this, I think that at that moment you are very much aware of what is what on your system.

Edit: at that phase of booting the system, I doubt that the by-* links work already.

But for use in fstab … I see (I am a dreamer) possibilities for e.g. by-label. You have your home directory on a partition with the label HomeofHenk, and on several systems fstab says that this must be mounted on /home/henk. You can move easily between systems with all your stuff.

ty @ hcvv and FeatherMonkey

I think my problem was two-fold, and things [unexplainable by a newbie = ME] were happening.
I think one problem had something to do with trying to run SATA II drives [in openSUSE] under Enhanced IDE in the BIOS. As soon as I switched to AHCI mode in the BIOS [after I upgraded my motherboard from an Asus P5KC to an ASUS P5Q Deluxe], things seemed to run a lot more smoothly.

Another problem I was having was that I ran into a run of failed hard drives [3 in a row] [confirmed by using smartmontools]. I have since replaced the hard drives [using my backup drive] and am awaiting replacements.
Once I receive these new drives, I will be plugging them in, transferring the data from my backup drive, and then removing my backup drive.
At that time I will identify them by-id in the mounting process [YaST2] and mount them in my folder on the /home partition, comment out the line in fstab for the old drive, reboot, and cross my fingers.
I don't know if the OS will fail again, but since I last installed it, after I switched to AHCI mode and got rid of the bad hard drives, it has been running well.
If you have any further helpful comments and/or suggestions, I would appreciate them.

It's not clear whether you were stating or speculating, but the syntax already exists; you write in fstab:

LABEL=HomeofHenk /home/henk ...

Not sure why there's no /dev/disk/by-label directory; maybe udev doesn't read labels, or it's filesystem-dependent.
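For reference, the by-label setup sketched earlier in the thread would look like this. HomeofHenk and /dev/sdc1 are the thread's hypothetical example, not real devices, so the labeling command is shown but not run here.

```shell
# Label the ext3 filesystem once (as root, on the real device; commented
# out here since this demo has no disk to label):
#   e2label /dev/sdc1 HomeofHenk
# The fstab entry that mounts it by that label:
echo 'LABEL=HomeofHenk /home/henk ext3 defaults 1 2'
```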

It doesn't have to. GRUB uses the (hd0,0) syntax to refer to partitions. When it uses /dev/disk/by-id/blahblah, notice that these are kernel parameters, e.g. root=/dev/disk/by-id/blahblah resume=/dev/disk/by-id/foobar. So they are not interpreted by GRUB but by the kernel.