openSUSE 11.1 32 bit DVD
md0 RAID1 swap
md1 RAID1 /
After installation with “/” on /dev/md1, the system reboots and shows:
md: md1: raid array is not clean -- starting background reconstruction
raid1: raid set md1 active with 2 out of 2 mirrors
md1: bitmap initialized from disk: read 30/30 pages, set 700520 bits
created bitmap (465 pages) for md1
mdadm: /dev/md/1 has been started with 2 drives
Trying manual resume from /dev/md0
resume device /dev/md0 not found (ignoring)
Trying manual resume from /dev/md0
resume device /dev/md0 not found (ignoring)
Waiting for device /dev/md1 to appear: ok
invalid root file system – exiting to /bin/sh
$
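At that /bin/sh prompt the initrd usually has mdadm and a few basic tools available, so before rebooting it is worth checking what state the arrays are actually in. A rough sketch (exactly which binaries are present depends on the initrd):

cat /proc/mdstat            # are md0 and md1 assembled, and with how many members?
mdadm --detail /dev/md1     # state, UUID and member devices of the root array
blkid /dev/md1              # is there an ext3 signature on the root array at all?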
I have the same problem with my RAID 1. However, in my case, when I start the machine, GRUB loads and displays the error mentioned here; I press “Enter”, then type “exit”, and it leaves busybox and resumes the boot process as usual.
<3>Unable to find swap-space signature
<6>EXT3 FS on md1, internal journal
<6>device-mapper: uevent: version 1.0.3
<6>device-mapper: ioctl: 4.14.0-ioctl (2008-04-23) initialised: dm-devel@redhat.com
<6>md: md0 stopped.
<6>md: bind<sdb1>
<6>md: bind<sda1>
<6>raid1: raid set md0 active with 2 out of 2 mirrors
<6>md0: bitmap initialized from disk: read 1/1 pages, set 0 bits
<6>created bitmap (9 pages) for device md0
<6>loop: module loaded
<4>fuse init (API version 7.9)
Kernel logging (ksyslog) stopped.
Kernel log daemon terminating.
Boot logging started on /dev/tty1(/dev/console) at Tue Dec 23 09:49:47 2008
mdadm: /dev/md/1 has been started with 2 drives.
Trying manual resume from /dev/md0
resume device /dev/md0 not found (ignoring)
Trying manual resume from /dev/md0
resume device /dev/md0 not found (ignoring)
Waiting for device /dev/md1 to appear: ok
invalid root filesystem – exiting to /bin/sh
$
$ exit
exit
Mounting root /dev/md1
Boot logging started on /dev/tty1(/dev/console (deleted)) at Tue Dec 23 09:51:15 2008
done
Starting udevd: done
Loading drivers, configuring devices: done
Loading required kernel modules
doneActivating swap-devices in /etc/fstab…
failedChecking root file system…
fsck 1.41.1 (01-Sep-2008)
/dev/md1: clean, 131936/60915712 files, 4692702/243661837 blocks
doneSetting up the hardware clockdone
Activating device mapper…
done
Starting MD Raid mdadm: /dev/md/0 has been started with 2 drives.
failed
Checking file systems…
fsck 1.41.1 (01-Sep-2008)
donedone
Mounting local file systems…
/proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
nothing was mounted
doneLoading fuse module done
Mounting fuse control filesystemdone
<notice>killproc: kill(614,29)
Creating /var/log/boot.msg
doneSetting current sysctl status from /etc/sysctl.conf
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.rp_filter = 1
fs.inotify.max_user_watches = 65536
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
done
Activating remaining swap-devices in /etc/fstab…
doneMounting securityfs on /sys/kernel/security done
Loading AppArmor profiles Enabling syn flood protectiondone
Disabling IP forwardingdone
done
Setting up hostname 'server3’done
Setting up loopback interface lo
lo IP address: 127.0.0.1/8
IP address: 127.0.0.2/8
done
done
System Boot Control: The system has been set up
Failed features: boot.md
Skipped features: boot.cycle
System Boot Control: Running /etc/init.d/boot.local
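The “Failed features: boot.md” line and the “resume device /dev/md0 not found” messages suggest that the initrd and the boot scripts do not agree with the arrays that actually exist. Once the machine is up (for example via the “exit” trick above), one thing worth trying is to make sure /etc/mdadm.conf matches reality and then rebuild the initrd. A sketch only, not a confirmed fix for this particular bug:

mdadm --detail --scan                        # the arrays as the kernel actually assembled them
cat /etc/mdadm.conf                          # the arrays the boot scripts expect
mdadm --detail --scan >> /etc/mdadm.conf     # only if the ARRAY lines are missing or stale
mkinitrd                                     # rebuild the initrd so md0 (swap/resume) and md1 (root) are set up early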
petrmatula wrote:
> openSUSE 11.1 32 bit DVD
> md0 RAID1 swap
> md1 RAID1 /
I always thought this wasn’t even supposed to work, and have
therefore always created a non-RAID /boot partition whenever the
root partition was to reside on a RAID. Of course, if YaST lets
you create such a configuration, it should work too, but perhaps
the real bug is that YaST no longer asks for a separate /boot
partition in this case.
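For the record, this is roughly the kind of layout meant here, the idea being that /boot stays on a plain partition the boot loader can read on its own. Device names and sizes are only placeholders:

/dev/sda1                plain partition, ext3, /boot
md0 = sda2 + sdb2        RAID 1, swap
md1 = sda3 + sdb3        RAID 1, ext3, /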
I have the same problem, but I have a non-RAID /boot partition, a RAID 0 (striped) root partition (md1) and a RAID 1 (mirrored) /home partition (md0). Funnily enough, I just pressed Ctrl-Alt-Del each time the boot failed, and after about 4 or 5 attempts it booted OK.
I’m experiencing a very similar problem!
On the first reboot after installation (from the 11.1 32-bit DVD) I also get some mdraid and file system errors, and finally this one:
invalid root filesystem -- exiting to /bin/sh
My configuration is:
DELL Optiplex 8200
1 IDE disk on internal IDE controller
3 WD6400AAKS 640GB SATA2 disks connected to a Promise SATA300 TX4 (rev 02) controller (PDC40718)
On the IDE disk I just created the /boot partition (booting from the Promise controller does not seem to be supported by my BIOS).
On each SATA disk I created the following primary partitions during setup:
sd[abc]1: 2 GB, swap
sd[abc]2: 20 GB, partition type 0xFD (Linux RAID)
sd[abc]3: 570 GB, partition type 0xFD (Linux RAID)
From these Linux RAID partitions I created the following RAID5 arrays:
- md0: RAID5 (sda2, sdb2, sdc2), 40 GB, Ext3, Mount:
- md1: RAID5 (sda3, sdb3, sdc3), 1.1 TB, Ext3, Mount: /home
After the first installation failed, I deleted all partition tables on the SATA disks and tried again with similar settings, but it failed again.
When I looked at the actual partitioning with the partitioning tool provided in the emergency system (by selecting “Repair Installed System”), I found that none of the RAID devices had been formatted at all… But I know that I selected ext3 when I created the md devices…
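To double-check that from the repair system: the rescue shell normally has mdadm and blkid, so something along these lines should show whether the md devices ever received a filesystem (a sketch, with the device names from the layout above):

cat /proc/mdstat           # did the rescue system assemble md0 and md1 at all?
blkid /dev/md0 /dev/md1    # prints a TYPE="ext3" entry only if a filesystem was actually created
file -s /dev/md1           # alternative check: reports "ext3 filesystem data" if a filesystem is there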
I forgot to mention that hitting Ctrl+D (as described in the previously mentioned https://bugzilla.novell.com/show_bug.cgi?id=445490) helps in my case too. Nevertheless, I do hope for a real fix for that bug.
Cheers,
Hardy
I’m trying to install openSUSE 11.1 with software RAID 1 on a new Dell Inspiron with a 2.2 GHz Celeron and 2 GB of RAM.
It’s going onto two WD Green 500 GB SATA drives. So far I’ve tried following the partitioning suggestions on this page: How to install openSUSE on software RAID - openSUSE, and I’ve also tried just creating a separate 1 GB non-RAID swap partition on each disk and then one huge root partition from the remaining space on both disks.
Each time it complains that the RAID is not clean and then, after some fiddling around, drops me at a severely crippled bash prompt. I can’t even su to root to shut down, because the sbin directory contains neither su nor shutdown.
I also tried letting the installation disc repair the damage, and this looked promising, as it found and seemed to repair some errors in fstab. But the result was the same.
Needless to say, the fix mentioned above that says to edit a file in /lib/mkinitrd/scripts is of no use to me, as no /lib/mkinitrd/ directory exists on this computer yet.
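If neither the “exit” trick nor Ctrl+D gets the machine up, the usual fallback is to boot the DVD’s rescue system, assemble the arrays by hand and rebuild the initrd from a chroot. A rough sketch only; the device names follow the layouts above and may differ on your machine:

mdadm --assemble --scan        # assemble all arrays described by the on-disk superblocks
cat /proc/mdstat               # confirm the root array (md1 here) came up with all members
mount /dev/md1 /mnt            # mount the root array
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
# mount /boot first inside the chroot if it is a separate partition
mdadm --detail --scan >> /etc/mdadm.conf   # only if the ARRAY lines are missing there
mkinitrd                                   # rebuild the initrd so the md devices exist before root is mounted
exit                                       # leave the chroot, then unmount and reboot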