openSUSE 11 and RAID failure after image restore

Hi Newsgroup,

I've installed openSUSE 11.0 on a RAID system containing 2 HDDs (RAID 1).
The system was working well. With backup-imaging software I created a full
image of all partitions of the system. Then I replaced the old 2 HDDs with
2 bigger ones. After restoring the image, SUSE cannot boot and the system
hangs. The RAID with the 2 new discs is working properly, so it is not a
hardware problem.
My guess: the RAID system with the old disks has a unique identifier written
into one or more startup files. After changing the discs, SUSE cannot find
the RAID under this identifier.
Can anybody help to solve this problem?
What must I do to bring the system to life on the new hardware?

Oliver

Oliver wrote:

> I've installed openSUSE 11.0 on a RAID system containing 2 HDDs (RAID 1).
> The system was working well. With backup-imaging software I created a full
> image of all partitions of the system. Then I replaced the old 2 HDDs with
> 2 bigger ones. After restoring the image, SUSE cannot boot and the system
> hangs. The RAID with the 2 new discs is working properly, so it is not a
> hardware problem.

Dunno if image cloning also works with RAID volumes :-?

Anyway, does booting hang with any error message?

> My guess: the RAID system with the old disks has a unique identifier
> written into one or more startup files. After changing the discs, SUSE
> cannot find the RAID under this identifier.
> Can anybody help to solve this problem?
> What must I do to bring the system to life on the new hardware?

Well, yes. You cloned the whole system, so you also cloned the /etc/fstab.
Just adapt the fstab entries to point to the current devices and not the old
ones.


ls -l /dev/disk/*
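
For example (a sketch only; the device names below are placeholders, not
your real ones), a cloned fstab entry might look like the first line and
need to be changed to the second:

# old (cloned) entry pointing at a device name that no longer exists:
/dev/mapper/OLD_ARRAY_NAME_part2   /   ext3   acl,user_xattr   1 1
# adapted entry pointing at the device the listing above actually shows:
/dev/mapper/NEW_ARRAY_NAME_part2   /   ext3   acl,user_xattr   1 1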


But maybe you are facing another issue…

Greetings,


Camaleón

Hi,

Yes, the imaging software works well with other RAID systems (RAID 1 or
RAID 5) under Windows.
In the past I used this software to restore a Linux system with one HDD.
It seems the problem is the internal “RAID name” under Linux.
The system starts and generates the following messages:

1. could not find /dev/mapper/isw_ddedgibjfd_ARRAY_part2
2. want to fall back to /dev/mapper/isw_ddedgibjfd_ARRAY_part2? (Y/n)
3. if I choose ‘Y’ → not found – exiting to /bin/sh
4. if I choose ‘n’ → exiting to /bin/sh

I think the problem is finding out the current /dev/mapper/isw_<ARRAYNAME>.
The name /dev/mapper/isw_ddedgibjfd_ARRAY_part2 was given during the first
Linux installation (on RAID 5).
Is it maybe possible to write the new array name back into a boot file?
ls -l /dev/disk* shows me the subdirectories by-id, by-path and by-uuid;
under the subdirectory by-id I can find some ARRAY entries.

With the ls -l command I can see the tree beginning from the root. After
changing to /etc I can't find the fstab file.

Oliver


Oliver wrote:

> Yes, the imaging software works well with other RAID systems (RAID 1 or
> RAID 5) under Windows.
> In the past I used this software to restore a Linux system with one HDD.

O.k. :-)

> It seems the problem is the internal “RAID name” under Linux.
> The system starts and generates the following messages:
>
> 1. could not find /dev/mapper/isw_ddedgibjfd_ARRAY_part2
> 2. want to fall back to /dev/mapper/isw_ddedgibjfd_ARRAY_part2? (Y/n)
> 3. if I choose ‘Y’ → not found – exiting to /bin/sh
> 4. if I choose ‘n’ → exiting to /bin/sh
>
> I think the problem is finding out the current
> /dev/mapper/isw_<ARRAYNAME>.
> The name /dev/mapper/isw_ddedgibjfd_ARRAY_part2 was given during the first
> Linux installation (on RAID 5).
> Is it maybe possible to write the new array name back into a boot file?
> ls -l /dev/disk* shows me the subdirectories by-id, by-path and by-uuid;
> under the subdirectory by-id I can find some ARRAY entries.

Then browse /dev/mapper to find the new array partition. Maybe it has a new
designation. Try listing with “ls -l /dev/mapper/*” and post the result
here.
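
If the rescue shell has the “dmraid” tool available, something like this can
help discover what the BIOS RAID set is called now (a sketch; the exact
output format varies by version):

# scan the raw disks for fake-raid metadata and list the member devices
dmraid -r
# show the discovered RAID sets and their status
dmraid -s
# activate all detected sets so their /dev/mapper nodes appear
dmraid -ay
# then list what device-mapper created
ls -l /dev/mapper/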

> With the ls -l command I can see the tree beginning from the root. After
> changing to /etc I can't find the fstab file.

I’ll try to explain what I think is happening.

As you cloned the whole system onto another pair of disks, you also have to
edit the /etc/fstab file so that openSUSE can boot from the new devices.
Post the content of your /etc/fstab file here so we can take a look.
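
If the array can be activated from the rescue shell, the cloned fstab can be
fixed along these lines (a sketch; “<array_name>_part2” is a placeholder for
whichever partition holds your root filesystem):

# activate the fake-raid set so its partitions show up
dmraid -ay
ls /dev/mapper/
# mount the root partition of the array, using the name from the listing
mount /dev/mapper/<array_name>_part2 /mnt
# edit the cloned fstab to use the current device names
vi /mnt/etc/fstab
umount /mnt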

Greetings,


Camaleón

Hi Camaleón,

I changed the hardware back to the old configuration.
I started the system with the old HDDs and checked the files in /etc.
fstab exists (otherwise the boot would fail) and so do other files like
alias.db.
These files were created at the time the Linux system was installed.
So far so good.
Now I installed the new discs with the restored image and ran:

cd
cd etc
ls -l

The following files and subdirectories exist:

group
modprob.conf.local
modprobe.d → directory
mtab
nsswitch.conf
passwd
scsi_id.conf
splashy → directory
sysconfig → directory
udev → directory

I can't find any more files and directories!
fstab must exist after a correct cloning.
It seems the image is not complete.
The other systems I cloned before with this software were set up with
ReiserFS; the filesystem I use now is ext3.
What do you think about that?
Do you know of a good cloning software for ext3 and swap partitions on RAID
systems?


Oliver wrote:

> I changed the hardware back to the old configuration.
> I started the system with the old HDDs and checked the files in /etc.
> fstab exists (otherwise the boot would fail) and so do other files like
> alias.db.
> These files were created at the time the Linux system was installed.
> So far so good.
> Now I installed the new discs with the restored image and ran:
>
> cd
> cd etc
> ls -l
>
> The following files and subdirectories exist:
>
> group
> modprob.conf.local
> modprobe.d → directory
> mtab
> nsswitch.conf
> passwd
> scsi_id.conf
> splashy → directory
> sysconfig → directory
> udev → directory
>
> I can't find any more files and directories!

That doesn't sound so good :-(

> fstab must exist after a correct cloning.

Yes, sure. After restoring, the partition should contain the same structure,
files and folders as the original.
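
One way to check whether a restore is complete is to mount the restored root
partition from a live CD and look for the missing files directly (a sketch;
the device name is a placeholder):

# mount the restored root filesystem read-only
mount -o ro /dev/mapper/<array_name>_part2 /mnt
# a complete restore should show the full /etc, including fstab
ls /mnt/etc | wc -l
test -f /mnt/etc/fstab && echo "fstab present" || echo "fstab MISSING"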

> It seems the image is not complete.

What imaging program are you using to clone? :-?

> The other systems I cloned before with this software were set up with
> ReiserFS; the filesystem I use now is ext3.
> What do you think about that?

Mmm… I'd expect no filesystem-related problems. ReiserFS and ext3 are both
very well supported by most cloning programs; otherwise it would be
specified.

> Do you know of a good cloning software for ext3 and swap partitions on
> RAID systems?

The last time I cloned a SUSE system I used “Clonezilla” (live CD), but it
was a standard cloning (non-RAID). I really don't know if this program works
in a soft-RAID environment :-?

Greetings,


Camaleón

Hi Camaleón,

I downloaded the Clonezilla software and made an image.
Same result: the restored filesystem looks just like the one from the image
made with the Acronis software.
Maybe the RAID controller is the problem.
At this moment I have no idea how to solve this.
The RAID controller is onboard, and if I connect the HDDs to another RAID
controller I lose all my data.


Oliver wrote:

> I downloaded the Clonezilla software and made an image.
> Same result: the restored filesystem looks just like the one from the
> image made with the Acronis software.

Oh, I see. It seems soft-RAID is not supported (from the Clonezilla FAQ) :-(


Does Clonezilla support RAID?
Clonezilla does support hardware RAID, if your RAID device is seen
as /dev/sda, /dev/sdb, /dev/hda, /dev/hdb, /dev/cciss/c0d0… on GNU/Linux.

On the other hand, if it’s Linux software RAID, no, Clonezilla does not
support that.


> Maybe the RAID controller is the problem.

Not the fake-RAID controller itself, but the combination of software RAID
and cloning.

> At this moment I have no idea how to solve this.

Well, there is the standard way: copy.

There used to be a mini-FAQ listing the steps involved in this for a
standard disk, though I am not sure if it will work for RAID systems:

http://tldp.org/HOWTO/Hard-Disk-Upgrade/copy.html

You will have to tweak some things (GRUB, /etc/fstab and the RAID rebuild),
but it is worth a try :-?

Note: always make a backup copy and keep it in a safe place before doing any
copy operation.
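
Roughly, the copy approach from that HOWTO would look like this when adapted
to a dmraid setup (a sketch only; it assumes both arrays can be attached at
once, the new array is already partitioned, and the names in angle brackets
are placeholders):

# boot a live/rescue CD, activate the fake-raid sets (dmraid -ay), then:
mkdir -p /old /new
mkfs.ext3 /dev/mapper/<new_array>_part2      # new root filesystem
mount /dev/mapper/<old_array>_part2 /old
mount /dev/mapper/<new_array>_part2 /new
# copy everything, preserving permissions, ownership and links
cp -ax /old/. /new/
# point the copied fstab at the new device names
vi /new/etc/fstab
# reinstall the boot loader onto the new array
mount --bind /dev /new/dev
mount --bind /proc /new/proc
chroot /new grub-install /dev/mapper/<new_array>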

> The RAID controller is onboard, and if I connect the HDDs to another RAID
> controller I lose all my data.

As you are using “dmraid” (and not “md”), I guess yes: you will need to put
your drives in a system that has the same RAID controller.
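
To confirm which flavour of software RAID a system uses, these two quick
checks help (a sketch):

# BIOS/fake-raid handled by dmraid: metadata is found on the raw disks
dmraid -s
# kernel md software RAID: active arrays are listed here instead
cat /proc/mdstat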

RAID is not an easy setup. I would never recommend it for home users. A good
backup strategy is far better and easier to maintain than a firmware-based
soft-RAID :-(

Greetings,


Camaleón