First-time poster, long-time user of SUSE.
I have exhausted myself and Google searching for how to install openSUSE on RAID10.
The kicker is that about six months ago I set up RAID10 on my 8-core box with four 1 TB SATA drives.
I installed openSUSE 13.1 on my RAID10 (/dev/md0 = 1.8 TB) alongside my two or three older versions of openSUSE.
I ran this system until two weeks ago, when my motherboard died.
I re-established my RAID10 and other drives on my new motherboard, which included re-raiding my drives.
All my data is still intact. I can boot my old installs. I can chroot into my /dev/md0, look around, and do
some limited stuff, even install software, but for the life of me I can’t find a way to boot into the
Linux installed on /dev/md0. I even backed up /dev/md0 (some 350 GB), erased it, re-raided from scratch, and reinstalled.
I still cannot get GRUB or rescue mode to boot /dev/md0. And yes, I use a separate /boot partition.
Has anyone besides me ever run openSUSE (desktop) on a RAID array?
Or do you have a running RAID box? If so, could you please post your /boot/grub directory.
I use legacy GRUB because I can manually configure menu.lst.
GRUB2 is very antisocial, and the YaST2 bootloader module and GRUB2 were very uncooperative.
So before I get rid of my RAID10 and go back to a single drive, can anyone help me?
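For reference, the rescue-shell sequence I've been attempting looks roughly like this (device names and partition layout here are assumptions; check yours with lsblk first):

```shell
# Assemble the array from its member partitions (names assumed)
mdadm --assemble /dev/md0 /dev/sd[abcd]2

# Mount the installed system plus the separate /boot partition
mount /dev/md0 /mnt
mount /dev/sda1 /mnt/boot            # assumed /boot device
for d in dev proc sys; do mount --bind /$d /mnt/$d; done

# Reinstall legacy GRUB into the MBR from inside the chroot
chroot /mnt grub-install /dev/sda
```

That gets me into the chroot fine; it's the actual boot afterwards that fails.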
My understanding is that the 12.3 installer (which I assume we are speaking of) is broken when it comes to installing on existing FAKE RAID or software RAID disks (as far as I know it should be OK if a real hardware RAID card is used). The workaround is to install 12.2 and then do a live upgrade via zypper to 12.3.
OK, there are three types of RAID: real hardware, FAKE (also known as BIOS-assisted), and software.
The installer for 12.3, and AFAIK also the 13.1 beta, does not work on pre-set-up FAKE and software RAID. It should work with real hardware RAID, since that RAID is transparent to the OS, i.e. the OS thinks it is just a single disk and the hardware does the grunt work.
Don’t install 12.1, since the upgrade can be a bit more painful when moving to 12.3. Install 12.2 and then move to 12.3 via zypper dup.
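A rough sketch of that live upgrade (the repo URL follows the standard openSUSE mirror layout, but verify it against the mirror you actually use before running this):

```shell
# Swap the package repos over to 12.3
zypper modifyrepo --all --disable
zypper addrepo http://download.opensuse.org/distribution/12.3/repo/oss/ oss-12.3
zypper refresh

# Full distribution upgrade to the new release
zypper dup
```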
I never understood setting up a fancy RAID and relying on a 50-cent chip on the main board. LOL, never made sense to me. If you want reliability, give up the bucks and put a real card in. Take care though: some boards’ “RAID” cards are FAKE RAID on a card.
> I never understood setting up a fancy RAID and relying on a 50-cent chip
> on the main board. LOL, never made sense to me. If you want reliability,
> give up the bucks and put a real card in. Take care though: some
> boards’ “RAID” cards are FAKE RAID on a card.
Agree about fake RAID. But for RAID1 or RAID10 there’s no need for any
hardware assist; mdadm does just fine with plain JBOD. RAID5 or RAID6
are a different ball game: they do need hardware, and they do need
thinking about recovery plans with big disks.
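For example, building a four-disk RAID10 from plain partitions with nothing but mdadm (partition names are assumptions; double-check with lsblk, since --create destroys what's on them):

```shell
# Create the array from four plain partitions (DESTROYS their contents)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Record the array so it assembles by UUID at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial sync progress
cat /proc/mdstat
```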
It didn’t go exactly like that, but I at least got booted.
1. I had to use openSUSE 12.2 for the install on md0, as suggested.
2. I had the BIOS, then YaST2, set up my RAID10.
3. After the install, still no luck.
4. For some reason I had to reprogram my BIOS RAID.
5. After a lot of searching through GRUB2’s huge (I hate it) list of old kernels etc. after the install,
I found the config for md0 with an “old”? 3.4.xx kernel (haven’t used that one in ages)
that finally booted into md0. None of my half-dozen newer kernels would boot it.
(I guess I’ll bite the bullet and learn GRUB2.)
6. The whole thing is still a dog’s breakfast. After a lot of upgrades and reloading backup files,
it’s still a mess and flaky as hell.
I am writing this from my RAID install, and I think I may have to erase the whole RAID system and reload from scratch.
I still need to compile a new kernel to see if the whole problem lay with mkinitrd.
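Before recompiling a kernel, it may be worth just regenerating the initrd with the RAID driver forced in; on older openSUSE releases that is controlled roughly like this (exact variable contents are an assumption for a typical md setup):

```shell
# On older openSUSE, the modules baked into the initrd are listed in
# INITRD_MODULES in /etc/sysconfig/kernel; raid10 needs to be there
grep INITRD_MODULES /etc/sysconfig/kernel

# Rebuild the initrds for all installed kernels
mkinitrd
```

If the newer kernels' initrds were missing the raid10 module, that would explain why only the old 3.4.xx kernel could find md0.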
The sad part is that when I backed up the old RAID setup my hidden files didn’t get backed up (a reconfiguring nightmare).
As an aside, how is a 50-cent chip on a motherboard different from a 50-cent chip on a RAID card?
In a search I found RAID10-capable cards from $20.00 to $600.00. Who is satisfied with what card?
I’ll try to build a new installer disk with a newer kernel and OS version.
If anyone wants a follow-up, let me know.
This has been a first-time-poster success story.
“If you don’t know how to program a PDP-8/11, you don’t know computers.” – Me
However, mine is a new install; I am not sure how to do it with an existing setup.
I did find a way to set everything up the way I like, though, with the 12.3 installer,
and I do use GRUB2.
But my server is running on a RAID1 system disk, with the root partition using btrfs.
Also, I have not fully tested this setup or any recovery steps that might be needed.
I was mainly concerned about a disk failing; I had not even thought of my motherboard or controller failing, and I am not sure how to test for that at all.
What I have done in the past is software RAID1 (mirror) for /boot (1 GB on all drives). Then you can put a RAID5 / on the rest of the disks. The BIOS will see a mirrored drive as a single drive, so if you lose a drive the system will still boot and the RAID5 will just run in degraded mode. The trick is that you have to install GRUB to the MBR on each drive, pointing to hd0 for root. If drive A (hd0) fails, on boot drive B becomes hd0.
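With GRUB2 that trick looks roughly like this (device names are assumptions; with legacy GRUB you would use grub-install and edit menu.lst instead):

```shell
# Put a boot sector on every member of the /boot mirror, so any
# surviving disk can still bring the system up
for disk in /dev/sda /dev/sdb; do
    grub2-install "$disk"
done

# Generate the config once; /boot is mirrored, so both disks see it
grub2-mkconfig -o /boot/grub2/grub.cfg
```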
As to RAID cards, they come in two flavors: FAKE (i.e. that 50-cent chip on a card) or real hardware, with a real controller that does all the RAID logic. FAKE RAID requires software in the OS, while real RAID does not require the OS to know anything about the disk setup; to the OS the array looks like a single disk.
I won’t recommend any particular card. Just look at the specs very carefully and make sure that the RAID is NOT BIOS-assisted and that it supports Linux. Any cards I have seen that are real hardware seem to be in the $250+ range, but a high price does not always mean true hardware RAID. Yes, it is confusing, and a lot of cards seem to be way overpriced when they are really based on the same 50-cent chip found on some motherboards.
I may go that route, as my “install” is full of errors. After three “upgrades” by DVD, neither YaST2 nor zypper runs properly.
$250 for a card is a little steep for me. My whole system – Asus M3 MB, 16 GB mem, AMD 8-core CPU and the four SATA3 1 TB drives – cost me much less than $1000.00.
And nothing on it is life-threatening, just lots of “stuff” I’ve been collecting for years that I’d hate to lose. It’s just one giant encyclopedia for me, and I’m retired.
I’ve decided to wipe my RAID and start over clean.
RAID is not a backup. If you want to keep stuff, back it up. RAID is for uptime in case you lose a drive.
Again, I never saw a reason to RAID a desktop. It does make sense for servers that need five-nines uptime. Also, with multi-terabyte drives, limits are being reached with consumer-grade drive error rates that make RAID actually dangerous: you have more bits on the drive than the rated error interval, which makes hitting an error during a full read almost a certainty. Enterprise drives do have at least an order of magnitude lower error rates.
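A back-of-envelope version of that claim (the numbers are typical spec-sheet figures, not measurements): consumer drives are commonly rated at one unrecoverable read error (URE) per 1e14 bits read, so a single full read of a 2 TB array gives

```shell
# P(at least one URE) when reading 2 TB at a 1e-14 per-bit URE rate,
# using the Poisson approximation 1 - exp(-bits * rate)
awk 'BEGIN {
  bits = 2.0e12 * 8              # 2 TB of data, in bits
  rate = 1.0e-14                 # rated UREs per bit read
  printf "P(at least one URE) = %.1f%%\n", 100 * (1 - exp(-bits * rate))
}'
# prints: P(at least one URE) = 14.8%
```

Scale the array up and the odds climb fast, which is exactly what makes a degraded-array rebuild on big consumer disks risky.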
> $250 for a card is a little steep for me. My whole system – asus m3 MB,
> 16G mem, amd 8 core cpu and the 4 sata3-1TB drives cost me much less
> than $1000.00.
> and nothing on it is life threatning, just lots of “stuff” I’ve been
> collecting for years that I’d hate to lose. It’s just one giant
> encyclopedia for me, and I’m retired.
Then simply do not use raid. Seriously.
Use that money to buy backup disks instead, and keep copies of your
important stuff outside of the computer.
Notice that a raid does not protect you from disaster. It only protects
you from ONE particular type of disaster.
For example, if you get a bad power failure and the filesystem is
corrupted, BOTH copies get corrupted. Or you hit a software bug and it
destroys the filesystem… or there is an error and you delete a
directory, format a filesystem… all of that is a disaster on the
entire raided system.
Cheers / Saludos,
Carlos E. R.
(from 11.4, with Evergreen, x86_64 “Celadon” (Minas Tirith))
This whole episode was a great howto. I now have a fairly good understanding of RAID, and I now understand that the whole thing
was pointless, but I had to know. All it cost me was time, and I have plenty of that.
This has been fun, and I appreciate all the responses. I won’t be back here unless I have a new howto to learn.
13.1 works great with RAID10. 12.3 works but always marks the array as dirty on a reboot/shutdown, causing a resync when it comes back up. Two different versions of mdadm: I was NOT able to upgrade 12.3 to the mdadm 3.3 that comes with 13.1. I have production servers I am unable to upgrade off of 12.3 that are currently using RAID10. I did not do my research on this before install and only recently noticed the resync issue. Anyone have any ideas?
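For anyone hitting the same symptom, checking whether the array really went down dirty is quick (the array name is an assumption):

```shell
# A clean shutdown should leave "State : clean"; an active resync
# right after boot is the symptom described above
mdadm --detail /dev/md0 | grep -i state

# Live view of any resync in progress
cat /proc/mdstat
```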
On 2014-04-10 14:46, glenewhittenberg wrote:
> 13.1 works great with RAID10. 12.3 works but always marks the array as
> dirty on a reboot/shutdown, causing a resync when it comes back up. Two
> different versions of mdadm: I was NOT able to upgrade 12.3 to the
> mdadm 3.3 that comes with 13.1. I have production servers I am unable to
> upgrade off of 12.3 that are currently using RAID10. I did not do my
> research on this before install and only recently noticed the resync
> issue. Anyone have any ideas?
I suggest you create a new thread with your problem, instead of hanging
onto an old, partially related thread. If you want help, we need to
concentrate on your post and not be confused by old messages from other people.
Cheers / Saludos,
Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)