openSUSE 11.0: RAID 5 LVM logical volume not mounting.

I have a RAID 5 with 5 disks. A disk failure took the array down; after some struggle I got the RAID 5 up again, the faulty disk was replaced, and the array rebuilt itself. After the rebuild I ran pvscan, but it could not find my /dev/md0. I followed some steps on the net to recreate the PV using the same UUID, then restored the VG (storage) from a backup file. That all went fine.
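For anyone following along, the PV/VG recovery described above looks roughly like this. This is a sketch, not my exact commands: the UUID is a placeholder, and the backup file path assumes the standard LVM backup location.

```shell
# Recreate the PV on the md device, reusing the old PV UUID so the
# saved VG metadata still matches it (UUID below is a placeholder).
pvcreate --uuid "56ogEk-xxxx-xxxx-placeholder" \
         --restorefile /etc/lvm/backup/storage /dev/md0

# Restore the volume group metadata from the LVM backup file.
vgcfgrestore -f /etc/lvm/backup/storage storage

# Activate the restored VG and its logical volumes.
vgchange -ay storage
```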

I can now see the PV, the VG (storage) and the LVs, but when I try to mount one I get an error, “wrong fs type”. I know the LVs are reiserfs filesystems, so I ran reiserfsck on /dev/storage/software, which gives me the following error:
reiserfs_open: the reiserfs superblock cannot be found

Now the next step would be to rebuild the superblock, but I’m afraid I might have configured something wrong in my RAID or LVM setup, and by overwriting the superblock I might not be able to go back and fix things once I’ve figured out what I didn’t configure correctly…
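Before touching the superblock, it may be worth verifying, non-destructively, that the array is actually assembled with the right members, device order and chunk size, since a wrong assembly would shift the data and make an intact superblock unfindable. Something like the following (read-only commands; member device names are my guess, adjust to yours):

```shell
# Read-only inspection; nothing here writes to the disks.
cat /proc/mdstat                  # is md0 up, which members, any rebuild running?
mdadm --detail /dev/md0           # RAID level, chunk size, device order
mdadm --examine /dev/sd[abcde]1   # per-disk md superblocks (adjust device names)
```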

Any help would be greatly appreciated. Thanks again
Signed. Paranoid User…:slight_smile:

Hmmm, I thought that a 5-disk configuration is a RAID 10. RAID 5 usually has 4 disks.

Are you sure about the LVs being reiserfs? Again, that does not seem right.

Yes. It was originally a 4+1, but the spare got sucked into the 5-disk RAID set to maximize storage capacity. The disk layout is: software RAID 5 (/dev/md0) -> PV -> LVM VG (storage) -> 3x LVs -> reiserfs filesystems. I created the RAID about 6 or 7 years ago, maybe even longer, and it has stood the test of time except for the occasional disk replacement. There are also a bunch of additional PVs, mostly direct-attached USB storage, used to grow the LVs. This is a little home NAS storage box.

You do have an odd situation. I assume the RAID is software here? It might help to post fdisk -l output. I admit this is beyond me, but the more hard info you can post, the better the chance someone can see the problem.
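Along those lines, here is a set of commands whose output would be useful to post (all read-only; the /dev/storage path is from the layout described above):

```shell
fdisk -l                       # partition tables of all attached disks
pvs; vgs; lvs                  # LVM summary: physical volumes, VGs, LVs
blkid /dev/storage/*           # which filesystem signatures are visible on the LVs
```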

Not much help to you, but I had the same problem on a single SCSI disc using reiserfs. I lost all of the data on it and had to bin the disc, and I’ve not used reiserfs since. In my case the problem was down to an updated reiserfs driver. I have tried ext4, but found that the comments I have seen suggesting it’s rather slow are true, so I now use ext3. As a for instance, installation took nearly twice as long on ext4.

I notice you seem to be using soft RAID. I did that for a while but have switched to a RAID controller for my home directory. That way odd bits of software can’t get confused, which I suspect is what has happened in your case: your superblock was distributed and hasn’t been rebuilt correctly. My problems occurred some years ago, but it may be worth looking at the various reiserfs utilities that are about. All I can remember is that they were of no help to me, and the disc was my treasure chest, so to speak, so I spent many, many hours trying to recover it.

These days, if I wanted to buy a RAID controller it would be a SATA type, as SCSI discs are too expensive. Even enterprise SATA drives are cheaper, though the internals are likely to be exactly the same as the equivalent SCSI.

Should have added that the minimum for RAID 5 is 3 discs. This is a very useful setup really. I use 3x 73gb SCSI for my home directory, which gives me about 137gb usable. A drive went down a few weeks ago, which is why I upgraded to 11.4. The discs have been in for about 5 years, and the one that was running hottest failed first. I had installed a 6-bay hot-swap bay in my tower and placed the discs right next to each other. The failed disc could have been reformatted and the array rebuilt while I continued using my PC. I ran it on 2 discs for a while and then decided it was time to change them. The only problem with running on 2 discs, or rebuilding onto a new one, is the awful high-pitched whistle the RAID cards give out to let people know there is a problem, but at least I had time to back up and could continue to use my PC.
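The usable capacity follows from RAID 5 keeping one disk’s worth of parity: with N disks you get (N − 1) times the per-disk size. A quick sanity check in shell (the ~68 GiB figure is my assumption for the approximate formatted size of a nominal 73 GB drive):

```shell
#!/bin/sh
# RAID 5 usable capacity: (number of disks - 1) * per-disk size.
disks=3
per_disk_gib=68        # a nominal 73 GB drive is roughly 68 GiB formatted
echo $(( (disks - 1) * per_disk_gib ))   # prints 136, close to the ~137gb above
```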

My setup, as below, may seem a little odd. I use 2 distinct SATA channels for the system and swap, as this allows better throughput than putting the lot on the SCSI RAID. I did run a 10k SATA soft-striped RAID for the system disc, but when SUSE dropped it as an option I found that a single disc was just as fast as far as use is concerned, probably down to the fact that system loads are small and mostly cached in any case. I’m probably going to remove the hot-swap bay and put the discs in my case’s rather well-ventilated drive bays.

I find a 130-odd gb home more than big enough, even though I have maybe 30gb of data files on it and some multimedia on top of that. If more is needed, a decent NAS setup makes more sense. That area looks a bit difficult to me, basically because most of these use discs that are likely to have problems in what I call the short term; they keep putting more and more bytes on a platter. One very respected manufacturer quotes a failure rate of 0.34% per annum, according to ebay. That’s pathetic to say the least. It’s rather strange that there aren’t any 3-drive RAID 5 boxes about that do the job properly at a sensible price. I would be happy with low-reliability drives in a box like that. Ideally hot-swap, but a setup that tells you which drive needs changing and hopefully rebuilds while you work would be fine.