nVidia MCP55 RAID problem

Hi guys, I just installed openSUSE 11.4 64-bit on my workstation. Everything is OK except the RAID controller, which shows some strange behaviour.
I think all the needed software is installed… When I execute

Host-002:~ # dmraid -s
*** Active Set
name   : nvidia_eaccbbae
size   : 5860575360
stride : 128
type   : raid5_ls
status : ok
subsets: 0
devs   : 4
spares : 0
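As a sanity check on those numbers (a sketch only; the assumption that the set rounds each member down to whole 128-sector strides is mine, but the figures line up): raid5 across four disks keeps one disk's worth of parity, so the usable size is three members.

```shell
#!/bin/bash
# One 1000204886016-byte (1.0 TB) member disk, expressed in 512-byte sectors:
member=$(( 1000204886016 / 512 ))            # 1953525168
stride=128                                   # from dmraid -s
per_member=$(( member - member % stride ))   # whole strides only
# raid5 over 4 disks stores one disk's worth of parity -> 3 data members
usable=$(( 3 * per_member ))
echo "$usable sectors = $(( usable * 512 )) bytes"
[ "$usable" -eq 5860575360 ] && echo "matches the size dmraid reports"
```

So the reported size of 5860575360 sectors is exactly the ~3.0 TB one would expect from a healthy 4-disk raid5 set.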

Host-002:~ # ls -la /dev/mapper/
total 0
drwxr-xr-x  2 root root      200 Sep 10 10:43 .
drwxr-xr-x 19 root root     4540 Sep 10 10:43 ..
crw-------  1 root root  10, 236 Sep 10 10:38 control
brw-r-----  1 root disk 253,   0 Sep 10 10:38 nvidia_eaccbbae
lrwxrwxrwx  1 root root        7 Sep 10 10:38 nvidia_eaccbbae_part1 -> ../dm-1
lrwxrwxrwx  1 root root        7 Sep 10 10:38 nvidia_eaccbbae_part2 -> ../dm-2
lrwxrwxrwx  1 root root        7 Sep 10 10:38 nvidia_eaccbbae_part3 -> ../dm-3
brw-r-----  1 root disk 253,   4 Sep 10 10:43 nvidia_eaccbbaep1
brw-r-----  1 root disk 253,   5 Sep 10 10:43 nvidia_eaccbbaep2
brw-r-----  1 root disk 253,   6 Sep 10 10:43 nvidia_eaccbbaep3

but when I try to activate the RAID I get

Host-002:~ # dmraid -ay
RAID set "nvidia_eaccbbae" already active
RAID set "nvidia_eaccbbae" was not activated
RAID set "nvidia_eaccbbaep1" already active
RAID set "nvidia_eaccbbaep1" was not activated
RAID set "nvidia_eaccbbaep2" already active
RAID set "nvidia_eaccbbaep2" was not activated
RAID set "nvidia_eaccbbaep3" already active
RAID set "nvidia_eaccbbaep3" was not activated

Any suggestions?
Here is my lspci output:

Host-002:~ # lspci
00:00.0 Host bridge: nVidia Corporation C55 Host Bridge (rev a2)
00:00.1 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:00.2 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:00.3 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:00.4 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:00.5 RAM memory: nVidia Corporation C55 Memory Controller (rev a2)
00:00.6 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:00.7 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.0 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.1 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.2 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.3 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.4 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.5 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:01.6 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:02.0 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:02.1 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:02.2 RAM memory: nVidia Corporation C55 Memory Controller (rev a1)
00:03.0 PCI bridge: nVidia Corporation C55 PCI Express bridge (rev a1)
00:09.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
00:0a.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
00:0a.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
00:0b.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
00:0b.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
00:0d.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
00:0e.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
00:0e.1 RAID bus controller: nVidia Corporation MCP55 SATA Controller (rev a3)
00:0e.2 RAID bus controller: nVidia Corporation MCP55 SATA Controller (rev a3)
00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
00:0f.1 Audio device: nVidia Corporation MCP55 High Definition Audio (rev a2)
00:11.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
00:12.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
00:18.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
01:00.0 PCI bridge: nVidia Corporation NF200 PCIe 2.0 switch for mainboards (rev a2)
02:00.0 PCI bridge: nVidia Corporation NF200 PCIe 2.0 switch for mainboards (rev a2)
02:02.0 PCI bridge: nVidia Corporation NF200 PCIe 2.0 switch for mainboards (rev a2)
03:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GTS 512] (rev a2)
05:07.0 Network controller: Techsan Electronics Co Ltd B2C2 FlexCopII DVB chip / Technisat SkyStar2 DVB card (rev 01)
05:0b.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)
06:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GTS 512] (rev a2)


thank you

Hello, welcome to the forums.

First, I admit that I am not that fluent with these RAID configurations. But to me it seems that dmraid says that everything is there (one RAID of four devices) and active. You also have the device special files in */dev* for three partitions (if I interpret this correctly). Now what is the strange behaviour you mention?

Sorry, my info was not complete.


Host-002:/mnt # mount /dev/mapper/nvidia_eaccbbae /mnt/storage/
mount: /dev/mapper/nvidia_eaccbbae already mounted or /mnt/storage/ busy

openSUSE says that’s already mounted, but in the mount list there’s no RAID mounted, and /mnt/storage shows nothing

Host-002:~ # mount
devtmpfs on /dev type devtmpfs (rw,relatime,size=2021104k,nr_inodes=505276,mode=755)
tmpfs on /dev/shm type tmpfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
/dev/sda1 on / type ext4 (rw,relatime,user_xattr,acl,barrier=1,data=ordered)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda6 on /home type ext4 (rw,relatime,user_xattr,acl,barrier=1,data=ordered)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda3 on /windows/C type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sda4 on /windows/D type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
securityfs on /sys/kernel/security type securityfs (rw,relatime)
none on /proc/fs/vmblock/mountPoint type vmblock (rw,relatime)

I don’t know if it’s important, but during the activation command the kernel logs this:

Sep 10 11:44:09 linux-k6qg kernel: [ 3939.361508] device-mapper: ioctl: device doesn't appear to be in the dev hash table.
Sep 10 11:44:09 linux-k6qg kernel: [ 3939.367216] device-mapper: ioctl: device doesn't appear to be in the dev hash table.
Sep 10 11:44:09 linux-k6qg kernel: [ 3939.384252] device-mapper: ioctl: device doesn't appear to be in the dev hash table.
Sep 10 11:44:09 linux-k6qg kernel: [ 3939.399211] device-mapper: ioctl: device doesn't appear to be in the dev hash table.


Again, I do not have such a RAID, thus my answer is just based on more common knowledge.

You do not show that /mnt/storage exists; it should. When it does, that is not a problem, but I cannot see that it exists.

You try to mount the whole disk (*/dev/mapper/nvidia_eaccbbae*) and not one of its partitions (*/dev/mapper/nvidia_eaccbbae_part1/2/3*, or the shorter names /dev/dm-1/2/3).

And that brings us to the next step: did you create any file systems on those partitions?

Hi Henk,
/mnt/storage exists, with 777 permissions… and there is already a partition in the RAID array, with an ext4 file system…
But while writing this I am starting to convince myself that ext4 support is missing…
Is that possible? In the latest openSUSE release?

Host-002:~ # mount /dev/dm-3 /mnt/storage/
mount: you must specify the filesystem type

and the kernel log

Sep 10 12:43:01 linux-k6qg kernel: [ 7471.492353] FAT: invalid media value (0x4e)
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.492357] VFS: Can't find a valid FAT filesystem on dev dm-3.
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.492553] hfs: can't find a HFS filesystem on dev dm-3.
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.492812] VFS: Can't find a Minix filesystem V1 | V2 | V3 on device dm-3.
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.492959] REISERFS warning (device dm-3): sh-2021 reiserfs_fill_super: can not find reiserfs on dm-3
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.493408] EXT3-fs (dm-3): error: can't find ext3 filesystem on dev dm-3.
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.493528] EXT2-fs (dm-3): error: can't find an ext2 filesystem on dev dm-3.
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.494764] ISOFS: Unable to identify CD-ROM format.
Sep 10 12:43:01 linux-k6qg kernel: [ 7471.494897] EXT4-fs (dm-3): VFS: Can't find ext4 filesystem
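Each of those kernel lines is one filesystem driver probing for its magic value and giving up. For ext2/3/4 the probe essentially comes down to two bytes: the superblock starts 1024 bytes into the device, and its s_magic field (offset 0x38 within the superblock) must read 0xEF53 little-endian. A throwaway sketch against a scratch file (not the real array):

```shell
#!/bin/bash
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=4 2>/dev/null
# A blank image carries no magic; stamping 0x53 0xEF at byte 1024+0x38 makes
# this check report an ext superblock.
off=$(( 1024 + 0x38 ))
printf '\x53\xEF' | dd of="$img" bs=1 seek="$off" conv=notrunc 2>/dev/null
magic=$(od -An -tx1 -j "$off" -N2 "$img" | tr -d ' ')
[ "$magic" = "53ef" ] && echo "ext magic present"
rm -f "$img"
```

So if the kernel says it cannot find an ext4 filesystem on dm-3, those two bytes simply are not there at that offset.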

confused…

Maybe it is time (it was already, from the beginning :wink: ) that you tell us what the situation is (or what you think the situation is) on that disk. Because:

  1. You try to mount the whole disk where there are clearly three partitions, thus the disk carries a partition table. Are you not aware of this?
  2. You try to mount partition #3 without providing a file system type. This means that mount will first look for the missing information in /etc/fstab. You did not post /etc/fstab, but I guess there is no entry for this partition in there. Then mount tries to find out what the file system type is by looking inside the partition itself. Normally mount does recognise *ext4*. The fact that it does not here proves, imho, that there is no ext4 fs there. You state there is. Did you create it? Do you remember that for sure?

Your doubt that there is support for ext4 in the OS is something you can put in the dustbin yourself, because the list of mounted file systems you provided shows ext4 file systems mounted on */dev/sda1* and */dev/sda6*.

Also remember that all we are discussing now has nothing to do with the fact that the disk is created from a RAID. It is all about normal partitioning, creating file systems and mounting, and not about RAID.

Maybe the output of

fdisk -l

and

cat /etc/fstab

might help.

I am going away for a few hours, but others may come here to help you further and else I will be back myself.

OK, OK, let’s start again…
In the very beginning there was a RAID of four disks, 1 TB each, set up under SUSE 11.2… In the RAID array I created a partition using the whole available space (about 2.83 TB).

After a few months I decided to try Ubuntu, and the RAID array was successfully recognized and mounted.
Not satisfied with Ubuntu, I decided to install the latest SUSE distro… and now here we are…

There are five HDDs installed… On the first:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e719b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   234375167   117186560   83  Linux   <-- root partition
/dev/sda2       234377214   771971444   268797115+   5  Extended
/dev/sda3       771973120   772177919      102400    7  HPFS/NTFS/exFAT  <--winblows partition
/dev/sda4       772177920   976771071   102296576    7  HPFS/NTFS/exFAT  <--winblows partition
/dev/sda5       763585578   771971444     4192933+  82  Linux swap / Solaris  <-- /home
/dev/sda6       234377216   763584511   264603648   83  Linux

and then the raid


Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004eab5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      321535      159744   fd  Linux raid autodetect
/dev/sdb2          321536    42266623    20972544   fd  Linux raid autodetect
/dev/sdb3        42266624  4294961151  2126347264   fd  Linux raid autodetect
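As a cross-check on the fdisk arithmetic above (the Blocks column is in 1 KiB units, so with 512-byte sectors, blocks = (End − Start + 1) / 2):

```shell
#!/bin/bash
# start, end, blocks -- straight from the sdb table above
while read -r start end blocks; do
  [ $(( (end - start + 1) / 2 )) -eq "$blocks" ] && echo "sdb entry checks out"
done <<'EOF'
2048 321535 159744
321536 42266623 20972544
42266624 4294961151 2126347264
EOF
```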

Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004eab5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1            3072      323583      160256   fd  Linux raid autodetect
/dev/sdi2          321536    42266623    20972544   fd  Linux raid autodetect
/dev/sdi3        42267648  4294963199  2126347776   fd  Linux raid autodetect

Disk /dev/mapper/nvidia_eaccbbae: 3000.6 GB, 3000614584320 bytes
16 heads, 63 sectors/track, 5814062 cylinders, total 5860575360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004eab5

                      Device Boot      Start         End      Blocks   Id  System
/dev/mapper/nvidia_eaccbbae1            2048      321535      159744   fd  Linux raid autodetect
/dev/mapper/nvidia_eaccbbae2          321536    42266623    20972544   fd  Linux raid autodetect
/dev/mapper/nvidia_eaccbbae3        42266624  4294961151  2126347264   fd  Linux raid autodetect

Disk /dev/mapper/nvidia_eaccbbae_part1: 163 MB, 163577856 bytes
255 heads, 63 sectors/track, 19 cylinders, total 319488 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/nvidia_eaccbbae_part1 doesn't contain a valid partition table

Disk /dev/mapper/nvidia_eaccbbae_part2: 21.5 GB, 21475885056 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41945088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x50d3b764

Disk /dev/mapper/nvidia_eaccbbae_part2 doesn't contain a valid partition table

Disk /dev/mapper/nvidia_eaccbbae_part3: 2177.4 GB, 2177379598336 bytes
255 heads, 63 sectors/track, 264717 cylinders, total 4252694528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4756d91d

Disk /dev/mapper/nvidia_eaccbbae_part3 doesn't contain a valid partition table
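Two arithmetic observations about the mapper device's table (just number-checking, drawing no conclusions about the tooling): the _part3 mapping size matches its partition entry exactly, and the entries live in 32-bit MBR sector fields, whose maximum (2^32 − 1) is smaller than the 5860575360-sector device itself; partition 3 ends only 6144 sectors short of that limit.

```shell
#!/bin/bash
device=5860575360          # /dev/mapper/nvidia_eaccbbae, in 512-byte sectors
start=42266624             # nvidia_eaccbbae3 entry
end=4294961151
part3_bytes=2177379598336  # size fdisk reports for ..._part3

# the _part3 mapping matches the partition entry exactly
[ $(( (end - start + 1) * 512 )) -eq "$part3_bytes" ] && echo "part3 size matches"

# MBR partition entries hold sector numbers in 32-bit fields
mbr_max=$(( (1 << 32) - 1 ))
echo "headroom below the MBR limit: $(( mbr_max - end )) sectors"
[ "$device" -gt "$mbr_max" ] && echo "device itself exceeds the MBR-addressable range"
```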

cat /etc/fstab

Host-004:~ # cat /etc/fstab
/dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEB530RE0Y96SF-part5 swap                 swap       defaults              0 0
/dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEB530RE0Y96SF-part1 /                    ext4       acl,user_xattr        1 1
/dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEB530RE0Y96SF-part6 /home                ext4       acl,user_xattr        1 2
/dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEB530RE0Y96SF-part3 /windows/C           ntfs-3g    users,gid=users,fmask=133,dmask=022,locale=en_US.UTF-8 0 0
/dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEB530RE0Y96SF-part4 /windows/D           ntfs-3g    users,gid=users,fmask=133,dmask=022,locale=en_US.UTF-8 0 0
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
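For reference, each fstab line holds six whitespace-separated fields: device, mount point, fs type, options, dump flag, fsck pass. A quick awk sketch over the root entry above (none of the nvidia_eaccbbae devices appear in this fstab, so mount has nothing to look up here and must probe the device itself for a type):

```shell
#!/bin/bash
line='/dev/disk/by-id/ata-Hitachi_HDP725050GLA360_GEB530RE0Y96SF-part1 / ext4 acl,user_xattr 1 1'
echo "$line" | awk '{ printf "mountpoint=%s fstype=%s options=%s\n", $2, $3, $4 }'
# prints: mountpoint=/ fstype=ext4 options=acl,user_xattr
```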

This is the actual situation… I’d like to mount /dev/mapper/nvidia_eaccbbae (the 3 TB partition) on /mnt/storage.
You ask me if I remember exactly the file system type of my RAID… well… ahem… now there are some doubts :slight_smile:

Thank you for your patience :slight_smile:

A row error above… the /home partition is obviously mounted on /dev/sda6, not on the swap partition :slight_smile:

This is just to let you know I am alive and kicking; I was away today. Thanks for the info. I think this is enough to study, even if I do not like it when output is tinkered with. I like the whole input/output complete with the prompt (even the prompt can tell things), the statement given (to show the command and all its options without you needing to tell us this), and of course what the computer said. And yes, you tinkered with it and did it wrong there. The mount output you gave earlier and the /etc/fstab you give now already tell what is mounted where. No need to explain that wrongly :frowning:

/home partition is obviously mounted on /dev/sda6,

I guess you mean that the /dev/sda6 partition is mounted on */home*.

I will take some time trying to find out about those partitions. It looks a bit complicated to me: a partitioned disk where all the partitions are glued together again in a RAID, or something like that.

In the meantime, everybody that sees something obviously wrong here, please post :wink:

Hi Henk, don’t worry for the delay…

In the meantime, everybody that sees something obviously wrong here, please post

you’re the man. :wink:

As I said: complicated.

As for sda, I think everything is clear. Let us forget about it.

Then there is sdb, which is partitioned into three: a small one, a middle-sized one and a big one. They are all of type fd, “Linux raid autodetect”, and as such should be detected by the RAID software at boot.

Then let us jump to sdi. It seems to be of the same type as sdb, and it is also partitioned into three, but the sizes are slightly different. All three partitions are of type fd.

Two disks, sdg and sdh, again of the same size, but with no partition table.

Can you explain what these are? And what usage?

And also, where are sdc, sdd, sde and sdf? Normally disk devices are simply “numbered” from sda without holes. Are there any udev rules specially made for this system?

The further we come, the more riddles, imho :frowning:

sda is ok…

sdb, sdg, sdh, sdi are the four hard disks in the raid array…

I never created three partitions on sdb; the sizes are too strange. I suppose these partitions were created by the RAID software to offer RAID 5 features… I suppose.

I think the problem doesn’t reside in sdb/g/h/i, but that the mapper and dmraid are doing something wrong…

I can’t format and create another array; there is too much data here I don’t want to lose.

I remember times when we had people here with dmraid experience. Seems that they are on holiday :frowning:

In any case, I find it strange that there are fd partitions on two of them and nothing on the other two.

On Mon, 12 Sep 2011 19:46:03 GMT, hcvv <hcvv@no-mx.forums.opensuse.org>
wrote:

>
>I remember times when we had people here with dmraid experience. Seems
>that they are on holiday :frowning:
>
>In any case, I find it strange that there are fd partitions on two of
>them and nothing on the other two.

Much agreed. I worry that if the partition tables (disk labels) on sdg and
sdh have been obliterated, the OP will not be able to recover the data.
And I don’t know the dmraid tools, nor whether normal parted can find any
missing RAID partitions.

?-(

I am also afraid that we have a real problem here. And I do not even dare to ask the OP how old his last backup is. I try to stay kind to him, but he does not seem to have any documentation about his system, particularly about that RAID, and most probably does not make backups, let alone an extra backup before he started to install a different distro/version/system. All major sins against good system management. (Please, AhrbokTrexon, do not read the above.)

Back to what we have so far.
As said earlier, I never used the dmraid tool (using LVM myself), but what are the statements to show all and everything the RAID software knows about those disks?

Also, you tried to mount *dm-3*, where mount could not see a valid fs; what about *dm-2* and *dm-1*? The same? And you still seem to have the idea that the whole disk is what you should mount, and not its partitions. I guess that comes from earlier experience, but I cannot explain this.

I am particularly baffled by the fact that fdisk sees */dev/mapper/nvidia_eaccbbae* as a disk, partitioned in three parts which again have the type fd. To me that looks like spiraling around.

Isn’t there anybody out there with a RAID config who can post his *fdisk -l* listing and some dmraid output so that we can make comparisons?

I am also afraid that we have a real problem here. And I do not even dare to ask the OP how old his last backup is. I try to stay kind to him, but he does not seem to have any documentation about his system, particularly about that RAID, and most probably does not make backups, let alone an extra backup before he started to install a different distro/version/system. All major sins against good system management. (Please, AhrbokTrexon, do not read the above.)

It’s not funny… if you’re able to help, then help; otherwise ask someone who knows more than you, avoiding unnecessary considerations on the habits and skills of people you don’t know directly.

This is not funny, and you should be aware that I am trying to get as much information as possible about your setup from you. But you did not provide any precise information about the former set-up (I can show you a wealth of saved configuration files and usage information from today’s installation, and at least one earlier level, of all of my systems), and you said “there is too much data here I don’t want to lose”, from which I come to the conclusion that you have no backup. When this is not the case, then please say so and I will apologize for suggesting that you do not make backups.

Also you must be aware that I asked several times here whether people with more dmraid knowledge could tune in. But I cannot point to some of your and my fellow forum members and press him/her into helping.