Assembling an mdadm device on bootup and mounting it

I’ve installed 3 x WD1502FAEX hard drives in a RAID 5 array. I boot my openSUSE installation off a different hard drive.

One of the drives is on the 6Gbps Intel P67 SATA controller and the other two are on the 6Gbps Marvell 88SE9128 controller. I’ve disabled hardware RAID on both controllers.

I created a RAID 5 array with mdadm from /dev/sdb1, /dev/sdc1 and /dev/sdd1, waited for the initial build to finish, and then formatted the device /dev/md0 with ext4.
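
For reference, the creation went roughly like this (I’m reconstructing the exact command from memory, so treat it as approximate):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # watched the initial sync until it completed
    cat /proc/mdstat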

I then appended the output of mdadm --detail --scan to my /etc/mdadm.conf. The file looks like:

DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=MaximusPrimeRaidArray UUID=b518e076:2cc8510e:2bd58f60:dddb9ca8
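
That ARRAY line came straight from redirecting the scan output, i.e. roughly:

    mdadm --detail --scan >> /etc/mdadm.conf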

I’m able to assemble the array if I run mdadm --assemble --scan, but the array doesn’t get assembled automatically on bootup. I notice /dev/md0 doesn’t get created at bootup until I run the --assemble command manually.

Also, when I add a line to /etc/fstab to mount it, I get a “Waiting for /dev/md0” message during bootup - it waits for roughly a minute and then proceeds.

One more thing I noticed: while assembling with mdadm I see the following log lines in dmesg:

  458.529221] md: md0 stopped.
  458.530622] md: bind<sdc1>
  458.530711] md: bind<sdd1>
  458.530802] md: bind<sdb1>
  458.532891] bio: create slab <bio-1> at 1
  458.532910] md/raid:md0: device sdb1 operational as raid disk 0
  458.532913] md/raid:md0: device sdd1 operational as raid disk 2
  458.532915] md/raid:md0: device sdc1 operational as raid disk 1
  458.533341] md/raid:md0: allocated 3230kB
  458.533391] md/raid:md0: raid level 5 active with 3 out of 3 devices, algorithm 2
  458.533393] RAID conf printout:
  458.533394]  --- level:5 rd:3 wd:3
  458.533396]  disk 0, o:1, dev:sdb1
  458.533398]  disk 1, o:1, dev:sdc1
  458.533400]  disk 2, o:1, dev:sdd1
  458.533428] md0: detected capacity change from 0 to 3000598790144
  458.546288]  md0: unknown partition table

I even added raid456 to INITRD_MODULES in my /etc/sysconfig/kernel and rebuilt the initrd using mkinitrd, but that didn’t help.
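
Concretely, that step was roughly the following (the “...” stands for whatever modules were already listed):

    # /etc/sysconfig/kernel
    INITRD_MODULES="... raid456"

    # then rebuild the initrd
    mkinitrd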

All the devices in the array are active and I don’t face any problems, except that I need to assemble and mount the array manually each time I boot.

It would be very helpful if someone could tell me what I’m missing here. Let me know if you need any other kind of output from mdadm or any of the config files. Thanks.

The contents of my /etc/sysconfig/mdadm look like:

MDADM_DELAY=60
MDADM_MAIL="root@localhost"
MDADM_PROGRAM=""
MDADM_RAIDDEVICES=""
MDADM_SCAN=yes
MDADM_CONFIG="/etc/mdadm.conf"
MDADM_SEND_MAIL_ON_START=no
MDADM_DEVICE_TIMEOUT="60"
BOOT_MD_USE_MDADM_CONFIG=yes

Line added to /etc/fstab:

/dev/md0                                                      /data                ext4       defaults              1 0

UPDATE: I fixed this problem, though in a strange fashion, and I still don’t know what the required change actually was :frowning:

What I did was open the YaST partitioner, which showed the RAID partitions as well as the md0 device. I opened the md0 device and saw that it was set to mount at “/data*” - I didn’t quite understand what the asterisk meant. Anyway, I removed the mount point and added it again - just edited the mount point field. I then saved the changes, rebooted, and now the device gets mounted at bootup - no issues at all.

I checked the YaST partitioner again and now the asterisk after /data was gone. :\

I checked /etc/mdadm.conf as well as my /etc/fstab - mdadm.conf remained unchanged and the line in /etc/fstab changed to this:

/dev/md0                                                      /data                ext4       acl                   1 0

I doubt the change in /etc/fstab was the reason for the fix, because I changed it as follows and it still works fine. I would still like to know what change the YaST partitioner made to get this setup working.

/dev/md0                                                      /data                ext4       acl                   1 2

One more thing: this bootup log taken from dmesg shows the same “unknown partition table” error:

   11.388715] device-mapper: uevent: version 1.0.3
   11.388840] device-mapper: ioctl: 4.18.0-ioctl (2010-06-29) initialised: dm-devel@redhat.com
   11.662815] md: md0 stopped.
   11.663915] md: bind<sdc1>
   11.664015] md: bind<sdd1>
   11.664095] md: bind<sdb1>
   11.668767] bio: create slab <bio-1> at 1
   11.668776] md/raid:md0: device sdb1 operational as raid disk 0
   11.668777] md/raid:md0: device sdd1 operational as raid disk 2
   11.668778] md/raid:md0: device sdc1 operational as raid disk 1
   11.668961] md/raid:md0: allocated 3230kB
   11.668976] md/raid:md0: raid level 5 active with 3 out of 3 devices, algorithm 2
   11.668977] RAID conf printout:
   11.668978]  --- level:5 rd:3 wd:3
   11.668979]  disk 0, o:1, dev:sdb1
   11.668980]  disk 1, o:1, dev:sdc1
   11.668980]  disk 2, o:1, dev:sdd1
   11.668993] md0: detected capacity change from 0 to 3000598790144
   11.683305]  md0: unknown partition table

I’d like to know how serious a problem this is, or whether it’s something I should have taken care of while creating the array. I can still recreate the array, as I don’t have much data on it yet. I’d rather fix such problems now than after filling up my RAID volume. Thanks.

I guess (and it is a guess, because I do not know much about these RAIDs, though I know LVM and there are some similarities) that it is only a warning.
/dev/md0 is a storage device, so you could have partitioned it, which is not what you have done. You have used the whole device for your ext4 file system. Thus, on checking, no valid partition table is found, which is as expected.
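
A quick way to see this is something like the following, which should report the ext4 signature directly on the md0 device, with no partitions underneath:

    blkid /dev/md0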

I get the following on a system with the three disks you see here, which are used for LVM. I destroyed their partition tables by putting LVM information on them with pvcreate.

backup:~ # dmesg|grep partition
 hdb: unknown partition table
 hdc: unknown partition table
 hdd: unknown partition table
backup:~ #

Thus, the same as you get. The same as you would get by creating a file system on a whole physical disk (like /dev/sdb) without partitioning it.

Just a warning, or even less. I would say: just a message.

Yes, that kind of makes sense. Is it advisable to create a DOS/GPT partition table (DOS might not handle huge partition sizes) and then create partitions even when you’d just like a single partition? Any pros/cons here? One con that comes to mind immediately for creating partitions is that resizing the filesystem (upon adding new disks to the array or removing disks from it) then requires partitioning tools on top of resize2fs-like utilities.
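
For example, without a partition layer on md0, growing after adding a (hypothetical) fourth disk would just be something like:

    mdadm --add /dev/md0 /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=4
    # once the reshape finishes, grow the filesystem to fill the device
    resize2fs /dev/md0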

As partitioning was invented when the need was felt to cut the physical disk into more usable pieces, I think it is a bit idiotic to use partitioning when you do not need to cut a disk into pieces.

And as volume-manager types of software have their own way of cutting the logical space they manage into logical pieces, I see no need to use partitioning and logical volume management together.

But as an exercise you could:
a) partition a disk into five partitions (primary and logical);
b) then create LVM physical volumes on each of them;
c) then create one LVM volume group out of these physical volumes;
d) then create four LVM logical volumes on that volume group.
Would you still know what you are doing then, and what influences your performance most?
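
In commands, that exercise would boil down to something like this (device and volume names are invented for the example; the logical partitions are numbered from 5, as usual with MBR):

    # b) physical volumes on each partition (three primary, two logical)
    pvcreate /dev/sdz1 /dev/sdz2 /dev/sdz3 /dev/sdz5 /dev/sdz6
    # c) one volume group out of those physical volumes
    vgcreate vgtest /dev/sdz1 /dev/sdz2 /dev/sdz3 /dev/sdz5 /dev/sdz6
    # d) four logical volumes on that volume group
    lvcreate -L 1G -n lvtest1 vgtest
    lvcreate -L 1G -n lvtest2 vgtest
    lvcreate -L 1G -n lvtest3 vgtest
    lvcreate -L 1G -n lvtest4 vgtest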

But all sorts of combinations are of course possible. It depends (as with so many things) on your needs.
You could e.g. partition a big disk into two same-size partitions and then use those partitions in a mirroring setup. That would protect you against bad sectors, but not against downtime due to a complete failure of the disk. Better would be mirroring two (or more of course, but we keep it simple here) different disks, on two different buses, on two different power supplies (two different electricity companies?). And when the disks are then hot-swappable, you could come near a 24x7 service. Again, it depends on your needs (and promises to your customers). lol!
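
If I understand the mdadm syntax correctly, the difference is only in which partitions you mirror (device names made up):

    # two partitions on the same disk: survives bad sectors, not a dead disk
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sde2
    # or partitions on two different disks: also survives a complete disk failure
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1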

And of course resizing your file system is always needed when you resize the underlying container, be it a partition or a logical volume.

Hi,

Most likely, in the initial case, the RAID partitions’ partition type was not correctly set.

If you assign a partition type of fd (Linux raid autodetect), then the kernel will auto-start a RAID array composed of elements having that type.

When the OP went into the openSUSE partitioning utility, it probably assigned that partition type correctly to the component partitions.

Regards,
Neil Darlow

I am a bit at a loss here. I thought we had just found out that there was no partitioning involved at all, but that the whole (unpartitioned) disk was used. And that the “message” just meant that.
And of course, when there is no partition, there can’t be any partition type connected to it. Or do I completely misunderstand you?

He’s referring to /dev/sdb1, sdc1, and sdd1 not being set to “fd” (RAID auto-detect) as the partition ID. For the RAID block device, /dev/md0, ext4 is the file system that the data is stored on. Two different things, entirely. The whole block device (md0) was used, hence the harmless message.

However, when he initially created his RAID devices (sdb1, sdc1, sdd1), he might not have set them up as “fd” (RAID auto-detect), but instead left them at “83” (Linux). That’s why he had to manually assemble the RAID array every time.

At least, that’s what I understand.

Ideally, it looks something like this:
/dev/sdb1 has a partition ID of “fd”
/dev/sdc1 has a partition ID of “fd”
/dev/sdd1 has a partition ID of “fd”

/dev/sdb1, sdc1, and sdd1 assemble into a RAID array known as /dev/md0

/dev/md0 is a block device formatted with an ext4 file system
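
If the partition ID does turn out to be “83” instead, changing it with fdisk goes roughly like this (prompts quoted from memory, so they may differ a bit; the data on the partition is untouched):

    fdisk /dev/sdb
    Command (m for help): t
    Partition number (1-4): 1
    Hex code (type L to list codes): fd
    Command (m for help): w

Then the same for the other two disks, and re-read the partition tables or reboot.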

Now I understand. Yes, I concentrated on the “message” and not on the other (and real) problem.
In line with what I said above about “knowing about LVM and not knowing about his RAID”.
I also think an fdisk -l could have helped here to identify that wrong-type error. As with every disk usage problem, an fdisk -l is a must. But I did not ask for it because I concentrated on the “message”.

I have just been delivering a letter at the other end of town by bike. That gives one some time to contemplate. And I got a bit hungry for further information.
Now that it seems we have a few people on this thread who know more about this RAID feature, I dare to ask.

My question is a bit theoretical (the fdisk -l listing of the OP is still missing and probably stays so, because the situation has apparently changed since post #1 above). Elaborating on the suggestion of @flansuse of having an sdb1, sdc1 and sdd1: why would one create those partitions at all? I assume (but am not sure) that there will be no sdb2, etc. partitions on those disks. In any case, my idea, when I wanted to create a RAID on some disks, would be to use the whole disks and not even create one partition on them first. Would that work as intended even if there is no underlying partition and thus no underlying partition type? In other words, would the mentioned RAID auto-detection take place or not?

In the LVM case I showed above in post #3, the disks have no partitions, but LVM finds its Physical and Logical parts in its own way of course and does not depend on a partition type of 8e (Linux LVM). But that type does exist.

I think a partition is expected even if it is just one and covers the entire disk. It would certainly be possible to have no formal partition, but modern OSes assume a partitioning scheme. A partition table is really nothing more than a small table that tells the OS the start/end points of the “partitions”. There are no fences or anything.

I know what a partition table is, and I am not that critical about the space lost by a dummy one on a multi-GiByte disk; it is more a matter of principle: why have an extra layer (to administer) which you do not need? But I deny that modern OSes assume partitioning. At least when you call Linux a modern OS.

For a Unix/Linux system all devices are files. In the case of disks, we have the block device special file like /dev/sdc for the whole disk and like /dev/sdc1 for a partition of it. There is no difference in usage from the OS point of view:
. I can create an ext4 (or any other) file system on both with mkfs -t ext4 (well, you can’t have them both at the same time ;).
. I can use both as Physical Volumes in LVM (as shown above, though pvcreate will only create on the whole disk when it sees no partition table, out of precaution).
. Every program that does block I/O can read/write to both; think of dd as an example.
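
For instance (the output file name is just an example, and of course you would not put a file system on both the whole disk and a partition of it at the same time):

    mkfs -t ext4 /dev/sdc                     # file system directly on the whole disk
    mkfs -t ext4 /dev/sdc1                    # file system on a partition of it
    dd if=/dev/sdc1 of=/tmp/part.img bs=1M    # block I/O works the same on either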

Thus my question really is: is this RAID feature an exception? Not because of assumptions about modern OSes, but because of built-in tests and restrictions (the pvcreate precaution mentioned above is also such a built-in test/restriction).

It is quite possible that none of the posters here know, simply because they never needed it and/or tested it (neither did I). All the more because the story that a disk must be partitioned is a widespread urban legend.

Okay just to clarify:

  1. I didn’t use any GUI utility for partitioning (as I usually prefer the command line) - I just used fdisk, created a single partition on each of the drives, and set the type to 0xfd (without which auto-mount would not work at all, like now). Sorry, I should have posted the output of “fdisk -l”, but I assumed there would be no issue with the partitioning of the actual disks themselves. I don’t have access to the home PC right now - if you want more fdisk details I can post them in the evening.
  2. Then I created a RAID array from these single partitions (sdb1, sdc1, sdd1) with a chunk size of 512k.
  3. I was then able to format the RAID device /dev/md0 itself with ext4, explicitly specifying the stride and stripe-width (stride = 512 KiB chunk / 4 KiB block = 128; stripe-width = 128 x 2 data disks = 256):
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0

Ok, the two things I’m now puzzled about are:

  1. What did the YaST partitioner actually change to make the mounting work properly?
  2. Do I really have to care about the “md0: unknown partition table” error/warning in the kernel logs? IOW, should I run fdisk on /dev/md0 to create a partition table and a single partition occupying all of the available space?

UPDATE: I found many people with similar errors:
https://bugs.gentoo.org/271514?id=271514

http://marc.info/?l=linux-raid&m=125797242110594&w=2

This one (from the last link above) seems to answer best:

Actually, it doesn’t matter if you make your md device out of whole
disks or not, it’s the attempt to find a partition table on the md
device itself that is generating this message. This is a side effect of
the recent kernel change to allow any block device to be partitioned and
merely indicates that no partition table was found on your md device.
It’s normal. In fact, it might be worth suggesting that the message go
away. If there’s no partition table people generally know that because
there are no partitioned dev files :wink:

That must have been an interesting bike ride!

As for using the entire disks in the RAID array, you can do that; it won’t stop you. But it’s “recommended” to always use partitions and set their type to “fd”. This may explain the lack of auto-assembly, and it’s nice to know that you followed the software RAID approach “by the book” so that others are likely to be on the same page as you. It also gives you the flexibility to use only part of a disk in a RAID array and the rest of the disk for other purposes, if you wish to get that advanced in your setup. So really, there’s no rule that says you can’t use the entire disk. I, personally, partition (even if it’s only a single partition) and set the type to “fd” before proceeding. But it’s all preference, really.

Auto-assembly may even work without setting the partition IDs to “fd”, perhaps by specifying the exact block devices under /etc/mdadm.conf (/dev/sdb1, /dev/sdc1, /dev/sdd1), rather than using the UUID.
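
I haven’t tested this myself, so double-check it against man mdadm.conf, but I imagine it would look something like:

    DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1
    ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1,/dev/sdd1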

With software RAID, LUKS, and LVM, the combinations and possibilities are nearly endless. That’s why I find it can become very confusing.

As for your comment about LVM, I have used encrypted devices as physical volumes (to be used in a volume group), and it automatically sets up when I boot the system. So I guess it just scans all block devices when you boot up, looking for LVM physical volumes? Maybe someone more knowledgeable can jump in to clarify this.
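
(The same kind of scan can be run by hand, for example:)

    pvscan    # lists block devices that carry an LVM physical volume label
    vgscan    # looks for the volume groups built from those physical volumes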

On #2: definitely not! It is just a message; it is not an error/warning. If everything you saw in dmesg were an error/warning, you would have a lot to worry about. And it is complete nonsense to create a partition table on md0.

I think you’re mistaken here - what he means is: why should the RAID block device itself, /dev/md0, need to be partitioned? (He isn’t talking about partitioning the individual disks that are part of the RAID volume.)

Anyway, check out my previous post - it should explain my partitioning setup.

Thanks everyone for your replies in the thread - :slight_smile:

@ash25: I think it is clear to us that you used disks that each have one partition (and thus do not show the “message”) to create a “virtual” disk named md0, which is itself unpartitioned (hence the “message”).

What you did is all rather normal (though I cannot explain your original problem). It is just that, because of that problem, you carefully read all the dmesg messages, and that made you suspicious where you needn’t be :slight_smile:

The quote you posted says in fact the same as I do: just a message. I do not know how old that quote is, but my openSUSE 10.3 system with kernel 2.6.22.19-0.4 has it as well.

I was responding to the following:

Elaborating on the suggestion of @flansuse of having an sdb1, sdc1 and sdd1: why would one create those partitions at all? I assume (but am not sure) that there will be no sdb2, etc. partitions on those disks. In any case, my idea, when I wanted to create a RAID on some disks, would be to use the whole disks and not even create one partition on them first. Would that work as intended even if there is no underlying partition and thus no underlying partition type? In other words, would the mentioned RAID auto-detection take place or not?

I wasn’t referring to the dmesg warning about /dev/md0 and the lack of partitions or a partition table.

What happened is that two discussions sort of took place within this one thread, so it might have caused some confusion back and forth.

flansuse wrote:
> ash25;2357729 Wrote:
>> I think you’re mistaken here - what he means is - why should the raid
>> block device itself /dev/md0 need to be partitioned (he ain’t talking
>> about partitioning the individual disks part of the RAID volume).
>
> What happened is that two discussions sort of took place within this
> one thread, so it might have caused some confusion back and forth.

Coming in a bit late here but there are two points I’d like to mention.

Firstly, on the subject of ‘to partition or not to partition’, there’s
no right answer. But I do use partitions and my reason has not been
mentioned, so here it is: I have previously had to repair LVM volumes
inside RAIDs built on unpartitioned devices because the start of the
disk has got corrupted when messing about recovering failed disks and
installing grub etc. The good news is that it’s possible to do these
kinds of repairs thanks to backup superblocks etc, but IMHO it’s easier
to avoid the whole problem by having a partition table in place, and in
my case a small padding partition at the start (and end) of the disk.
There’s more explanation at:

https://raid.wiki.kernel.org/index.php/Partition_Types

Secondly, that link also mentions that there is another partition type
0xDA that can have advantages over 0xFD in some circumstances.

Cheers, Dave

Thanks @dhj-novell for the information and link.

This got me looking a bit at man mdadm. One of the things that drew my attention:

--auto-detect
Request that the kernel starts any auto-detected arrays. This can only work if md is compiled into the kernel — not if it is a module. Arrays can be auto-detected by the kernel if all the components are in primary MS-DOS partitions with partition type FD, and all use v0.90 metadata. In-kernel autodetect is not recommended for new installations. Using mdadm to detect and assemble arrays — possibly in an initrd — is substantially more flexible and should be preferred.

Of which I particularly want to point to: “In-kernel autodetect is not recommended for new installations.” This would also indicate that type fd is not very useful anymore. But, as said, I did my merging of three disks into one file system container using LVM (not because of any brilliant thinking, but because I knew LVM from my HP-UX days and was not aware of the md option) and thus saw this man page for the first time just now, so this may be a nonsense observation.
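
(For what it is worth: the mdadm.conf in post #1 shows metadata=1.2, so if I read that man page correctly, the in-kernel autodetect route would not apply to this array anyway. One can check the metadata version on a member partition with something like:)

    mdadm --examine /dev/sdb1 | grep -i version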