Hello everyone!
I have been using a RAID1 array for more than two years without any kind of problem on openSUSE. About a year ago I moved to Tumbleweed, which inherited the RAID in its original configuration without any glitches. Once a power connector on one of the disks got accidentally pulled out, and after fixing that, everything worked perfectly. I could describe myself as a proud and satisfied md user.
This week, after a Tumbleweed update (openSUSE-release-20170626-1.2.x86_64), the array no longer mounted. In fact, it had disappeared from YaST. Using YaST I redefined it (without formatting the partitions!) and then manually ran:
mdadm --assemble /dev/md0
which worked again. However, any time I try to mount it, I get the same error:
# mount /dev/md0 /mnt/INOUT
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
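Following the hint in the error message, these are the commands I would expect to reveal what the kernel objected to and whether a filesystem signature is still present on the array (a diagnostic sketch; all three tools are standard, but the output naturally depends on the actual disks):

```shell
# Kernel log entry for the failed mount attempt (often names the fs driver and the reason)
dmesg | tail -n 20

# Probe for a filesystem signature on the assembled array:
# if ext4 or btrfs metadata survives, blkid prints a TYPE= field
blkid /dev/md0

# Independent signature check via file(1), reading the device directly
file -s /dev/md0
```

If `blkid` prints nothing at all, the start of the filesystem is likely damaged or overwritten, which would point toward recovery from a member partition rather than the array itself.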
These are the results of some diagnosing commands:
# cat mdadm.conf
DEVICE containers partitions
ARRAY /dev/md0 UUID=5f8c1470:011b45c5:a2b2f6f9:d50cf7b9
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Mon Jun 26 23:50:15 2017
Raid Level : raid1
Array Size : 1953513280 (1863.02 GiB 2000.40 GB)
Used Dev Size : 1953513280 (1863.02 GiB 2000.40 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jun 27 07:28:22 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : any:0
UUID : 5f8c1470:011b45c5:a2b2f6f9:d50cf7b9
Events : 4131
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
# mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 5f8c1470:011b45c5:a2b2f6f9:d50cf7b9
Name : any:0
Creation Time : Mon Jun 26 23:50:15 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3907026912 (1863.02 GiB 2000.40 GB)
Array Size : 1953513280 (1863.02 GiB 2000.40 GB)
Used Dev Size : 3907026560 (1863.02 GiB 2000.40 GB)
Super Offset : 3907026928 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : f369afb2:dae809b7:8b3d710d:9b3cc377
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Jun 27 07:28:22 2017
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 28e8ee90 - correct
Events : 4131
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
# mdadm -E /dev/sdc1
/dev/sdc1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 5f8c1470:011b45c5:a2b2f6f9:d50cf7b9
Name : any:0
Creation Time : Mon Jun 26 23:50:15 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 3907026912 (1863.02 GiB 2000.40 GB)
Array Size : 1953513280 (1863.02 GiB 2000.40 GB)
Used Dev Size : 3907026560 (1863.02 GiB 2000.40 GB)
Super Offset : 3907026928 sectors
Unused Space : before=0 sectors, after=352 sectors
State : clean
Device UUID : baf97132:a11d8af0:4257a807:97048efb
Internal Bitmap : -16 sectors from superblock
Update Time : Tue Jun 27 07:28:22 2017
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 602d94d2 - correct
Events : 4131
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sdc1[1]
1953513280 blocks super 1.0 [2/2] [UU]
bitmap: 0/15 pages [0KB], 65536KB chunk
unused devices: <none>
Unfortunately I cannot remember how I initially formatted the partitions, but I am completely sure it cannot be anything other than btrfs or ext4.
And I am fairly sure (but not certain) that there is no LVM on top of the RAID. (How can I test for this?)
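To answer my own question as far as I understand it, the following checks should settle whether LVM sits on top of the array (a sketch; `pvs` comes from the lvm2 package and the other two from util-linux):

```shell
# If md0 were an LVM physical volume, it would be listed here
pvs

# Low-level probe: would report TYPE="LVM2_member" for an LVM PV,
# or the filesystem type (ext4, btrfs, ...) otherwise
blkid -p /dev/md0

# With no options, wipefs only LISTS the on-disk signatures it finds
# (it wipes nothing unless given -a), so this is a safe read-only check
wipefs /dev/md0
```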
I would like to (preferably) mount the RAID again, or alternatively mount one of the member partitions as a loop device to extract the data before reformatting. I have tried this, but it seems I need a parameter from the 'mdadm -E /dev/sd**1' output (Data Offset) which is not shown for my metadata version.
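If I read the mdadm docs correctly, metadata version 1.0 stores the superblock at the END of the device (note the Super Offset of 3907026928 sectors above, right near the partition's end), so the data starts at sector 0 and no Data Offset line is printed because it is simply zero. Under that assumption, a member should be mountable directly, read-only, without any loop-device offset (a sketch; /mnt/INOUT is my existing mount point):

```shell
# Stop the array first so the member partitions are not busy
mdadm --stop /dev/md0

# Metadata 1.0 => data begins at sector 0, so try mounting a member directly.
# Read-only, to avoid desynchronizing the mirror halves.
mount -o ro /dev/sdb1 /mnt/INOUT

# If the filesystem turns out to be btrfs, forcing the type may help:
# mount -o ro -t btrfs /dev/sdb1 /mnt/INOUT
```

Afterwards the array can be reassembled with `mdadm --assemble /dev/md0`. Does this reasoning hold, or am I missing something?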
Any help will be greatly appreciated.
Pablo G**