How to mount a RAID 1 disk originating from a different system

Hi, I have a RAID 1 disk which I would like to access on a newly installed system to retrieve data (obviously).

What is the trick to mount the SSD?

Thanks, and happy new year!

@ozotto Just using the mount command should suffice. If you run lsblk, what does it show?

localhost:/ # lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0 931.5G  0 disk  
├─sda1        8:1    0  20.5G  0 part  
│ └─md1       9:1    0  20.5G  0 raid1 
└─sda2        8:2    0   911G  0 part  
  └─md127     9:127  0   911G  0 raid1 
sdb           8:16   0 447.1G  0 disk  
└─sdb1        8:17   0 447.1G  0 part  
  └─md0       9:0    0 447.1G  0 raid1 /home/hase/RAID
sdc           8:32   0 447.1G  0 disk  
└─sdc1        8:33   0 447.1G  0 part  
sr0          11:0    1  1024M  0 rom   
nvme0n1     259:0    0 232.9G  0 disk  
├─nvme0n1p1 259:1    0     8M  0 part  
├─nvme0n1p2 259:2    0     2G  0 part  [SWAP]
├─nvme0n1p3 259:3    0    10G  0 part  
├─nvme0n1p4 259:4    0    40G  0 part  
├─nvme0n1p5 259:5    0    80G  0 part  /home/hase/help
├─nvme0n1p6 259:6    0   512M  0 part  /boot/efi
└─nvme0n1p7 259:7    0 100.4G  0 part  /var
                                       /usr/local
                                       /tmp
                                       /home
                                       /srv
                                       /root
                                       /opt
                                       /boot/grub2/x86_64-efi
                                       /boot/grub2/i386-pc
                                       /.snapshots
                                       /
localhost:/ # 


@ozotto Which one of those three incomplete RAID setups is it?

@malcolmlewis … sorry, I should have been more specific!

sda           8:0    0 931.5G  0 disk  
├─sda1        8:1    0  20.5G  0 part  
│ └─md1       9:1    0  20.5G  0 raid1 
└─sda2        8:2    0   911G  0 part  
  └─md127     9:127  0   911G  0 raid1 

The SSD disk (sda2 => md127) is connected to the system via USB.

Regards

I must admit I don’t know what I have done … it is a mess.

Anyway, I had the RAID disk in question connected via SATA and thought I could mount it like

=> mount /dev/sda2 /home/directory-name

That did not work because, I suspect, a partition of type "RAID member" cannot be mounted directly.

I assume that the md devices are the RAID ones that carry the file system, not the sd ones.

But lsblk -f will show whether there are file systems, where they are, and of what type.

@ozotto What does mdadm --examine /dev/sda show?

Then you should be able to run, for example, mdadm --assemble /dev/md1 /dev/sda1 /dev/sdX --run
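The two-step suggestion above can be sketched as a dry run (the device names are placeholders, an assumption; substitute the members your own lsblk shows, and run the printed commands as root to apply them):

```shell
# Placeholder names (assumptions; adjust to your lsblk output).
MEMBER=/dev/sda1      # surviving RAID member partition
ARRAY=/dev/md1        # array device to assemble it into

# Read-only inspection of the md superblock on the member:
EXAMINE_CMD="mdadm --examine $MEMBER"
# Force-start the mirror even though the second half is missing:
ASSEMBLE_CMD="mdadm --assemble $ARRAY $MEMBER --run"

# Dry run: the commands are only printed here; run them as root to apply.
echo "$EXAMINE_CMD"
echo "$ASSEMBLE_CMD"
```

The --run flag matters because mdadm would otherwise refuse to start an array with a member missing.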


@hcvv

localhost:/ # lsblk -f
NAME        FSTYPE            FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda                                                                                           
└─sda1      linux_raid_member 1.0   any:0 1e0f310b-f0a5-6940-2760-29f4f0baf809                
sdb                                                                                           
└─sdb1      linux_raid_member 1.0   any:0 1e0f310b-f0a5-6940-2760-29f4f0baf809                
  └─md0     xfs                           08e7b793-be27-48f4-9162-8c2b5f153d6f  122.4G    73% /home/hase/RAID
sdc                                                                                           
├─sdc1      linux_raid_member 1.0   any:1 47e534d9-ddd4-85c3-f849-4b5f8c588a4c                
│ └─md1                                                                                       
└─sdc2      linux_raid_member 1.0   any:0 fa4b731d-a95f-63aa-8a3b-b5439f1f725a                
  └─md127                                                                                     
sr0                                                                                           
nvme0n1                                                                                       
├─nvme0n1p1                                                                                   
├─nvme0n1p2 swap              1           85201cbc-681e-4276-b5bc-09ef0835de3a                [SWAP]
├─nvme0n1p3 swap              1           2c9ef43e-8624-4b0c-b19c-7fb5618907b9                
├─nvme0n1p4 btrfs                         be2572e5-0f89-4ea0-83ea-eb2775d07766                
├─nvme0n1p5 xfs                           a5f36def-f33e-4d9b-a55e-d4f5d333e5c3   57.6G    28% /home/hase/help
├─nvme0n1p6 vfat              FAT32       18E0-321B                             505.1M     1% /boot/efi
└─nvme0n1p7 btrfs                         8699890b-cc02-4526-8439-1a2637563ffd   64.1G    35% /var
                                                                                              /usr/local
                                                                                              /tmp
                                                                                              /srv
                                                                                              /root
                                                                                              /opt
                                                                                              /home
                                                                                              /boot/grub2/x86_64-efi
                                                                                              /boot/grub2/i386-pc
                                                                                              /.snapshots
                                                                                              /
localhost:/ # 

@malcolmlewis

localhost:/ # mdadm --examine /dev/sdc
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at            1 (type ee)
localhost:/ # 

localhost:/ # mdadm --assemble /dev/md/0_0 /dev/sdc dev/sdX --run
mdadm: Cannot assemble mbr metadata on /dev/sdc
mdadm: /dev/sdc has no superblock - assembly aborted
localhost:/ #

I suspect that we cannot retrieve the file system.
I did not reformat the disk! So the data should still be there. There are only a few files I would like to rescue (*.flac and some PDFs). How can I scan the disk and retrieve those files?

What are your thoughts?

@ozotto Not all is lost. What about cat /proc/mdstat and mdadm --assemble --readonly /dev/md1 /dev/sdc1 --run, and don't change anything else…

@malcolmlewis

localhost:/ # cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sdc2[1]
      955287040 blocks super 1.0 [2/1] [_U]
      bitmap: 7/8 pages [28KB], 65536KB chunk

md1 : active (auto-read-only) raid1 sdc1[1]
      21474176 blocks super 1.0 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdb1[1]
      468859712 blocks super 1.0 [2/1] [_U]
      bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>

localhost:/ # mdadm --assemble --readonly /dev/md1 /dev/sdc1 --run
mdadm: /dev/sdc1 is busy - skipping
localhost:/ # 

Alternatively:

localhost:/ # mdadm --assemble --readonly /dev/md/0_0 /dev/sdc2 --run
mdadm: /dev/sdc2 is busy - skipping
localhost:/ # 

@ozotto So if you run mdadm --stop /dev/md1, then mdadm --assemble --readonly /dev/md1 /dev/sdc1 --run, then cat /proc/mdstat: if it shows md1 as active, you should be able to mount it with mount /dev/md1 /mnt.
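As a minimal sketch of that stop / reassemble / verify / mount sequence (device names taken from this thread; the script only prints each command as a dry run, so nothing is touched until they are run as root):

```shell
# Dry-run sketch of the recovery sequence suggested above.
# md1/sdc1 are the names from this thread; adjust before real use.
STOP_CMD="mdadm --stop /dev/md1"
ASSEMBLE_CMD="mdadm --assemble --readonly /dev/md1 /dev/sdc1 --run"
CHECK_CMD="cat /proc/mdstat"
MOUNT_CMD="mount /dev/md1 /mnt"

# Print the sequence; run the commands as root, in this order, to apply.
for cmd in "$STOP_CMD" "$ASSEMBLE_CMD" "$CHECK_CMD" "$MOUNT_CMD"; do
    echo "$cmd"
done
```

The --readonly flag keeps mdadm from writing to the array metadata, which is the safe choice when the only goal is to copy files off a degraded mirror.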

@malcolmlewis

localhost:/ # 
localhost:/ # mdadm --stop /dev/md1
mdadm: stopped /dev/md1
localhost:/ # 
localhost:/ # mdadm --assemble --readonly /dev/md1 /dev/sdc1 --run
mdadm: /dev/md1 has been started with 1 drive (out of 2).
localhost:/ # 
localhost:/ # cat /proc/mdstat
Personalities : [raid1] 
md1 : active (read-only) raid1 sdc1[1]
      21474176 blocks super 1.0 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid1 sdc2[1]
      955287040 blocks super 1.0 [2/1] [_U]
      bitmap: 7/8 pages [28KB], 65536KB chunk

md0 : active raid1 sdb1[1]
      468859712 blocks super 1.0 [2/1] [_U]
      bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>
localhost:/ # 
localhost:/ # mount /dev/md1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md1, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 

I found the following command … maybe it is helpful:

localhost:/ # mdadm -D /dev/md0
/dev/md0:
           Version : 1.0
     Creation Time : Sun Mar 17 14:18:03 2019
        Raid Level : raid1
        Array Size : 468859712 (447.14 GiB 480.11 GB)
     Used Dev Size : 468859712 (447.14 GiB 480.11 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Jan  2 10:59:48 2026
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : any:0
              UUID : 1e0f310b:f0a56940:276029f4:f0baf809
            Events : 10410

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
localhost:/ # 
localhost:/ # mdadm -D /dev/md/0_0
/dev/md/0_0:
           Version : 1.0
     Creation Time : Thu Jan  1 11:20:12 2026
        Raid Level : raid1
        Array Size : 955287040 (911.03 GiB 978.21 GB)
     Used Dev Size : 955287040 (911.03 GiB 978.21 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Jan  1 18:08:37 2026
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : any:0
              UUID : fa4b731d:a95f63aa:8a3bb543:9f1f725a
            Events : 1413

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
localhost:/ # 
localhost:/ # mdadm -D /dev/md1
/dev/md1:
           Version : 1.0
     Creation Time : Thu Jan  1 13:05:10 2026
        Raid Level : raid1
        Array Size : 21474176 (20.48 GiB 21.99 GB)
     Used Dev Size : 21474176 (20.48 GiB 21.99 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Jan  2 11:01:28 2026
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : any:1
              UUID : 47e534d9:ddd485c3:f8494b5f:8c588a4c
            Events : 41

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
localhost:/ # 

and I tried …

localhost:/ # mount /dev/md/0_0 /home/hase/ext-ssd
mount: /home/hase/ext-ssd: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 

@ozotto mdadm -D /dev/md1 shows sdc1 as active but degraded, so mount /dev/md1 /mnt; or, if it's sdc2 you want, use the same process for that one after stopping.

@malcolmlewis

OK … I stopped md1 and md/0_0.

Since I am trying to access sdc2, I tried to mount md/0_0:

localhost:/ # mount /dev/md/0_0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 


As you suggested further up, I ran:

localhost:/ # mdadm --assemble --readonly /dev/md/0_0 /dev/sdc2 --run
mdadm: /dev/md/0_0 has been started with 1 drive (out of 2).
localhost:/ #

Then I tried to mount:

localhost:/ # mount /dev/md/0_0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 
localhost:/ # mount /dev/md/0_0 /home/hase/ext-ssd
mount: /home/hase/ext-ssd: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 

The following seemed to be the problem:

 wrong fs type, bad option, bad superblock on /dev/md127, missing codepage

I guess (!?) the partition must carry some sort of "header" (superblock). Is it possible to edit or repair it?

You have to stop things first, then use /dev/md127 and /mnt; don't use anything else for the moment…

OK !
…stopped /dev/md/0_0 and /dev/md1 (md1 was not running)

localhost:/ # mount /dev/md127 /mnt
mount: /mnt: special device /dev/md127 does not exist.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 
localhost:/ # 

@ozotto you have to start it as well… look at the steps…

  • check cat /proc/mdstat
  • stop as required (I would stop everything)
  • assemble for the one device only
  • check mdstat output again to ensure only the one you want is running
  • mdadm -D /dev/mdXXX
  • then mount…
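The checklist above could be sketched as one small dry-run script (with /dev/sdc2 and /dev/md127 from this thread as assumed names; it only prints the commands, so nothing happens until they are run as root):

```shell
#!/bin/sh
# Assumed names from this thread; adjust before use.
MEMBER=/dev/sdc2       # surviving member of the array to rescue
ARRAY=/dev/md127       # kernel device name the array appears under
MNT=/mnt               # temporary mount point

# The checklist, as a printed command sequence:
# check state, stop the array, assemble just this one read-only,
# re-check state, inspect the array, then mount it read-only.
CMDS="cat /proc/mdstat
mdadm --stop $ARRAY
mdadm --assemble --readonly $ARRAY $MEMBER --run
cat /proc/mdstat
mdadm -D $ARRAY
mount -o ro $ARRAY $MNT"

printf '%s\n' "$CMDS"
```

Mounting with -o ro is an extra belt-and-braces step on top of the read-only assembly, so nothing on the rescued file system can be modified.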

@malcolmlewis

…stopped md1 and md/0_0 (cannot stop md0; that would take down the test system),
then…

localhost:/ # mdadm --assemble --readonly /dev/md/0_0 /dev/sdc2 --run
mdadm: /dev/md/0_0 has been started with 1 drive (out of 2).
localhost:/ # 
localhost:/ # cat /proc/mdstat
Personalities : [raid1] 
md127 : active (read-only) raid1 sdc2[1]
      955287040 blocks super 1.0 [2/1] [_U]
      bitmap: 7/8 pages [28KB], 65536KB chunk

md0 : active raid1 sdb1[1]
      468859712 blocks super 1.0 [2/1] [_U]
      bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>
localhost:/ # 
localhost:/ # mdadm -D /dev/md/0_0
/dev/md/0_0:
           Version : 1.0
     Creation Time : Thu Jan  1 11:20:12 2026
        Raid Level : raid1
        Array Size : 955287040 (911.03 GiB 978.21 GB)
     Used Dev Size : 955287040 (911.03 GiB 978.21 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Jan  1 18:08:37 2026
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : any:0
              UUID : fa4b731d:a95f63aa:8a3bb543:9f1f725a
            Events : 1413

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
localhost:/ # 
localhost:/ # mount /dev/md/0_0 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
       dmesg(1) may have more information after failed mount system call.
localhost:/ # 

@ozotto But /dev/md127 is the device; you're fixated on this /dev/md/0_0 name all the time. Check each command's output after stopping devices…

Since you're not showing ALL the information, it's hard to surmise; small steps are needed…