Unbootable system: missing volume group

I have an unbootable system with a missing VG and I am not sure how to fix it. There are several important documents and scripts on the instance.

Overview -


# fdisk -l /dev/sda
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: TOSHIBA MK1002TS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C68C7C11-349D-4555-A7CE-AE7BA5FE57B9

Device         Start       End   Sectors  Size Type
/dev/sda1       2048    526335    524288  256M EFI System
/dev/sda2     526336 703072255 702545920  335G Linux LVM
/dev/sda3  703072256 733792255  30720000 14.7G Linux swap

# file -s /dev/sda2
/dev/sda2: LVM2 PV (Linux Logical Volume Manager), UUID: yXvt38-LxRP-UDEI-d4GE-pbk8-hypy-gdwefU, size: 359703511040

# pvs
  PV         VG     Fmt  Attr PSize   PFree  
  /dev/sda2         lvm2 ---  335.00g 335.00g
  /dev/sdc8  deb_lv lvm2 a--  183.12g      0 
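
The empty VG column for /dev/sda2 looks like the core of the problem: LVM still sees the PV label, but no longer associates it with any volume group. Two read-only lvm2 checks show how much per-PV metadata survives (neither writes anything to disk; the extra pvs fields are standard report columns):

# pvdisplay /dev/sda2
# pvs -o +pv_mda_count,pv_mda_free /dev/sda2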

# lvmdiskscan
  /dev/deb_lv/deb_root        [     <47.50 GiB]
  /dev/sda1                   [    256.00 MiB]
  /dev/deb_lv/deb_tmp         [      9.31 GiB]
  /dev/sda2                   [    335.00 GiB] LVM physical volume
  /dev/deb_lv/deb_var         [     37.25 GiB]
  /dev/sda3                   [    <14.65 GiB]
  /dev/deb_lv/deb_log         [    <27.94 GiB]
  /dev/deb_lv/deb_audit_log   [      2.79 GiB]
  /dev/deb_lv/deb_home        [    <58.34 GiB]
  /dev/mapper/s_r             [    <77.00 GiB]
  /dev/sdb1                   [    256.00 MiB]
  /dev/sdb2                   [      2.00 GiB]
  /dev/sdb3                   [     77.00 GiB]
  /dev/sdb4                   [      5.00 GiB]
  /dev/sdb5                   [     41.00 GiB]
  /dev/sdb6                   [     22.00 GiB]
  /dev/sdb7                   [      2.50 GiB]
  /dev/sdb8                   [      5.00 GiB]
  /dev/sdb9                   [     21.00 GiB]
  /dev/sdb10                  [     20.00 GiB]
  /dev/sdb11                  [     55.00 GiB]
  /dev/sdb12                  [     16.00 GiB]
  /dev/sdb13                  [    664.76 GiB]
  /dev/sdc1                   [     30.00 GiB]
  /dev/sdc2                   [    516.51 GiB]
  /dev/sdc3                   [    200.00 MiB]
  /dev/sdc4                   [    185.00 GiB]
  /dev/sdc5                   [     15.00 GiB]
  /dev/sdc6                   [    191.00 MiB]
  /dev/sdc7                   [      1.49 GiB]
  /dev/sdc8                   [   <183.13 GiB] LVM physical volume
  /dev/sdd1                   [     <2.73 TiB]
  7 disks
  23 partitions
  0 LVM physical volume whole disks
  2 LVM physical volumes


Please help me fix this and I will buy you a beer.

There’s not enough information.

What happens if (as root) you use the command:

vgchange -a y

In particular, does that change what you see with “/dev/mapper”?

It just shows the Debian VG on /dev/sdc8. I can boot into that, but not into the Leap install. I have now dd'ed 255 512-byte sectors from the start of /dev/sda2 and opened the file with vim to figure out the VGs. I can possibly fix it this way, but there are several VGs inside, and it may take a few hours or days given the panic at my end.
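
For reference, this is roughly what I did to pull those sectors (the output path is just my choice; since the LVM metadata area is plain text, strings makes it easier to scan than raw vim):

# dd if=/dev/sda2 of=/root/sda2-head.bin bs=512 count=255
# strings /root/sda2-head.bin | less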


# vgchange -a y
  6 logical volume(s) in volume group "deb_lv" now active

I would prefer a quick fix if possible.

The metadata is intact -

# pvck /dev/sda2
  Found label on /dev/sda2, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=1044480
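
Given those numbers, the whole text metadata area can be carved in one go (bs/skip/count derived from pvck's offset=4096 and size=1044480; the output name is arbitrary):

# dd if=/dev/sda2 bs=4096 skip=1 count=255 of=/root/sda2-lvm-meta.txt

The carved file should contain one or more plain-text "vgname { ... }" configuration sections; the newest complete one is the candidate for a restore.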

I don’t have an easy answer. Obviously the volume group is damaged in some way. If you have been making backups of the data, then now is the time to use those backups.

I have backups up to the 20th of August, and I have dd'ed the broken instance. I lost about two weeks of work; I will keep working on recreating the VG and see how that goes.
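
My rough plan, in case it helps someone else later. Everything gets tested against the dd image first; I am assuming here it was saved as /root/sda.img, <vgname> stands for whatever name turns up in the carved metadata, and /root/restore.cfg is the newest complete VG section copied out of it:

# losetup --find --show --partscan /root/sda.img
# pvcreate --uuid yXvt38-LxRP-UDEI-d4GE-pbk8-hypy-gdwefU --restorefile /root/restore.cfg /dev/loop0p2
# vgcfgrestore -f /root/restore.cfg <vgname>
# vgchange -a y <vgname>

losetup prints the loop device it picked (/dev/loop0 assumed above, so the LVM partition appears as /dev/loop0p2), the UUID is the one file -s reported for /dev/sda2, and pvcreate may ask for confirmation because a PV label is already present. Only if that run activates cleanly would I repeat the pvcreate/vgcfgrestore pair against /dev/sda2 itself.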