QEMU/KVM guest cannot find NICs after `qemu-img resize` of main disk image

I resized the main disk image of one of my QEMU/KVM VMs to enlarge its (btrfs) root partition, and now the guest somehow cannot find its virtual NICs, leaving the VM without network.

I restored a backup image I made before virt-resize (but unfortunately after qemu-img resize) and that didn’t work either - same problem.

I tried adding different NICs, including type e1000e, rather than the default virtio type, but none of them are found by the system.
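In case it helps, here is the shape of what I'd expect to see for the virtio NIC in the domain XML (`virsh dumpxml <vm>`, or virt-manager's XML view) when comparing against a working VM; the MAC, network name, and PCI address below are illustrative, not taken from this VM:

```xml
<interface type='network'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source network='vnet0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
```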

I use wicked on this VM and it complains about the missing device on startup:

localhost wicked[780]: enp1s0       no-device

… but I think that only means that there’s a config for it left over, and that now it can’t find the device.
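That would match how wicked works on openSUSE: it brings up whatever has an `ifcfg-<name>` file under `/etc/sysconfig/network`, and "no-device" means the file exists but the kernel never created a matching device. A minimal sketch of the mapping (the path is the standard openSUSE location; the file itself is the assumed leftover):

```shell
# wicked derives the interface name from the config file name:
# /etc/sysconfig/network/ifcfg-enp1s0 -> interface enp1s0
cfg="/etc/sysconfig/network/ifcfg-enp1s0"  # assumed leftover config on this guest
iface="${cfg##*/ifcfg-}"                   # strip the directory and "ifcfg-" prefix
echo "$iface"                              # -> enp1s0
```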

Host and VM are both running Tumbleweed.

ip link show only lists the loopback device.

I also tried cloning the VM, but the cloned one has the same problem.

Other, older VMs on the same host do not experience this issue, so I really think it has something to do with the disk image… but that’s just super weird, right?

Hi and welcome to the Forum 🙂
Are you using virt-manager for your virtual machines? How many interfaces on the system? How was networking set up on the host for the virtual machines?


ip addr
virsh net-list --all

Yes, I mostly use virt-manager on this workstation. I can do virsh, in a pinch (or on a server), but I’m not fluent, so I prefer the GUI.

> How many interfaces on the system?

On the VM: two, one of type virtio, which was auto-added when I created the VM (and used to work just fine), one of type e1000e, which I added later on a hunch.
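For what it's worth, the enp1s0 name wicked is looking for is just systemd's predictable name derived from the NIC's PCI address, so if editing the hardware moved the NIC to a different PCI slot, the device name changes and the old config no longer matches anything. A sketch of the mapping (the 0000:01:00.0 address is an assumption, not read from this VM):

```shell
# systemd predictable naming for PCI NICs: enp<bus>s<slot>
# e.g. PCI address 0000:01:00.0 -> enp1s0
addr="0000:01:00.0"        # hypothetical address of the original virtio NIC
bus=$((16#${addr:5:2}))    # bus field, hex -> decimal
slot=$((16#${addr:8:2}))   # slot field, hex -> decimal
echo "enp${bus}s${slot}"   # -> enp1s0
```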

On the host: one physical, several virtual:


2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 48:4d:7e:da:ac:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.49/24 brd 192.168.178.255 scope global dynamic noprefixroute enp0s25
       valid_lft 4232sec preferred_lft 4232sec
    inet6 2a02:810d:a040:2a0::147e/128 scope global dynamic noprefixroute
       valid_lft 4262sec preferred_lft 1562sec
    inet6 2a02:810d:a040:2a0:ae76:d322:4cd0:32f4/64 scope global temporary dynamic
       valid_lft 86397sec preferred_lft 14397sec
    inet6 2a02:810d:a040:2a0:4dc5:6083:e7a4:e975/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86397sec preferred_lft 14397sec
    inet6 fe80::8e76:48eb:b7e1:5aa1/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:1f:0b:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:31:e0:31:9f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
10: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:f0:7d:36 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fef0:7d36/64 scope link
       valid_lft forever preferred_lft forever

There are a couple of leftover KVM networks from earlier experimentation, but they are disabled and not in use:


 Name       State      Autostart   Persistent
 ----------------------------------------------
 NAT-TUN    inactive   no          yes
 NAT-WLAN   inactive   no          yes
 vnet0      active     yes         yes

vnet0 is the one that’s actually used and that the virtual devices are assigned to.

> How was networking set up on the host for the virtual machines?

vnet0 is configured as a NAT network using the 192.168.122.0/24 subnet, providing DHCP in the .2 to .254 range, and connected to the virbr0 device on the host (cf. above).
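Based on that, `virsh net-dumpxml vnet0` should show something along these lines (a sketch assembled from the details above; the bridge options are assumed defaults):

```xml
<network>
  <name>vnet0</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```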

As I wrote, I have other VMs on this host and their networking is still working fine, so I don’t think it’s a problem with the host’s config.

Hi
I would destroy and delete those two, just in case. Can you then compare this guest against the other guests in virt-manager and check that everything looks the same?


virsh net-undefine --network NAT-TUN
virsh net-undefine --network NAT-WLAN

You might also delete the interface configuration on the guest OS and see whether it gets recreated on reboot. Which guest OS is it that’s showing enp1s0?
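For that last step, a cautious sketch (paths per the standard openSUSE/wicked layout; the enp1s0 file is the assumed leftover, and moving it aside rather than deleting keeps a way back):

```shell
# On the guest: set the stale wicked config aside, then reboot
# and see whether wicked recreates a config for the real device.
cfg=/etc/sysconfig/network/ifcfg-enp1s0   # assumed leftover file
[ -f "$cfg" ] && mv "$cfg" "$cfg.bak"     # only moved if it actually exists
echo "stale config handled: $cfg"
```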