[SOLVED] QEMU VM does not get a DHCP IPv4 address on startup

TonyArr

Member
Oct 27, 2021
Hi all,

I have a couple of VMs set up in Proxmox which do not get an IPv4 address after booting until I log in via the console and restart networking via systemd.
They are set to receive an IP address via DHCP; the leases are all static, assigned by MAC address on an ISC DHCPd server running on a different physical host in the same subnet and VLAN as the virtual machines and Proxmox itself.

I have a NUC which I installed and set up the same way (OS, onboarding configuration, software), and it has no problem getting an IP address on startup.

They are all Debian 12 VMs, QEMU-based. The Proxmox config shows them as i440fx machine types, with the network device set to "VirtIO (paravirtualised)", bridged to Proxmox's vmbr0.

My /etc/network/interfaces file in each appears as:
Code:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens18
iface ens18 inet dhcp

I have nothing in my interfaces.d directory.

When the VM starts, it gets an IPv6 address on ens18, which I assume is self-assigned, as my network does not have any IPv6 infrastructure (I know; my ISP does not support IPv6, so I figured I'd keep my network v4-only until they finally agree to support v6 or another ISP catches up with their performance/download limit/cost ratio).

When I run systemctl restart networking.service, lo and behold, I get the correct IPv4 address, and everything is fine until the next time I need to shut down or restart the VM (or Proxmox as a whole). I've previously had the switch that the Proxmox host is connected to lose power; the VMs of course recognised the loss of network connectivity, but when the switch was powered back up I didn't need to reset the networking in any of them, they just got back to it.

It's not a huge issue, since I only need to do anything that involves a reboot of any of this maybe once a quarter, but it is undesirable, and I don't always remember to reset the networking in them at the time, at least not until I realise I can't access one I need...

Anyone have any hints as to why this might be happening? Pointers on what to look for in syslog would be good as well. I don't see any "failures" or anything else obvious to me...
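
For reference, this is roughly what I've been poking at so far without spotting anything useful (assuming the systemd journal and the ifupdown-driven networking.service, which is how these Debian 12 guests are set up):
Code:
# What the ifupdown networking service logged for the current boot
journalctl -b -u networking.service

# Any DHCP client chatter (dhclient, which I believe ifupdown uses by default here)
journalctl -b | grep -i dhclient

# Current address state of the interface
ip addr show ens18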
 
allow-hotplug ens18
Try & change that to:
Code:
auto ens18
AFAIK the difference between auto & allow-hotplug is whether the interface is only activated on a HW hotplug event. So if you choose allow-hotplug, it will actually wait until it detects a change on the interface, which is in fact what you were triggering by restarting networking.
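
So the stanza for the primary interface would end up looking something like this (only the allow-hotplug line changes):
Code:
# The primary network interface
auto ens18
iface ens18 inet dhcp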
 
IDK your NUC's HW setup, but the reason it works there is probably down to the different HW NIC used there. If it is a USB NIC, that would be a simple explanation.
 
Try & change that to:
Code:
auto ens18
AFAIK the difference between auto & allow-hotplug is whether the interface is only activated on a HW hotplug event. So if you choose allow-hotplug, it will actually wait until it detects a change on the interface, which is in fact what you were triggering by restarting networking.
Well I was skeptical, but that did it!

Interfaces marked "allow-hotplug" are brought up when udev detects them. This can either be during boot if the interface is already present, or at a later time, for example when plugging in a USB network card.
I took the above (from man interfaces) to mean that a NIC present at boot would be brought up. I stuck with it as the default so that I could remove and add interfaces (so, for example, I could add or remove VLANs from VMs without the guest OS having awareness of it), but I changed it to verify just in case, and it does indeed now behave as expected.

So I guess I'll change my standing configuration, at least for interfaces I need up from boot.

IDK your NUC's HW setup, but the reason it works there is probably down to the different HW NIC used there. If it is a USB NIC, that would be a simple explanation.
It's an Intel i225, if I recall correctly, integrated on the mainboard and connected via the PCIe bus.

I guess udev sees things a little differently when devices come up in a VM versus on bare metal, because I've checked the NUC's interfaces file again and it is definitely allow-hotplug.

Thanks for solving the mini-mystery though!
 
it is definitely allow-hotplug
It's definitely possible that it works even for a PCI NIC; it's all a question of timing etc. With the VM's VirtIO NIC it evidently did not work.

But I do not see why you would want/need allow-hotplug on a motherboard NIC (or the VM's VirtIO). I'd change it on the NUC too.
 
It's definitely possible that it works even for a PCI NIC; it's all a question of timing etc. With the VM's VirtIO NIC it evidently did not work.

But I do not see why you would want/need allow-hotplug on a motherboard NIC (or the VM's VirtIO). I'd change it on the NUC too.
In theory it should work for all NICs, regardless of how they are connected, since they can be turned off by subsystems at any time (say, UEFI).

I'm guessing that in hardware, the NICs go from being functionally off while the BIOS or UEFI has hold of them (to allow booting an OS over them) to being functionally on once the kernel takes them over.
However, in a virtual machine I'm guessing they are just always functionally on for the OS, since it's all virtual and there isn't that "exclusivity" you have with hardware, so no udev event for them turning on ever occurs, and therefore ifup wasn't being called.

(I'd need to know how QEMU works in more granular detail to say that's exactly right, but it makes sense given that's the change that fixed it, and now that I know what the problem was, I can see where things diverge between the syslog on my NUC and the syslog in the VMs.)
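
If anyone wants to compare on their own machine, this is roughly how I've been looking at udev's side of it (take the exact commands as a sketch rather than gospel):
Code:
# Watch net-subsystem events live, e.g. while ifdown/ifup-ing the interface
udevadm monitor --subsystem-match=net

# Dump the properties udev has recorded for the interface
udevadm info -p /sys/class/net/ens18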

And I agree, I don't need it set that way on my NUC. The install there was just a duplication of what was in the VM while I was trying to figure out exactly where the problem lay.
It'll be back to running as a media streamer now :)

Thanks again for jumping in!
 
