[SOLVED] Network down after VM autostart enabled

MrHans

New Member
Oct 29, 2021
Hello,

I have a fresh install of Proxmox 7.0 with a single Linux VM created in it. I have so far started Proxmox, logged in to the web UI and launched the VM with a GPU PCI pass-through and it worked fine.

Last night, before turning off the system, I set the VM to autostart on boot in the web UI. Now when the system starts, the VM does boot up, but I have no networking: neither the Proxmox host nor the VM (obviously?) gets an IP.

It seems to me that the issue comes from the VM autostart, so I would like to disable it, but with the VM booting right after Proxmox starts and not being able to access the web UI either (no networking), I do not know how. Is there a way to interrupt the VM autostart during Proxmox boot? Is there anything else I can try to get out of this loop?

Thanks!


(Note that networking was working fine before I enabled VM autostart on boot, hence the config is correct.)
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
you can disable/mask the pve-guests service via systemctl (e.g., by booting into the systemd rescue shell, or by booting a live-cd and chrooting into your PVE installation)
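For anyone following along, the rescue-shell route might look roughly like this (a sketch; it assumes GRUB as the bootloader and `pve-guests` as the service name, so verify both on your system):

```shell
#!/bin/sh
# Sketch of the rescue-shell route (assumes GRUB; adapt to your bootloader).
# To reach the rescue shell: at the GRUB menu press `e`, append
# `systemd.unit=rescue.target` to the line starting with `linux`,
# then boot with Ctrl-X.

# The commands to run from the rescue shell; printed here for review,
# execute them manually once you are in.
plan() {
    echo "systemctl disable pve-guests"
    echo "systemctl mask pve-guests"
    echo "reboot"
}

plan
```

Masking the service prevents any guest from autostarting on the next boot, which restores access to the host.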
 

itNGO

Active Member
Jun 12, 2020
Hi,
login to Proxmox console and type:
qm set $VMID --onboot 0

this should disable Autostart.
 

MrHans

New Member
Oct 29, 2021
Hi,

thanks for the replies.

Apologies if I have not been clear. My main problem is that, since the VM starts directly on boot, I never get a CLI where I could disable the VM autostart or do anything Proxmox-related at all. In short: when the system boots, I land directly in the Linux VM (with GPU, keyboard, and mouse passed through straight away) and have no access to the Proxmox config.

So when starting the physical machine, how do I get into a Proxmox console? How do I interrupt the boot process and "just get a CLI"?
 

itNGO

Active Member
Jun 12, 2020
Hi,
ok, I missed that... so it's the way fabian described:
by booting into the systemd rescue shell, or by booting a live-cd and chrooting into your PVE installation
There you can change the config file and disable Autostart.
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
disabling the onboot flag in the config file is a bit more involved, as in such a rescue environment the pve cluster filesystem where the config file is stored is not mounted. but disabling and masking the service should work, then you can remove the onboot flag, and then unmask and re-enable the service
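Put together, the two-phase recovery described above might look like this (a sketch; VMID 100 is a placeholder, and each phase is run by hand):

```shell
#!/bin/sh
# Sketch of the two-phase recovery (VMID 100 is a placeholder).
# Phase 1 runs in the rescue shell / chroot; phase 2 after a normal boot.
# The commands are printed for review; run them manually in each phase.

phase1() {
    # Rescue shell: stop guests from autostarting, then boot normally.
    echo "systemctl mask pve-guests"
    echo "reboot"
}

phase2() {
    # Normal boot (guests stay down): clear the onboot flag via the
    # now-mounted pve cluster filesystem, then restore the service.
    echo "qm set 100 --onboot 0"
    echo "systemctl unmask pve-guests"
    echo "systemctl enable pve-guests"
}

phase1
phase2
```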
 

MrHans

New Member
Oct 29, 2021
I still have the installation media, I will use that to boot and go from there.
Thank you for the pointers, I will report back once I made some progress.
 

MrHans

New Member
Oct 29, 2021
Just following up on this topic for the sake of completeness and anyone else having a similar issue.

tl;dr
The network interface name in Proxmox changed, seemingly on its own, from enp8s0 to enp7s0. Combined with a VM autostarting with GPU pass-through, this left no access to the Proxmox GUI or CLI over the network. Once I got to a CLI, updating the interface name in the network config resolved the issue.


Step 1 - Get a CLI
Ok, so trying to get something done in rescue mode or via the installation media's debug mode was trickier than I thought, and I came across a more "elegant" (that is, simpler!) solution in this post: https://forum.proxmox.com/threads/is-there-a-way-to-disable-the-automatic-start-of-vms-before-proxmox-boots.83636/post-367826. The trick is simply to disable virtualization in the BIOS, which forces Proxmox to drop to a CLI after failing to start the VM.

This worked brilliantly and I got to a CLI.

Step 2 - Find the actual problem
Still thinking this was a VM autostart issue (that is when the problem appeared, after all), I went ahead and figured out the VM ID using
Bash:
pvesh get /cluster/resources --type vm
and then disabled autostart with
Bash:
qm set 100 --onboot 0

After a reboot, networking was still not working, despite a textbook-looking /etc/network/interfaces setting.
However, when I finally looked at
Bash:
systemctl status networking
the line "error: vmbr0: bridge port enp8s0 does not exist" gave a clue, and I checked
Bash:
ip link
which revealed that the current ethernet interface was in fact enp7s0 and not enp8s0 as listed in /etc/network/interfaces. I have no clue how this change came about.

Step 3 - Solution
Editing /etc/network/interfaces accordingly and
Bash:
systemctl restart networking
did the job and the network came up right away.
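For reference, the relevant part of my /etc/network/interfaces after the fix looks something like this (addresses here are placeholders; only the bridge-ports name needed to change):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    # was enp8s0; must match the name shown by `ip link`
    bridge-ports enp7s0
    bridge-stp off
    bridge-fd 0
```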

I then turned VM autostart back on and also re-enabled virtualization in the BIOS to test them together. Everything works fine now.


For the moment the issue is resolved and I will keep an eye on it. In the meantime, if anyone has an idea why or how the network interface name can change (from my perspective, on a literally one-day-old Proxmox installation with just one VM and no other changes or tinkering), I would be happy to learn, so I can avoid similar issues in the future.
 
  • Like
Reactions: itNGO

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
you mentioned GPU pass-through - one way that NIC naming can change is if the PCI topology changes, so likely NIC naming and the pass-through somehow raced with the VM starting on boot?
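One way to guard against future renames (a sketch; the file name, interface name, and MAC below are placeholders) is to pin the NIC name to its MAC address with a systemd .link file:

```
# /etc/systemd/network/10-lan0.link -- substitute your NIC's real MAC
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```

After creating the file, point bridge-ports in /etc/network/interfaces at the pinned name, and consider running update-initramfs -u so the rule is applied early in boot.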
 
