[SOLVED] Remove GPU after installation

Moz

New Member
Dec 15, 2022
Hello guys :)
I'm kinda new to Proxmox & virtualization. I have 7 nodes at home for my cluster, but I have a major problem: the GPU.
I'm familiar with classic Linux, where I can install with a GPU and then remove it without any problem. However, on Proxmox, if I do the same, Proxmox will not boot.


I saw a lot of stuff on the forum and Reddit, but I prefer to ask before doing something stupid that I don't really understand.

If I remove the GPU, the network interface apparently changes, so the question is:


How do I prevent that? Or how do I fix it?

There is this tutorial which talks about headless installation.

Is GPU passthrough related to this too? I don't think so, but again, I want to learn what's going on and fix it.

Regards

Moz
 
If your nodes have serial ports (COM1:), I would recommend setting those up for logins so you can log in without a GPU and see what is happening.

Obviously, if the GPU card you are removing is not designed to be hot-swappable and you pull it out with the system running, the system will crash! :)
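A minimal sketch of such a setup on Debian-based systems (the baud rate 115200 is just a common example, adjust as needed):
Code:
# Enable a login prompt on the first serial port (ttyS0 = COM1:)
systemctl enable --now serial-getty@ttyS0.service

# Optionally mirror the console to serial as well: in /etc/default/grub set
#   GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
# then apply it with:
update-grub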
 
They do not, sadly; they are not servers, more like brand-new gamer computers recycled into servers...
Obviously not, haha; I'm talking about removing it properly during a stop & start.

I'm gonna try what you said in your tutorial, will let you know.
 
Hi,

to reiterate: When you remove the GPU after the installation, the network interface name changes, since the PCI enumeration changes.

What you could try is to assign a fixed name based on the MAC address at boot by using a systemd .link file:
Create /etc/systemd/network/10-fixed.link (the correct filename is important here!) with the following contents:
Code:
[Match]
MACAddress=10:20:30:40:50:60

[Link]
Name=eth0
... replacing 10:20:30:40:50:60 with the actual MAC address of the network interface in question.

After a reboot, the interface should now hopefully always be called eth0, regardless of whether the GPU is plugged in or not; configure it as desired.
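For completeness, here is a minimal sketch of the full procedure on the shell (the MAC address and the name eth0 are placeholders from above; the update-initramfs step is an assumption to make the rename available early during boot):
Code:
# Find the MAC address of the physical NIC
ip -br link show

# Create the .link file (the directory and the .link extension matter)
cat > /etc/systemd/network/10-fixed.link <<'EOF'
[Match]
MACAddress=10:20:30:40:50:60

[Link]
Name=eth0
EOF

# Assumption: rebuilding the initramfs makes the .link file available
# to udev early during boot on Debian-based systems; then reboot
update-initramfs -u
reboot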
 
Hi,

I tried several things, but none of them shed any light... When you said
the correct filename is important here!
you mean only the digits for precedence, right?

Code:
# /etc/udev/rules.d/70-persistent-net.rules
[Match]
MACAddress=2c:f0:5d:dd:29:40

[Link]
Name=vmbr0

and
Code:
# /etc/udev/rules.d/70-persistent-net.rules

# interface with MAC address "2c:f0:5d:dd:29:40" will be assigned "vmbr0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="2c:f0:5d:dd:29:40", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="vmbr*", NAME="vmbr0"

That was the result of ip addr before
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 2c:f0:5d:dd:29:40 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
    inet 192.168.1.42/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 2a01:e0a:b73:9410:2ef0:5dff:fedd:2940/64 scope global dynamic mngtmpaddr
       valid_lft 86061sec preferred_lft 86061sec
    inet6 fe80::2ef0:5dff:fedd:2940/64 scope link
       valid_lft forever preferred_lft forever
4: fwbr112i0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether ba:a0:2c:53:79:61 brd ff:ff:ff:ff:ff:ff

Now I get this new eno1:

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 2c:f0:5d:dd:29:40 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2c:f0:5d:dd:29:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.42/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ef0:5dff:fedd:2940/64 scope link
       valid_lft forever preferred_lft forever
4: tap112i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr112i0 state UNKNOWN group default qlen 1000
    link/ether 66:36:a8:73:69:86 brd ff:ff:ff:ff:ff:ff
5: fwbr112i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:a0:2c:53:79:61 brd ff:ff:ff:ff:ff:ff
6: fwpr112p0@fwln112i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 66:18:33:07:b3:91 brd ff:ff:ff:ff:ff:ff
7: fwln112i0@fwpr112p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr112i0 state UP group default qlen 1000
    link/ether 1a:a2:79:ef:3a:99 brd ff:ff:ff:ff:ff:ff
None of what I tried worked. But after spending time reading about Proxmox and interfaces, I'm guessing that I'm not doing the correct thing; vmbr0 should maybe not be changed or reassigned.


I did the same test with eno1 without success.
I did another test to pin the previous eno1 to eno0; the reboot was okay (with the GPU) on eno0, but after the GPU is removed, no SSH is possible.

The last test was to create a new name for the interface and bind it.
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: cyril: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 2c:f0:5d:dd:29:40 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2c:f0:5d:dd:29:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.42/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ef0:5dff:fedd:2940/64 scope link
       valid_lft forever preferred_lft forever
4: tap112i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr112i0 state UNKNOWN group default qlen 1000
    link/ether 66:36:a8:73:69:86 brd ff:ff:ff:ff:ff:ff
5: fwbr112i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:a0:2c:53:79:61 brd ff:ff:ff:ff:ff:ff
6: fwpr112p0@fwln112i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 66:18:33:07:b3:91 brd ff:ff:ff:ff:ff:ff
7: fwln112i0@fwpr112p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr112i0 state UP group default qlen 1000
    link/ether 1a:a2:79:ef:3a:99 brd ff:ff:ff:ff:ff:ff

with this interfaces file:


Code:
auto lo
iface lo inet loopback

iface cyril inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.42/24
        gateway 192.168.1.254
        bridge-ports cyril
        bridge-stp off
        bridge-fd 0

That does not work either. So I'm suspecting that the MAC address changes.
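A quick way to test that suspicion is to record the MAC addresses before and after removing the GPU (a small sketch; enp0s31f6 is the altname from the output above):
Code:
# List all interfaces with their MAC addresses in brief form
ip -br link show

# Or read the MAC address of one specific interface directly
cat /sys/class/net/enp0s31f6/address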
 
It works fine on other machines after renaming the interface & adding the link...
 
# /etc/udev/rules.d/70-persistent-net.rules
[Match]
MACAddress=2c:f0:5d:dd:29:40

[Link]
Name=vmbr0

First off, what I suggested in my post is not a udev rule, but a systemd-specific config. And where did you get the second udev rule from?

So yeah, that won't work, and the filename is indeed important. That's why I explicitly mentioned the path /etc/systemd/network/10-fixed.link. (And to reiterate: the directory /etc/systemd/network and the file extension .link are both important.)

Additionally, vmbr0 is a virtual bridge interface created & needed by PVE to attach VM and CT networks to.
So you should not name your physical interface like this; that obviously won't work. You can use e.g. wan0 or eth0 (or pin it to eno0, since that's the driver-defined name), just not vmbr*.
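To illustrate why: the bridge and its physical port are separate interfaces, and you can check which physical NIC is enslaved to the bridge (a sketch using the names from this thread):
Code:
# Show the interfaces enslaved to the PVE bridge vmbr0
ip -br link show master vmbr0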

Hope this helps!
 
I made a mistake when copying the path; it's in the right place now and working fine :)
The problem is that my test Proxmox server is not able to boot without a GPU (nothing related to Proxmox).

What I exactly did was:

1: Rename the interface
Code:
root@mox07:~# cat /etc/network/interfaces

auto lo
iface lo inet loopback

iface custom_name inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.132/24
        gateway 192.168.1.254
        bridge-ports custom_name
        bridge-stp off
        bridge-fd 0

2: Create the link

Code:
root@mox07:~# cat /etc/systemd/network/10-fixed.link
[Match]
MACAddress=1c:69:7a:4b:41:c8

[Link]
Name=custom_name
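After creating both files, a hedged sketch of applying and verifying the change (the update-initramfs step is the same assumption as above, to make the .link file available early in boot):
Code:
# Rebuild the initramfs so the rename applies early, then reboot
update-initramfs -u
reboot

# After the reboot, confirm the rename and the bridge membership
ip -br link show custom_name
ip -br link show master vmbr0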

I made a little Ansible role to handle this automatically if someone needs it:

https://github.com/LaFermeDuMineur/deploy-mox/compare/master...tasks/add_new_mox
 
I made a mistake when copying the path; it's in the right place now and working fine :)
Glad it works now!

Please just mark the thread as [SOLVED] (you can do this by editing the first post) to help others with the same problem find it more easily in the future :)
 
