renaming bridge vmbr0

mokaz

Member
Nov 30, 2021
Hi there all,

On a cloud-based root server I'm equipped with a single physical NIC (which is fine =).
The PVE setup procedure naturally left me with something like this:
Code:
iface eno1 inet manual
#NIC_0_WAN

auto vmbr0
iface vmbr0 inet static
        address x.x.x.x/x
        gateway x.x.x.x
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#BRI_WAN

Now, my goal is to shift the logical identity of that bridge, ending up with something like this:

Code:
iface eno1 inet manual
#NIC_0_WAN

auto vmbr99
iface vmbr99 inet static
        address x.x.x.x/x
        gateway x.x.x.x
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#BRI_WAN

After some internal network testing, it seems that renaming a bridge won't automatically take the attached VMs along.
So basically, if I rename that bridge to the wanted bridge ID, apart from re-attaching the affected VMs to the new bridge, I won't lose management connectivity through the physical NIC "eno1", correct?

Let me know,
Kind regards,
m.

EDIT1: Also, is anything in the bridge naming convention mandatory? I.e., do I absolutely need a vmbr0 to be present?
 
EDIT1: Also, is anything in the bridge naming convention mandatory? I.e., do I absolutely need a vmbr0 to be present?
No, vmbr0 isn't needed. But I don't get why you need to rename that bridge at all. Why not just keep vmbr0 for your WAN and then create vmbr1, vmbr99 or whatever for a DMZ or other stuff?
Your vmbr0 is also VLAN aware, so if you just want to work with several VLANs you don't need to create another bridge for that; each VLAN can use the same vmbr0, because the bridge can forward tagged traffic.
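For example, putting a guest NIC into a specific VLAN on that same bridge is just a tag on the virtual NIC, something like this (just a sketch; the VM ID and VLAN number are placeholders):
Code:
# attach VM 100's first NIC to vmbr0 and tag its traffic into VLAN 20
qm set 100 --net0 virtio,bridge=vmbr0,tag=20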
 
Hi Dunuin,

Thanks for your feedback! The why is simple in my view: whenever you bulk-create a VM, it's automatically mapped to the first available bridge. If you bulk-add 10 interfaces to a new VM, for example, they're all mapped to the first bridge, which in my case can lead to unwanted situations if you're not careful enough.

And I'm used to having anything WAN-related as my last numbered network; I don't know why =)

Thanks, I'll post my feedback once I've done the move.

Cheers,
m.
 
I have my nodes set up so that vmbr0 is the interface used by VMs; they are then separated using a VLAN tag (the default untagged VLAN is the testing and development network) on the particular interface assigned to each VM. My storage network is vmbr1 and my management is assigned to vmbr2.
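In /etc/network/interfaces terms that is roughly the following (a sketch with placeholder NIC names and addresses, not my exact config):
Code:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#VM traffic, split by VLAN tags

auto vmbr1
iface vmbr1 inet static
        address x.x.x.x/x
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
#storage network

auto vmbr2
iface vmbr2 inet static
        address x.x.x.x/x
        gateway x.x.x.x
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
#management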
 
Hi there all,

So I've made the shift and it's all fine; my WAN bridge now sits on the logical identity I wanted.
I did need a server hard reset after systemctl restart networking.service though, which I couldn't explain.
After the reboot, everything was up again where I wanted it and as planned.
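For anyone attempting the same, the move was roughly this (a sketch; editing the VM configs directly is just one way to re-point the NICs, and back them up first):
Code:
# 1. rename the vmbr0 stanza to vmbr99 in /etc/network/interfaces (as shown above)
# 2. point the VM NICs at the new bridge name
sed -i 's/bridge=vmbr0/bridge=vmbr99/g' /etc/pve/qemu-server/*.conf
# 3. apply the network change (this is the step after which I needed the hard reset)
systemctl restart networking.service
With ifupdown2 installed, ifreload -a is meant to apply such changes live, which might have spared me the reset.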

Cheers & thanks,
m.
 
I'd say, for future troubleshooters, that renaming a bridge device can cause trouble when migrating VMs.

I decided to migrate all my VMs from one laptop to another. I added the second one to the cluster and started the migration, but the job failed with no details (which is extremely frustrating):

Code:
2023-04-17 14:20:19 starting migration of VM 103 to node 'vaio' (192.168.1.212)
2023-04-17 14:20:19 found local disk 'local-lvm:vm-103-disk-1' (in current VM config)
2023-04-17 14:20:19 found local disk 'local-lvm:vm-103-disk-4' (in current VM config)
2023-04-17 14:20:19 starting VM 103 on remote node 'vaio'
2023-04-17 14:20:22 [vaio] bridge 'vmbr1' does not exist
2023-04-17 14:20:22 [vaio] kvm: -netdev type=tap,id=net0,ifname=tap103i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512
2023-04-17 14:20:22 [vaio] start failed: QEMU exited with code 1
2023-04-17 14:20:22 ERROR: online migrate failure - remote command failed with exit code 255
2023-04-17 14:20:22 aborting phase 2 - cleanup resources
2023-04-17 14:20:22 migrate_cancel
2023-04-17 14:20:23 ERROR: migration finished with problems (duration 00:00:04)
TASK ERROR: migration problems

Only after an hour or so did I notice other failed jobs in the web GUI that said:
Code:
  Logical volume "vm-103-disk-2" created.
  Logical volume "vm-103-disk-3" created.
QEMU: bridge 'vmbr1' does not exist
QEMU: kvm: -netdev type=tap,id=net0,ifname=tap103i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512

TASK ERROR: start failed: QEMU exited with code 1
Oh my god, of course there is no bridge with the same name! Why does this stupid script expect the same bridge device name on another machine?
 
what do you mean by "no details"? the migration log contains the error telling you that the bridge does not exist:

Code:
2023-04-17 14:20:19 starting VM 103 on remote node 'vaio'
2023-04-17 14:20:22 [vaio] bridge 'vmbr1' does not exist
2023-04-17 14:20:22 [vaio] kvm: -netdev type=tap,id=net0,ifname=tap103i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on: network script /var/lib/qemu-server/pve-bridge failed with status 512
2023-04-17 14:20:22 [vaio] start failed: QEMU exited with code 1
2023-04-17 14:20:22 ERROR: online migrate failure - remote command failed with exit code 255

and that the migration was not successful.

the bridge is part of the VM config, and the expectation is that bridges exist on all nodes with the same meaning (within a cluster).
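for illustration, the bridge name sits in the per-NIC line of the VM config, so you can check it on the source node before migrating and either create a matching bridge on the target or re-point the NIC (a rough sketch using the VM ID from the log above; the MAC address is a placeholder, and re-setting net0 without it would generate a new one):
Code:
# on the source node: see which bridge the NIC references
qm config 103 | grep ^net
# e.g.  net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1
# either create a vmbr1 with the same meaning on the target node,
# or re-point the NIC to a bridge that exists there, keeping the MAC:
qm set 103 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0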
 
but that would mean starting the VM without any network at all, since we don't know which of the "other" bridges we should choose (e.g., the vmbr0 in the original config might be one network, and the bridge we choose randomly on the target node might be the management VLAN). this is just how it works; if you configure your nodes in an incompatible fashion, there's nothing we can do (except fail).
 
I understand the POV of the script - that everything should be the same on both nodes, src and dst. But mine is home usage, so I have only one network and two laptops, and failing seems too harsh to me. Ideally it would be nice to have an easy mode: check whether some HW setting or device (like a network device or USB) doesn't exist on the target, then either ignore it or offer an option to change it when starting the migration.

The most important thing during migration, for me, is not that the VM is 100% available, but simply an easy way to copy data from one PC to another. Even if the VM isn't accessible, I'll sort that out later; at least my data will already be transferred.
But I guess my case might be rare.
 
yes, that is definitely not the target use case of PVE (clusters).
 
