Hi
Here is what we do:
We're running quagga (specifically ospfd) on the Proxmox hardware node. This gives us fully dynamic routing tables in case e.g. one VE is moved to another Proxmox host (no cluster membership necessary, and IPs can be assigned without any subnet relationship to the interfaces on the hardware node). All of this applies to OpenVZ-based VEs.
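For context, a rough sketch of the kind of ospfd config we use (hostname, router ID, area and prefixes are made-up placeholders, and whether the VE host routes come in via redistribute kernel or connected depends on your setup):

! /etc/quagga/ospfd.conf (illustrative only)
hostname proxmox-node1
!
router ospf
 ospf router-id 192.0.2.10
 ! announce the VE host routes that sit outside the node's own subnets
 redistribute kernel
 network 192.0.2.0/24 area 0.0.0.0
!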
With KVM-based VEs our current solution is to assign a specific vmbr interface to each VE and configure an IP subnet on it; the VE can then use this IP configuration to reach the internet. The downside here is that we can't define a relationship between vmbrs and VEs, which would be really nice (hint!) and would let one roam VEs between hosts without any hassle (without fancy clustering, true, and yes it can take a while, but most of the time that is quite enough).
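Roughly what such a per-VE bridge looks like for us (addresses are placeholders, and a routed bridge with bridge_ports none is just one way to do it):

# /etc/network/interfaces excerpt - extra bridge for one KVM VE (illustrative)
auto vmbr1
iface vmbr1 inet static
    address 203.0.113.1
    netmask 255.255.255.248
    bridge_ports none
    bridge_stp off
    bridge_fd 0
# the VE takes an address out of 203.0.113.0/29 and uses 203.0.113.1 as its gateway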
Now back to my problem: when we modify /etc/network/interfaces to add a metric value on vmbr0 (the interface Proxmox deploys as its default), in order to turn this default-gw setting into a sort of gateway of last resort, Proxmox removes it during the next reboot (sometimes only the metric setting) and we're back to having to reconfigure this manually.
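Concretely, what we add by hand (and what gets lost again) is just the metric line in the vmbr0 stanza, something like this (addresses are examples; metric is a standard ifupdown option for the static method):

auto vmbr0
iface vmbr0 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1
    metric 100
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0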
Yes, we could set no default gw at all and use OSPF only, but having a last resort in case of emergency (quagga may fail too) is a nice thing; we'd simply give it a metric value so it is normally not used, and that would be fine for us.
Finally, if Proxmox decided to integrate quagga into its package system, that would be highly appreciated. One direct benefit would probably be no more reboots for network reconfiguration, plus shiny dynamic routing within a cluster without trouble.
Only, please keep in mind that individual configuration modifications, especially on the routing engine, might be needed; for example we filter the private IPs assigned to NFS backup Ethernet interfaces so they don't poison the public routing domain in our network.
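As a rough illustration of that filtering (the access-list name and backup network are made up, and the exact redistribute/distribute-list combination may need tuning for your quagga version):

! /etc/quagga/ospfd.conf excerpt (illustrative)
access-list NO-BACKUP deny 192.168.100.0/24
access-list NO-BACKUP permit any
!
router ospf
 redistribute connected
 distribute-list NO-BACKUP out connected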
Basically I think a quick fix could be an option to tell Proxmox to handle (and preserve) a metric value for interfaces too. (please!)
Thoughts and ideas highly appreciated; we're running several hundred VEs on somewhat fewer hardware nodes (not all Proxmox-based though, in fact I can't think of a hypervisor we do not have running on at least one server system here).
Regards
hk