Separate Management/Storage network from Guest Network

1nerdyguy

Active Member
Apr 17, 2014
Probably a dumb question, but I figured I'd ask anyways.

I'd like to separate my management network and my storage network from the LAN my VMs are using. What is the suggested method of doing this?

Currently (and stupidly, I'll admit) I have one vmbr0 with all of my NICs slaved to it via LACP bonds, and it's working. But I'm wanting to separate it out.
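
For context, my /etc/network/interfaces on each host looks roughly like this (interface names and addresses are placeholders, not my exact config):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 10.1.0.11
        netmask 255.255.255.0
        gateway 10.1.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0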

I'm in a 3-node cluster, if that matters. I do have a few spare gigabit switches available for physical separation, if needed.

Thanks so much!
 
Then I think you will need to fiddle with the "Firewall" section on each VM. I don't know if it will apply security between VMs or if you will need to play with bridge firewalls, too.
 

I think we're talking about 2 different things. Let me try again.

3 hosts, 1 storage node mounted over NFS. All currently have a 10.1.x.x /24 address.

All hosts have 4 NICs, bonded using LACP and attached to vmbr0. Cluster communication and storage also run through this bridge.

I'm wanting to move the cluster communication and storage traffic onto a different LAN, preferably with a dedicated NIC, so VM traffic won't affect storage traffic, etc.

I don't care if the VMs can talk to each other; they're on the same LAN anyway. I just want storage/cluster communication moved off onto its own.
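
For reference, the NFS storage is defined in /etc/pve/storage.cfg along these lines (name and IP are placeholders, not my real config):

nfs: ssd-store
        server 10.1.0.50
        export /export/vmstore
        path /mnt/pve/ssd-store
        content images

so part of the move would be pointing that server line at an address on the new storage subnet, once the storage box has a NIC there.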
 
You split the 4x NICs 2+2 and add the other two to vmbr1. You will need to keep vmbr0 for cluster comms (and storage?) and then move the VMs to vmbr1.
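
Something like this in /etc/network/interfaces, with interface names and addresses as placeholders (LACP also needs the matching port-channel config on the switch side):

# vmbr0: management, cluster and storage traffic
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
        address 10.1.0.11
        netmask 255.255.255.0
        gateway 10.1.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

# vmbr1: VM traffic only, no IP on the host
auto bond1
iface bond1 inet manual
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad

auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0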
 

Well, that's easy enough.

Is there any issue having cluster comms and storage communication on the same bridge, then? I could only see an issue in an HA scenario, where it's booting up a crap ton of VMs due to failover of a node, and that could saturate the NIC bond.

If that's the case, I assume it's time to upgrade the NICs there to 10G and be done with it.
 
I hardly think you will saturate the bonded link before saturating storage I/O during a boot storm (you can play with start order and delay on autoboot, anyway).
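
For example, with placeholder VM IDs, something like:

# set start order, then seconds to wait before the next VM starts
qm set 101 --startup order=1,up=60
qm set 102 --startup order=2,up=30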

You'd think so, but my storage is pure SSD-based, so I/O has historically been pretty darn good. But I'll keep it in mind. Thanks!
 
