While reading up on the cluster manager settings here, I came across the following:
"By default, Proxmox VE uses the network in which cluster communication takes place to send the migration traffic. This is not optimal both because sensitive cluster traffic can be disrupted and this network may not have the best bandwidth available on the node."
I'm now curious whether I need to set a dedicated migration network. I ask because I have had zero issues with the cluster or with migrating instances; it has run rock solid since I spun it up. VM migrations on shared storage complete in about 18 seconds with 8 GB of memory on average, and VMs on local storage take a bit longer. Most traffic is set to go over a 10G bond that terminates on trunk ports of a 10G managed switch, with a separate 1 Gb interface handling only corosync traffic as the primary link (link0).
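For reference, the node network layout is roughly along these lines (a sketch only; the bond member names, bond mode, and addresses below are placeholders, not my exact config; only bond0 and enp3s0 match my setup):

```
# /etc/network/interfaces -- rough sketch of the layout described above
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes

# separate 1Gb NIC dedicated to corosync (cluster link0)
auto enp3s0
iface enp3s0 inet static
    address 10.0.0.11/24
```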
I have the cluster set up with redundant links: one going over the trunk (bond0) interface as link1, and another over the separate (enp3s0) interface as link0. Do I need to bother with setting the migration network?
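My understanding is that if I did set it, it would just be one line in /etc/pve/datacenter.cfg along these lines (the subnet shown is only an example, not my actual network):

```
# /etc/pve/datacenter.cfg
# send migration traffic over a dedicated subnet, using the secure (SSH-tunneled) transport
migration: secure,network=10.10.10.0/24
```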
I have a two-node cluster (running 8.3.3) hosting about three dozen production cloud workloads and monitoring hosts. Everything runs fine, so I'm not sure this is really needed in my use case. I have monitored the switch traffic stats while doing online migrations and see transfer speeds often above 5 Gb/s, indicating the migration traffic is already moving over the bond interfaces. I'm not sure this is something I should worry about, but I don't want to get bitten later down the road. Let me know if others have found a solid reason to set the migration network, or whether this is an old setting that is no longer relevant.
TIA
"By default, Proxmox VE uses the network in which cluster communication takes place to send the migration traffic. This is not optimal both because sensitive cluster traffic can be disrupted and this network may not have the best bandwidth available on the node."
I'm now curious if I need to set a dedicated migration network. I ask as I have had zero issues with the cluster, or migrating instances, it has run rock solid since spinning it up. VM migrations that are on the shared storage complete in about 18 seconds, with 8GB average memory, VM's on local storage take a bit longer. I have most traffic set to go over a 10G bond that terminate to trunk ports on a 10G managed switch, and another separate (1Gb) interface only for handling corosync traffic as the primary (Link0).
I have the cluster setup with redundant links, one going over the trunk (bond0) interface (link1), and another over the separate (enp3s0) interface (link0). Do I need to bother with setting the migration network?
I have a 2 node cluster (running 8.3.3) hosting about three dozen instances of production cloud workloads and monitoring hosts. Everything runs fine, so not sure this is really needed in my use case. I have monitored the switch traffic stats while doing online migrations, and I see transfer speeds often above 5Gb indicating the migration traffic is moving over the bond interfaces already. - Not sure this is something I should worry about, but don't want to get bit later down the road. - Let me know if others have found a solid reason to set the migration network, or if this is an old setting no longer relevant perhaps?
TIA