Hey guys,
So, I'm having what I thought was going to be a fairly basic issue, but I keep slamming my head into a wall and can't get past it.
I've used Proxmox a fair bit before, with a half dozen x86 servers all configured as a single cluster. Everything worked well, and I love the product.
Fast forward to my current deployment: right now I've got three Apple Mac Minis (quad-core i7) sitting in a commercial datacenter (colocation facility) to ultimately host several dozen public Tomcat web apps.
Because I need to create a large number of Tomcat web servers with minimal resource overhead, I want to use OpenVZ containers for them all. At the same time, I need to host just a few Mac OS X virtual machines on the same hardware, and since Proxmox doesn't really support OS X guests, I ended up deciding on a weird but doable deployment idea.
I set up the free ESXi hypervisor on all three Mac Minis, run a few OS X and Windows VMs as ESXi guests, and then created a Proxmox VM on each of these ESXi servers with most of the host's resources allocated to it.
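(Side note for anyone replicating this: nested hypervisors generally need the vSwitch security policy relaxed before their guests can talk to the network at all. This is just a sketch of how that can be set on a standard vSwitch; vSwitch0 is a placeholder for whatever your switch is actually named.)

    # Allow promiscuous mode, MAC address changes, and forged transmits on the
    # vSwitch carrying the Proxmox VMs; nested hypervisors generally need all three.
    # vSwitch0 is a placeholder name.
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
      --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true

    # Verify the policy took effect:
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0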
Everything is up and working well. No issues, except CLUSTERING.
I cannot get the Proxmox guest VMs to successfully talk to one another (a multicast problem, I think). If I put the Proxmox VMs on a single ESXi host (or do the same thing in my lab with VMware Fusion), they talk to each other perfectly: the cluster gets created and keeps working with zero issues. With the Proxmox VMs on different ESXi hosts, I can create the VMs and everything EXCEPT clustering works perfectly, but I cannot form a cluster across hosts. And if I "create" the cluster with everything on one ESXi host, the cluster communication fails immediately as soon as I move the VMs back out to separate hosts.
All three ESXi hosts have IP addresses within the same subnet. I have verified with the colo facility that I have my own VLAN and that multicast traffic has been specifically allowed on my switch ports. The IP addresses of my Proxmox VMs are all within that same subnet.
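In case it helps with the diagnosis: as I understand it, multicast can be verified end to end between the Proxmox nodes themselves with omping. A sketch, with pve1/pve2/pve3 as placeholder node names; run the same command on every node at the same time:

    # Install omping on each Proxmox node, then run on all nodes simultaneously.
    # Each node should report roughly 0% multicast loss.
    apt-get install omping
    omping -c 600 -i 1 -q pve1 pve2 pve3

If the unicast responses come back but the multicast ones don't, something between the ESXi vSwitches and the physical switch is eating the multicast traffic.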
I've tried everything I can think of. I even found instructions on how to switch the Proxmox cluster communication from multicast to unicast (https://pve.proxmox.com/wiki/Multicast_notes, near the bottom), and I was able to make the /etc/pve/cluster.conf changes and verify that they showed up and were activated in the web admin, but I STILL could not create a working cluster (it appears Proxmox wanted to use multicast anyway, regardless of my unicast settings).
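For reference, the unicast change from that wiki section amounts to adding transport="udpu" to the cman tag and bumping the config version. A rough sketch of what the relevant part of /etc/pve/cluster.conf looks like; the cluster name, node names, and version number are placeholders:

    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="3">
      <!-- transport="udpu" tells corosync to use unicast instead of multicast -->
      <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
      <clusternodes>
        <clusternode name="pve1" votes="1" nodeid="1"/>
        <clusternode name="pve2" votes="1" nodeid="2"/>
        <clusternode name="pve3" votes="1" nodeid="3"/>
      </clusternodes>
    </cluster>

One caveat from that page, if I read it right: with udpu, a node has to be listed in cluster.conf before it can join, so the order of operations matters.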
I know this isn't a "normal" or ideal way to configure Proxmox, but for my needs the performance and reliability of Proxmox running nested as an ESXi guest is actually fantastic. I have no issues with it at all. I just really need the central, group-wide administration and VM migration capabilities of having the nodes in a cluster, and it seems like there should be some way to force this to work.
Anyone? Ideas? Help? PLEASE!