I was able to use all the local space by building a new 5.2 Proxmox server with the same shared storage as the old 4.4 server. I backed up the guests on the old server to shared storage, powered them off, and restored them from backup on the new server. Once I verified the guests...
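The backup-and-restore migration described above can be sketched with the standard Proxmox CLI tools. The VM ID (100), storage names, and dump filename below are placeholders, not values from the thread:

```shell
# On the old 4.4 node: back up the guest to the shared NFS storage
vzdump 100 --storage nfs-shared --mode stop

# On the new 5.2 node: restore the archive under the same VM ID
# (the dump filename shown is illustrative)
qmrestore /mnt/pve/nfs-shared/dump/vzdump-qemu-100.vma.lzo 100 --storage local-lvm
qm start 100
```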
I suppose another option would be to move all guests to node B and onto shared NFS storage, remove node A from the cluster, rebuild node A as a standalone server with the same shared storage as the old cluster, power down the guests on node B, power up the guests on the new node A, then power off and rebuild...
OK, that makes sense. Would it be possible to move all guests to node B, remove that node from the cluster, build new node A and new cluster, and then add node B to new cluster? If so, then node A should retain local-lvm, right?
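Assuming that plan, the cluster-membership steps would look roughly like this with `pvecm` (node and cluster names are placeholders). Note that node B's old corosync configuration has to be cleared before it can join the new cluster:

```shell
# On node A, while it is still in the old cluster: drop node B
pvecm delnode nodeB

# On the freshly reinstalled node A: create the new cluster
pvecm create newcluster

# On node B, after wiping its old cluster config: join the new cluster
pvecm add <node-A-IP>
```

Since node A gets a clean install, it keeps whatever storage layout the installer creates, which by default includes the LVM-thin pool behind local-lvm.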
Thanks Dietmar! I did not see local-lvm listed as a storage option on the host. Only saw local (100GB) and some NFS shared storage that I use. I am going to rebuild the host again and check for local-lvm before I add it to the old cluster. I will let you know what I discover.
Thanks,
Dale
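After a rebuild, the presence of the default LVM-thin pool can be checked from the shell. `local-lvm` only exists if the installer was left with its default LVM layout:

```shell
# List the storages this node knows about; local-lvm should appear here
pvesm status

# Storage definitions live in the cluster-wide config
cat /etc/pve/storage.cfg

# Check for the thin pool the installer normally creates (pve/data)
lvs pve
```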
Hello,
I have a 2-node/2-socket subscription cluster that I am moving from Proxmox 4.4 to Proxmox 5.2, as 4.4 is EOL. Each node has a 4TB disk and I have some shared NFS storage. I moved all VMs to node 2, removed node 1 from the cluster, and installed 5.2 on node 1. However, the local storage of...
I have a test Proxmox environment and was able to get this to work there. I basically deleted vmbr11 and also bond0.11, restarted, then added vmbr11 back with bond0.11 as its ports/slaves and rebooted, and all works well without a bridge IP address. Guest VMs were able to attach to the...
We have a Proxmox cluster where each node has a pair of physical nics bonded (LACP) to bond0 and then each vlan as (vlan 10-15, as bond0.10, bond0.11, etc) and each vmbr as (vmbr10 with ports/slaves bond0.10, vmbr11 with ports/slaves bond0.11, etc)
We have an IP/mask/gateway associated with...
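A minimal `/etc/network/interfaces` sketch of the layout described above (the physical NIC names `eno1`/`eno2` and VLAN 11 are assumptions). The VLAN bridge carries no IP address, matching the working test setup:

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100

auto bond0.11
iface bond0.11 inet manual

auto vmbr11
iface vmbr11 inet manual
        bridge-ports bond0.11
        bridge-stp off
        bridge-fd 0
```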
We lost power to both nodes, and now they will not reestablish the cluster. Both nodes can ping each other, but neither will start guest VMs because there is no quorum. How can we troubleshoot this issue and bring the cluster back online? We do have a subscription for both nodes.
Thanks,
Dale...
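A reasonable first pass at troubleshooting the lost quorum uses the standard Proxmox and corosync tooling. Note that `pvecm expected 1` is a last-resort override on a 2-node cluster and should only be used while the peer is genuinely down:

```shell
# Check membership and quorum state
pvecm status

# Check the services that provide the cluster filesystem and quorum
systemctl status corosync pve-cluster

# Watch corosync while the nodes try to re-form a membership
journalctl -u corosync -b --no-pager | tail -n 50

# Last resort: lower the expected vote count so this node regains
# quorum and can start guests
pvecm expected 1
```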