Though I did search the forum for answers, I hadn't seen your article; I will study it in detail.
I'd been expecting SSH issues, as I'd completely replaced the main servers with all-new installs while retaining their existing names and IP addresses, so I wasn't particularly surprised but couldn't...
Outline:
I'm wondering if there's any detailed documentation anywhere about exactly how the various SSH key files are used within a Proxmox cluster?
In particular, what is the relationship between /root/.ssh/known_hosts, /etc/ssh/ssh_known_hosts, and /etc/pve/priv/known_hosts, and if replacing...
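For anyone with the same question, this is roughly how I'd inspect things in the meantime (a sketch only; the exact symlink target may vary by version):

# per-node root key store
cat /root/.ssh/known_hosts
# cluster-wide store, shared via the pmxcfs cluster filesystem
cat /etc/pve/priv/known_hosts
# on cluster nodes this is typically a symlink into /etc/pve
ls -l /etc/ssh/ssh_known_hosts
# regenerate certificates and re-merge SSH keys after reinstalling a node
pvecm updatecerts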
Fair enough, I appreciate the reply, and having reviewed your link I see what you mean: effectively a potential single point of failure.
I haven't made any changes yet; I'm not sure whether removing the QDevice is liable to cause a cluster reboot, so I'm waiting till I'm next on the actual premises so I can...
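For reference, my understanding is that the removal itself is a single command (a sketch; I'd check quorum before and after):

pvecm status            # confirm current quorum and expected votes
pvecm qdevice remove    # remove the QDevice from the cluster
pvecm status            # verify the vote count dropped accordingly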
I'm seeing the same thing: rebooting one node causes all nodes to reboot.
We have a Proxmox cluster of three nodes plus a QDevice. Originally it was a two-node cluster, the two nodes being large and powerful servers with 120 cores and 300 GB+ of memory each. In order to be able to use HA we...
Assuming a setup where there is a separate boot disk or disks, plus ZFS storage pools for the actual VMs to live on, how much traffic, and especially write traffic, should generally be going to the root disk of a Proxmox server?
We have two large, multi-core hypervisors with over...
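To put numbers on it, this is roughly how I'd measure it (assuming the boot disk is sda and any ZFS root pool is named rpool; adjust to suit):

apt install sysstat         # provides iostat
iostat -dmx sda 5           # per-device MB/s and utilisation at 5-second intervals
zpool iostat -v rpool 5     # only if the root filesystem is itself on ZFS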
Possibly related to this thread and may be of use to someone:
I had a cluster of 3 nodes, two running 7.0, one running 7.3 (upgraded at different times). We wanted to upgrade all 3 to 7.4-16 preparatory to moving to v8. There was a major VM on the 7.3 node we didn't want to take risks with...
Thanks for replying; based on that I went ahead and did as proposed, and it worked perfectly. I took a backup of storage.cfg as well, but it proved not to be needed, as both main nodes have identical layouts, with ZFS pool /tank1 as the main storage for VMs. Shut down the VM, copied then removed...
The question: I see from the documentation at https://pve.proxmox.com/wiki/Cluster_Manager that "All existing configuration in /etc/pve is overwritten when joining a cluster. In particular, a joining node cannot hold any guests, since guest IDs could otherwise conflict, and the node will inherit...
I'm trying to follow the upgrade process, and the pve7to8 check script, even after a restart, throws this error: "proxmox-ve package is too old, please upgrade to >= 7.4-1!"
Which doesn't make a great deal of sense when:
root@covenstead:~# pveversion
pve-manager/7.4-15/a5d2a31e (running kernel...
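For anyone hitting the same message, a sketch of what I'd check: the script complains about the proxmox-ve meta-package specifically, which can lag behind pve-manager if a previous upgrade was done with apt upgrade rather than dist-upgrade.

pveversion -v | grep proxmox-ve   # the package the check actually inspects
apt update
apt dist-upgrade                  # bring proxmox-ve up to the latest 7.4 release
pve7to8 --full                    # re-run the full set of checks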
After posting the query I decided to do a test: there's an old dev VM, shut down, only 32 GB, on one of the hosts, so I set that up to replicate, and saw that the data was copied across and the disk file then existed on both hosts.
I let the process fully complete, the log showing all finished...
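For anyone wanting to repeat the test from the CLI rather than the GUI, a minimal sketch (VM ID 999 and target node2 are placeholders):

pvesr create-local-job 999-0 node2 --schedule '*/15'   # replicate VM 999 to node2 every 15 minutes
pvesr status                                           # watch progress and the last sync result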
I have two large Proxmox 7.0-11 hosts in a cluster, each with 50 TB of ZFS local storage, and most VMs replicating between them.
On one of these hosts are two large (5 TB) VMs, both shut down and out of use, which were nested hypervisors, one Xen, one VMware. One is replicated to the other host...
Just wanted to note this answer has solved a problem I was having, where the one machine in a three-node cluster that doesn't have a /tank2 was showing tank2 with unknown status, which in turn was causing any attempt to migrate VMs off that node to fail with "zfs error: cannot open 'tank2': no such...
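If it helps anyone else, my understanding is the CLI equivalent is to restrict the storage definition to the nodes that actually have the pool (node names here are placeholders):

pvesm set tank2 --nodes nodeA,nodeB   # hide tank2 from the node that lacks it
pvesm status                          # the unknown status should then clear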
Aha, I wasn't aware of restricted groups. I googled and read the docs, and have now set up a restricted group containing just the two main machines, which all the VMs on them will now use for HA. For the time being HA is set to "ignore" on all VMs while we're dealing with the errant switch, but...
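For the record, the CLI equivalent of what I set up in the GUI should be something like this (group name and VM ID are placeholders):

ha-manager groupadd mainpair --nodes nodeA,nodeB --restricted 1   # members may only run on these nodes
ha-manager set vm:100 --group mainpair --state ignored            # attach a VM, with HA ignored for now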
Thanks for that, clear and concise.
Having checked in tank1 and tank2 on the main machines that the disk files were still present, I moved the remaining VMs back to their proper hosts by the following process (the config move is sketched below):
1. In a terminal window on the "wrong" host:
cd /etc/pve/qemu-server; cat xxx.conf...
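The key step after that (a sketch; VM ID 100 and the node names are placeholders, and the VM must be shut down first) is moving the config file between node directories inside the cluster filesystem:

mv /etc/pve/nodes/wronghost/qemu-server/100.conf /etc/pve/nodes/righthost/qemu-server/100.conf
# /etc/pve is cluster-wide, so the VM immediately appears under the right host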