Hello,
I'm in the process of moving VMs from one Proxmox cluster to another using vzdump and qmrestore. The older cluster uses LVM for VM storage and the newer cluster uses Sheepdog.
When running qmrestore, the process does not actually load the data into Sheepdog. Instead, it...
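For context, the invocation I'd expect to work is below. This is just a sketch of the backup/restore flow, not my exact commands: the VM ID, dump directory, archive name, and the storage ID "sheepdog1" are all placeholders.

```shell
# On the old (LVM-backed) cluster: back up the VM.
# VM ID 101 and the dump directory are example values.
vzdump 101 --mode snapshot --compress lzo --dumpdir /mnt/backups

# Copy the archive to the new cluster, then restore it, naming the
# Sheepdog storage explicitly with --storage ("sheepdog1" is an example
# storage ID from /etc/pve/storage.cfg).
qmrestore /mnt/backups/vzdump-qemu-101.tar.lzo 101 --storage sheepdog1
```
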
Just noting that setting "migrate_set_downtime 0.1" in the monitor tab of my VMs works for me too. Migration takes a bit longer, but the smaller downtime is worth it.
Note that my VMs are Linux, most of which are using virtio drivers for NIC and hard disk.
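For anyone who prefers the CLI to the GUI monitor tab, the same setting can be applied through qm monitor. This is just a sketch; VM ID 101 is an example.

```shell
# Open the QEMU monitor for the VM (ID 101 is an example).
qm monitor 101
# At the "qm>" prompt, allow at most 0.1 seconds of downtime during
# live migration, then start the migration as usual:
#   migrate_set_downtime 0.1
```
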
Just noting that I am seeing this problem too. I'm using:
root@proxmox1a:~# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm...
Is there a patch for this, just for the pvestatd daemon? We are not ready to do a full upgrade at this time, but it would be great if there is somewhere we can grab the fix for just this problem.
Same problem here as well. pvestatd is now using 9.8G of memory on our node.
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-7-pve: 2.6.32-60
lvm2...
Dimi,
I'm not sure that I understand your question. If you read through the wiki page again, you'll see that I note:
The cluster should behave much like a 3-node cluster: it expects 3 votes total, but 2 votes are enough to establish quorum. So you can shut down any one of the...
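To sketch what I mean, the relevant part of cluster.conf in a two-node-plus-qdisk setup looks roughly like this (the label and timing values here are illustrative, not our exact configuration):

```xml
<!-- two nodes at 1 vote each + qdisk at 1 vote = 3 expected votes;
     quorum is reached with any 2 of the 3 -->
<cman expected_votes="3"/>
<quorumd allow_kill="0" interval="1" label="proxmox_qdisk" tko="10" votes="1"/>
```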
Hello, I've added a section to the two-node cluster configuration notes concerning setting up a quorum disk backed by iSCSI. Check it out at: http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster
I would appreciate feedback if anyone attempts this. We have used this procedure...
I thought I would post the quorum disk config that we are using. Essentially, we share a small 10MB iSCSI volume from one of our backup servers as the quorum disk. The following URLs were used for reference:
[REF]...
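In outline, the setup on each node amounts to logging in to the iSCSI target and labelling the volume as a quorum disk. The portal IP, target IQN, and device path below are placeholders, not our actual values:

```shell
# Discover and log in to the iSCSI target exporting the small volume
# (portal IP and IQN are placeholders).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2012-01.com.example:qdisk --login

# Label the volume as a quorum disk -- run this on ONE node only;
# both nodes then reference it by label in cluster.conf.
mkqdisk -c /dev/sdc -l proxmox_qdisk
```
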
A few more details about our configuration:
We are running a 2-node Proxmox 2.0 VE DRBD Cluster (2a and 2b). The main caveat to the wiki page describing this setup is that we run 2 DRBD volumes with a separate volume group on each. One is VG2a and the other is VG2b. We do this primarily in...
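For clarity, the two-resource layout looks roughly like this in DRBD terms (hostnames match our nodes, but device names, ports, and addresses here are illustrative):

```
# /etc/drbd.d/r0.res -- backs VG2a (VMs that normally run on node 2a)
resource r0 {
  protocol C;
  on proxmox2a { device /dev/drbd0; disk /dev/sdb1; address 10.0.7.1:7788; meta-disk internal; }
  on proxmox2b { device /dev/drbd0; disk /dev/sdb1; address 10.0.7.2:7788; meta-disk internal; }
}
# A second resource r1 (/dev/drbd1) backs VG2b in the same way.
```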