Hello!
We have a 2-node cluster configuration (cman) with an iSCSI quorum drive (qdiskd).
Now we need to do some maintenance on the hardware running the quorum drive (about 10-15 minutes).
How should we properly shut down the quorum drive temporarily in the cluster to avoid node fencing?
Corresponding...
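One approach that might work (just a rough sketch, not a tested procedure; the vote count below is an assumption based on a typical 1+1+1 two-node/qdisk setup) is to lower the expected votes so the cluster stays quorate without the quorum disk, stop qdiskd for the maintenance window, and restore it afterwards:

# check the current quorum/vote situation first
cman_tool status

# keep the cluster quorate with only the two node votes (run on one node)
cman_tool expected -e 2

# stop the quorum disk daemon on both nodes for the maintenance window
service qdiskd stop

# when the iSCSI target is back, start qdiskd again on both nodes
service qdiskd start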
It seems that a stopped HA VM goes to the "disabled" state and cannot be operated from the GUI.
Try executing
clusvcadm -d pvevm:1000 && clusvcadm -e pvevm:1000
then use the GUI to migrate the VM.
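To confirm the state before migrating, the rgmanager view can also be checked from the shell (pvevm:1000 is just the example service name from above):

# pvevm:1000 should show as "started" again after the -e
clustat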
Re: Live storage migration bug
Here it is:
# collie vdi list
  Name           Id  Size    Used    Shared  Creation time     VDI id   Copies  Tag
  vm-108-disk-1   0  8.0 GB  1.1 GB  0.0 MB  2013-06-20 14:35  943022        2
  vm-112-disk-1   0  8.0 GB  1.2 GB  0.0 MB  2013-06-20 13:30  b58a3b...
Re: Live storage migration bug
Also, I can't create a second HDD on sheepdog storage:
Here I add "vm-112-disk-2", but Proxmox tries to add "vm-112-disk-1".
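As a possible workaround (only a sketch; "sheepdog" stands for your actual storage ID and 8G for the desired size), the second volume could be allocated with an explicit name from the CLI and then attached to the VM:

# allocate the volume with an explicit name so the existing vm-112-disk-1 is not reused
pvesm alloc sheepdog 112 vm-112-disk-2 8G

# attach it to the VM as a second virtio disk
qm set 112 -virtio1 sheepdog:vm-112-disk-2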
Hello!
I have encountered a strange bug in the storage migration system (or the GUI?).
I have a VM with two HDDs:
vm-500-disk-1
vm-500-disk-2
I have migrated "vm-500-disk-1" to sheepdog storage; now I click on "vm-500-disk-2" and start the migration.
Here is the error:
create full clone of drive virtio1...
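For comparison, the same move can be started from the CLI (a sketch; "sheepdog" stands for the target storage ID, and virtio1 is the disk name from the error above), which sometimes prints a fuller error message:

# move the second disk of VM 500 to the sheepdog storage
qm move_disk 500 virtio1 sheepdog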
Just edit /etc/apache2/sites-enabled/pve.conf:
Listen 192.168.1.254:8006
<VirtualHost *:8006>
Also for SSH - edit /etc/ssh/sshd_config:
ListenAddress 192.168.1.254
Then restart Apache and sshd:
service apache2 restart
service ssh restart
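To verify the binding afterwards (assuming net-tools is installed):

# only 192.168.1.254 should be listed for ports 8006 (apache) and 22 (sshd)
netstat -tlnp | grep -E ':8006|:22'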
I have tried to make a separate OpenVPN bridge for 172.16.x.x; now I have two interfaces on each server:
SRV1:
openvpnbr1 - 172.16.13.1
vmbr0 - 172.16.1.1
SRV2:
openvpnbr1 - 172.16.13.2
vmbr0 - 172.16.2.1
But now there is no connectivity between those interfaces:
SRV1:
#ping 172.16.13.2
From...
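A few diagnostics that might narrow it down on SRV1 (a sketch only; the interface names are taken from the list above and the /24 on the tunnel network is an assumption):

# is there a route to 172.16.13.0/24, and does it point at openvpnbr1?
ip route

# does the ICMP traffic actually leave through the OpenVPN bridge?
tcpdump -ni openvpnbr1 icmp

# is the tap/tun device really attached to openvpnbr1?
brctl show openvpnbr1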