Hello!
We have a 2-node cluster configuration (cman) with an iSCSI quorum drive (qdiskd).
We now need to do some maintenance on the hardware running the quorum drive (about 10-15 minutes).
How should we properly shut down the quorum drive temporarily without triggering node fencing?
Corresponding...
It seems that a stopped HA VM goes into the "disabled" state and can no longer be operated from the GUI.
Try executing
clusvcadm -d pvevm:1000 && clusvcadm -e pvevm:1000
then use the GUI to migrate the VM.
Re: Live storage migration bug
Here it is:
# collie vdi list
Name           Id  Size    Used    Shared  Creation time     VDI id     Copies  Tag
vm-108-disk-1   0  8.0 GB  1.1 GB  0.0 MB  2013-06-20 14:35  943022          2
vm-112-disk-1   0  8.0 GB  1.2 GB  0.0 MB  2013-06-20 13:30  b58a3b...
Re: Live storage migration bug
Also, I can't create a second HDD on the sheepdog storage:
here I add "vm-112-disk-2", but Proxmox tries to add "vm-112-disk-1".
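This looks like a "lowest free index" naming collision: Proxmox-style storage code typically picks the first unused vm-&lt;vmid&gt;-disk-&lt;n&gt; name, so if the plugin fails to see the volumes already on the sheepdog storage, it re-issues disk-1. A minimal sketch of that allocation logic (illustrative only, not Proxmox's actual code; the function name is made up):

```python
import re

def next_disk_name(vmid, existing):
    """Return the first free vm-<vmid>-disk-<n> name for this VM."""
    pat = re.compile(r'^vm-%d-disk-(\d+)$' % vmid)
    used = {int(m.group(1)) for m in map(pat.match, existing) if m}
    n = 1
    while n in used:
        n += 1
    return 'vm-%d-disk-%d' % (vmid, n)

# If the storage plugin fails to list the existing volumes, the allocator
# hands out "vm-112-disk-1" again even though it is already taken:
print(next_disk_name(112, []))                 # -> vm-112-disk-1
print(next_disk_name(112, ['vm-112-disk-1']))  # -> vm-112-disk-2
```

So the symptom (trying to create disk-1 instead of disk-2) would be consistent with the sheepdog plugin not returning its existing volume list correctly.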
Hello!
I have encountered a strange bug in the storage migration system (or the GUI?).
I have a VM with two HDDs:
vm-500-disk-1
vm-500-disk-2
I migrated "vm-500-disk-1" to the sheepdog storage; now I click on "vm-500-disk-2" and start the migration.
Here is error:
create full clone of drive virtio1...