The last posts look very interesting.
So what's the conclusion for now? Is it safe to use DRBD9 with the lvm.lvm driver instead of lvm_thinlv.LvmThinLv?
I think so, as long as Proxmox calls DRBD 9 support a "technology preview" and Linbit says on the DRBD9 FAQ page:
"It is a...
Any update here? We are also running into this problem, and I think it should be easy for the Proxmox guys to reproduce it in a lab.
I think for the moment it would be best to use DRBD the "old way": DRBD + LVM storage (no thin provisioning).
We've probably hit a bug.
In a cluster setup with shared storage (CLVM over iSCSI), the newly created LV isn't deactivated after auto migration:
You can reproduce this as follows:
Say you have a KVM template on node1 and do a "clone to node2"; the new VM is first cloned on...
There is just one little question: /cluster/resources uses GET,
so if I use $PVE->get I cannot pass an argument like type = vm...
With pvesh it works fine, but how do I do it via the API? I've also tried POST, but that does not work.
How do I pass the type parameter via the API?
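For a GET endpoint the argument has to go into the query string of the request itself, i.e. /cluster/resources?type=vm. A minimal sketch in Python against the raw REST API (not the Perl $PVE wrapper); the host and token below are placeholders, and the token-header authentication is just one possible way to authenticate:

```python
# Placeholder host and credentials; adapt to your setup.
import requests

PVE_HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!mytoken=SECRET"}

resp = requests.get(
    f"{PVE_HOST}/api2/json/cluster/resources",
    headers=HEADERS,
    params={"type": "vm"},   # ends up as ...?type=vm in the URL
    verify=False,            # only needed with the default self-signed cert
)
resp.raise_for_status()
for item in resp.json()["data"]:
    print(item["vmid"], item["node"], item.get("name"))
```

So the parameter is not POST data; it is simply appended to the URL of the GET request.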
Is it possible to find out which node of the cluster a VM is currently running on if I only know the VMID? For example, something like this:
...and it returns the name of the node the VM is running on?
That would be really helpful...
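As far as I know there is no call that takes only the VMID, but you can get the same result by listing all VMs via /cluster/resources and filtering on the VMID. A rough Python sketch; host, token and the example VMID are placeholders:

```python
# Placeholder host and credentials; adapt to your setup.
import requests

PVE_HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!mytoken=SECRET"}

def node_of_vm(vmid):
    """Return the node a VM currently runs on, or None if the VMID is unknown."""
    resp = requests.get(
        f"{PVE_HOST}/api2/json/cluster/resources",
        headers=HEADERS,
        params={"type": "vm"},   # only list VMs, not nodes or storages
        verify=False,
    )
    resp.raise_for_status()
    for entry in resp.json()["data"]:
        if entry["vmid"] == vmid:
            return entry["node"]
    return None

print(node_of_vm(100))   # e.g. "node2"
```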
We've set up a KVM system and defined it as a template. Now we want to create a clone of this template via the API, but I cannot find anything related to this.
I've also run in pvesh:
help /nodes/prox/qemu --verbose
but did not find a suitable option.
pveversion -v
proxmox-ve-2.6.32: 3.1-109...
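For reference, cloning is done with a POST to /nodes/{node}/qemu/{vmid}/clone on the node that holds the template, with at least a newid parameter. A rough sketch in Python against the REST API; node name, VMIDs, credentials and the optional parameters are placeholders to adapt:

```python
# Hypothetical values: adjust host, token, node and VMIDs to your cluster.
import requests

PVE_HOST = "https://pve.example.com:8006"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!mytoken=SECRET"}

node = "prox"        # node where the template lives
template_id = 9000   # VMID of the template
new_id = 123         # VMID for the new clone

resp = requests.post(
    f"{PVE_HOST}/api2/json/nodes/{node}/qemu/{template_id}/clone",
    headers=HEADERS,
    data={
        "newid": new_id,      # required: VMID of the clone
        "name": "cloned-vm",  # optional: name of the new VM
        "full": 1,            # optional: full clone instead of a linked clone
    },
    verify=False,             # only needed with the default self-signed cert
)
resp.raise_for_status()
print(resp.json()["data"])    # a task UPID; the clone runs asynchronously
```

The call only starts the clone task, so you would poll the returned UPID via the task status endpoints if you need to wait for it to finish.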
Thanks, I will look into this more deeply, but I don't think that this is the problem, because the timeout occurs while PVE is trying to lock the VM, not while it writes the backup data (it never gets that far)...