The last posts look very interesting.
So what's the conclusion for now? Is using DRBD9 with the lvm.lvm driver instead of lvm_thinlv.LvmThinLv safe?
I think so, as long as Proxmox labels DRBD 9 as a "technology preview" and Linbit says on the DRBD9 FAQ page:
"It is a...
Any update here? We are also encountering this problem, and I think it should be easy for the Proxmox guys to reproduce it in a lab.
I think for the moment it would be best to use DRBD the "old way": DRBD + LVM storage (no thin provisioning).
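For reference, the "old way" maps roughly to a plain LVM storage entry in /etc/pve/storage.cfg on top of the DRBD device. This is only a sketch; the storage ID `drbd-lvm` and the volume group name `drbdvg` are hypothetical and must match your own setup:

```
lvm: drbd-lvm
        vgname drbdvg
        shared 1
        content images
```

With `shared 1` set, both nodes see the same VG backed by the dual-primary DRBD resource, and no thin pool is involved.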
We have probably hit a bug.
In a cluster setup with shared storage (CLVM over iSCSI), the newly created LV isn't deactivated after automatic migration.
You can reproduce it as follows:
Say you have a KVM template on node1 and do a "clone to node2"; the new VM is first cloned on...
There is just one little question: /cluster/resources uses GET,
so if I use $PVE->get I cannot pass an argument like type = vm.
With pvesh it works fine, but how do I do it via the API? I've also tried POST, but that does not work.
How do I pass the type parameter via the API?
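With a plain HTTP client, parameters for GET endpoints go into the query string rather than the request body, which may be why the binding's positional arguments don't work. A minimal sketch of building such a URL (the hostname is a placeholder; authentication is omitted):

```python
# Build a /cluster/resources URL with the "type" filter in the query
# string, as expected by GET endpoints of the Proxmox VE HTTP API.
from urllib.parse import urlencode

def resources_url(host, params):
    """Return the full API URL with filter params URL-encoded."""
    query = urlencode(params)
    return f"https://{host}:8006/api2/json/cluster/resources?{query}"

url = resources_url("pve.example.com", {"type": "vm"})
print(url)
# https://pve.example.com:8006/api2/json/cluster/resources?type=vm
```

You would then issue a normal GET against that URL with a valid authentication ticket or token.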
Is it possible to find the node in a cluster on which a VM is running if I only know the VMID? So, for example, something like this:
and it would return the name of the node the VM is running on?
This would be really helpful...
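One way to get this is to fetch /cluster/resources and scan the result for the matching VMID; each guest entry carries both a `vmid` and a `node` field. A sketch with a hand-made response (the actual HTTP fetch is omitted, and the sample data is hypothetical):

```python
# Look up the node a VM runs on from a /cluster/resources result.
def node_for_vmid(resources, vmid):
    """Return the node hosting the given vmid, or None if not found."""
    for entry in resources:
        if entry.get("vmid") == vmid:
            return entry["node"]
    return None

# Hypothetical sample mirroring the API's list-of-dicts shape:
sample = [
    {"type": "qemu", "vmid": 100, "node": "prox1"},
    {"type": "qemu", "vmid": 101, "node": "prox2"},
]
print(node_for_vmid(sample, 101))  # prox2
```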
We've set up a KVM system and defined it as a template. Now we want to create a clone of this template via the API, but I cannot find anything related to this.
I've also run in pvesh:
help /nodes/prox/qemu --verbose
but did not find a suitable option.
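If your version exposes it, cloning is done by POSTing to the per-VM clone endpoint; in newer Proxmox VE releases this is /nodes/{node}/qemu/{vmid}/clone. A sketch of the request, where the node name "prox", the template VMID 100, and the new VMID 200 are placeholders:

```python
# Build the clone endpoint path for a template VM.
def clone_path(node, template_vmid):
    return f"/api2/json/nodes/{node}/qemu/{template_vmid}/clone"

path = clone_path("prox", 100)
# Typical parameters: newid is required; name and full are optional.
payload = {"newid": 200, "name": "clone-of-100", "full": 1}
print(path)  # /api2/json/nodes/prox/qemu/100/clone
# POST `payload` to https://<host>:8006<path> with a valid auth ticket/token.
```

If `help /nodes/prox/qemu --verbose` shows no clone subcommand at all, your installed version may simply predate this endpoint.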
pveversion -v
proxmox-ve-2.6.32: 3.1-109...
Thanks, I will look into this more deeply, but I do not think this is the problem, because the timeout occurs while PVE is trying to lock the VM, not while it is writing the backup data (it never gets that far)...
I am still running into this problem every night.
I also found an entry in the logfile saying:
"WARNING: unable to connect to VM 100 socket - timeout after 31 retries."
So it seems NFS isn't the problem.
One second later I retried the job and it worked...
How can I debug this further?
That did not really help. Now there is no error in the tasks, but the backup is simply not created. I switched back to soft and tcp.
I do not think there is a network problem, because the connection is a direct 1 Gbit link. It also works on other Proxmox nodes without problems.
I've also run into this problem. I've got 4 VMs and back up 2 VMs per day.
Sometimes one of the VMs is not backed up, with the error message 'got timeout'.
Backup space is mounted via NFS over a 1 Gbit link.
NFS mount options...
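For comparison, a typical /etc/fstab entry for backup storage uses the hard,tcp combination rather than soft mounts, since soft mounts can silently drop writes on timeout. The server address, export path, and mountpoint below are placeholders:

```
# hypothetical backup mount; adjust server, export, and options to your setup
192.168.1.10:/backup  /mnt/backup  nfs  hard,tcp,vers=3  0 0
```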
In Proxmox 2.2 there was a command cpu_set;
in PVE 2.3 I can't find it anymore.
Admittedly the command was not working, but will there be a way to hotplug CPUs (cores) without rebooting in the near future?