I have to withdraw my statement about "only one VM per node".
A few seconds ago another VM's replication broke with the same error, on a node where a different VM's replication had already failed.
So it seems that, as time passes, all replications end up in a failed state :-(
I got now the same problem on a different node with another VM. :-(
And there seems to be no "permanent" or even "semi permanent" solution for it.
If I now remove the replication and also the ZFS volume from the backup server, and then recreate the replication, it will work for a day, and then...
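For reference, the manual workaround described above looks roughly like this on the command line. This is only a sketch: the job ID `506-0` is taken from the log further down, while the replica dataset path and the target node name are assumptions for a typical default setup and must be adapted.

```shell
# On the source node: remove the failed replication job
# (job ID 506-0 taken from the log below)
pvesr delete 506-0

# On the backup/target node: destroy the stale replica dataset
# (pool/dataset path is an assumption; verify with `zfs list` first)
zfs destroy -r rpool/data/vm-506-disk-1

# Back on the source node: recreate the job
# (target node name "pve-backup" is an assumption)
pvesr create-local-job 506-0 pve-backup --schedule '*/15'
```

As noted, this only helps until the job fails again, so it is a stopgap rather than a fix.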
@wolfgang
Any idea how to solve the "recurring" problem?
Here is my log:
2017-11-20 09:14:00 506-0: start replication job
2017-11-20 09:14:00 506-0: guest => VM 506, running => 6157
2017-11-20 09:14:00 506-0: volumes => local-zfs:vm-506-disk-1
2017-11-20 09:14:01 506-0: create snapshot...
Sure, in my opinion the Proxmox guys did a fantastic job!
The fact is that the new replication feature is very nice and I would love to continue to use it. Unfortunately it seems to be a bit buggy.
I have now checked out pve-zsync for the first time, and it seems to be similar to the replication feature. I...
I have the same problems with replication. But I do not use HA, so I don't really understand why the replication job becomes faulty.
After this happens I tried to remove the replication and also destroy the ZFS image on the replication server, as mentioned above...
After running into these troubles I tried ZFS instead of thin-LVM, and these problems do not occur with ZFS.
With ZFS, all the move, restore, etc. tasks from ZFS to NFS and vice versa work incredibly fast (10 times faster than with thin-LVM volumes).
So my way to go now with Proxmox 5.1 is...
Hello @all
I am currently testing a cluster environment where the nodes have 64GB of RAM and I am allocating around 64GB to KVMs. With ZFS configured to use 8GB, there is of course not enough RAM for all KVMs.
This is where KSM should come in to play.
KSM is activated, ksmtuned is running, and...
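For anyone wanting to verify the same setup: whether KSM is actually merging pages can be checked via the standard sysfs counters (these paths are the stock kernel interface, not Proxmox-specific):

```shell
# Is the ksmtuned service running?
systemctl status ksmtuned

# KSM run state: 1 means the merge daemon is active
cat /sys/kernel/mm/ksm/run

# Number of pages currently deduplicated; multiply by the
# page size (usually 4096 bytes) to estimate RAM saved
cat /sys/kernel/mm/ksm/pages_sharing
```

If `pages_sharing` stays at 0 under memory pressure, KSM is not actually merging anything despite being enabled.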
Hello,
I am currently testing the new replication feature and have the problem that the job is hanging.
I have tried to replicate a very small VM with a 2GB disk. It has been running for an hour now and nothing seems to happen. Here is the log:
2017-11-01 10:35:01 102-0: start replication job...
I have now tried using ionice to lower the IO priority, but I still have the same problem: running KVMs on the node get stuck, close connections, etc.
ionice -c3 qm move_disk 120 scsi0 vmdata
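One caveat with the command above (a general Linux point, not Proxmox-specific): the idle class `-c3` only has an effect when the block device uses an I/O scheduler that supports priorities, such as CFQ (or BFQ on newer kernels). With `deadline` or `noop`, ionice is silently ignored, which could explain why it changes nothing. The device name below is an assumption:

```shell
# Show the active scheduler for the disk; the one in [brackets]
# is in use (device "sda" is an assumption, adjust to your disk)
cat /sys/block/sda/queue/scheduler
```

If the bracketed scheduler is not CFQ/BFQ, ionice cannot throttle the move job.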
Hi, I am not using KSM, so I do not "overcommit" the available RAM (because of performance issues with KSM).
My system shows around 20GB of free RAM (the node has 64GB installed), so this is why I don't understand my 6GB swap partition being 100% used without a...
Hello @all,
I have noticed that on some of my new PVE 5.0 nodes, swap is around 99% used even though around 30% of the RAM is still free. Why does this happen?
Any ideas?
Thx
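One general thing worth checking here (a standard Linux tuning step, not a confirmed fix for this case): the kernel's `vm.swappiness` setting controls how eagerly pages are swapped out even while free RAM remains. The default of 60 can lead to noticeable swap usage on hosts with plenty of memory:

```shell
# Show the current value (kernel default is usually 60)
sysctl vm.swappiness

# Lower it so the kernel prefers reclaiming cache over swapping
sysctl -w vm.swappiness=10

# Make the setting persistent across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf

# Optionally force currently swapped pages back into RAM
# (only safe when enough free RAM is available)
swapoff -a && swapon -a
```

Note that already-swapped pages stay in swap until touched or until `swapoff`, so the percentage will not drop immediately after changing the setting.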