Of course, Wolfgang.
There are no ZFS processes on any of the nodes, neither sender nor receiver.
The replication tab still does not work and hangs the pvedaemon process.
I have to restart it now.
Hi Wolfgang.
zfs 0.6.5.11-pve17~bpo90
uname -a
Linux pvez30 4.10.17-2-pve #1 SMP PVE 4.10.17-20 (Mon, 14 Aug 2017 11:23:37 +0200) x86_64 GNU/Linux
I did not touch anything, it just stopped.
Now I have disabled pvesr and am waiting to do more tests.
Thank you
I had the same issue: pvedaemon hung because of a pvesr malfunction. I saw one or more pvedaemon processes running at 100% CPU; solved with systemctl restart pvedaemon.
You should check the system date too.
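For reference, this is roughly how I found and restarted the hung daemon (nothing here is specific to my setup):
# see if any pvedaemon worker is stuck at 100% CPU
top -b -n 1 | grep pvedaemon
# restart the daemon (and pveproxy too if the GUI stays unresponsive)
systemctl restart pvedaemon
systemctl restart pveproxy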
Same for me: I have a 4-node cluster with PVE 5.0 that ran fine for over a month; yesterday at 21:50 replication stopped.
Last logs on zfs receiver:
2017-10-28.20:06:23 zfs recv -F -- rpool/data/vm-102-disk-1
2017-10-28.20:06:28 zfs destroy rpool/data/vm-102-disk-1@__replicate_102-0_1509188701__...
Hello,
I'm planning to upgrade from 5.0 to 5.1, but I read about the blue screen problem on Windows KVM guests.
Are Windows KVM guests working for anyone with PVE 5.1?
If yes, please post your CPU model.
I will surely do some tests too and share the results.
Thank you
Yes, if I understood correctly, the problem was on CPUs with no virtual NMI support...
Can you check with command:
cat /proc/cpuinfo | grep nmi
You should get no output if your CPU doesn't support virtual NMI.
Just for curiosity.
Thank you
The backup doesn't take a snapshot at the storage level; it uses a KVM function to freeze the VM and intercept writes during the backup.
I don't use backups because I run pve-zsync nightly to a remote storage instead, keeping snapshots for the last x days.
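If it helps, my nightly job looks roughly like this (VM ID, IP and pool name are just placeholders):
# create a scheduled sync job for VM 102, keeping the last 7 snapshots on the target
pve-zsync create --source 102 --dest 192.168.1.50:tank/pve-backup --maxsnap 7 --name nightly
# the generated schedule can then be adjusted in /etc/cron.d/pve-zsync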
Hi,
I started a restore on node 1 with the new VM ID 101.
Then I started another restore on node 2, and the first free VM ID offered there was also 101.
I stopped the second restore because it was using the same VM ID.
I think this is a bug, because a restore should lock the VM ID it is using.
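As a workaround I now restore from the CLI and pass a VM ID I already know is free (archive path and storage name below are just placeholders):
# restore the backup to an explicitly chosen, unused VM ID
qmrestore /var/lib/vz/dump/<backup-archive>.vma.lzo 201 --storage local-zfs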
Thank you
Omg... I'm sorry... I have to run omping on all the nodes at the same time... it's working!
Anyway, I reinstalled from scratch and created the new cluster with a separate network for corosync and with all node names in /etc/hosts.
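In case it helps someone else, this is the kind of test I ran, started on every node at the same time (the node names are only examples):
omping -c 600 -i 1 -q pvez30 pvez31 pvez32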
Thank you for your support!
Thank you gulets for the dataset tip! It's a good workaround for now!
About the zfs send from the replicated dataset: I tried with pve-zsync --source vm-image and with zfs send, without success, because, as I understand it, ZFS requires that the last snapshot on the receiving side is the same as the one on the source.
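Just to illustrate what I mean, an incremental send only works when the base snapshot matches on both sides (dataset, snapshot and host names here are only examples):
# the receiver's latest snapshot must be @base for this to succeed
zfs send -i rpool/data/vm-102-disk-1@base rpool/data/vm-102-disk-1@new | ssh backuphost zfs recv -F tank/vm-102-disk-1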
From Oracle...
Don't put swap on ZFS.
I removed the swap partition and am using zram instead; it's working fine.
Of course you need a server with extra RAM when using ZFS.
You can lower the ZFS ARC cache to limit the RAM usage of ZFS (default is 8 GB), but you will lose some performance.
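For example, to cap the ARC at 4 GiB (the value is in bytes and only an example):
# runtime change
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# persistent change; then run update-initramfs -u and reboot
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf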
Hi Valerio,
you don't need to shrink the root partition, because the 85 GB is the total space of the ZFS pool.
This space will also be used for the VMs, with thin provisioning.
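You can check it yourself with something like this (assuming the default rpool layout):
zpool list rpool
zfs list -o name,used,avail,refer rpool/data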
Ok, I just learned that I can't replicate from a replicated resource; it's a ZFS behavior :-)
So now the problem is that pve-zsync doesn't create a consistent VM snapshot for VMs with 2 or more disks.
Will you implement this feature in a future release?
Thank you again
Anyway, I put all node names in the hosts files of all nodes.
In the meantime the cluster became unresponsive because the /etc/pve filesystem was blocked.
I rebooted all nodes and now it's all ok.
I noticed that omping doesn't work, maybe this is the cause?