Search results

  1. pvesr status hanging after upgrade from 5.0 to 5.1

    Of course, wolfgang. There are no ZFS processes on any of the nodes, neither sender nor receiver. The replication tab still does not work and it hangs the pvedaemon process; I have to restart it now.
  2. pvesr status hanging after upgrade from 5.0 to 5.1

    Hi wolfgang. zfs: 0.6.5.11-pve17~bpo90. uname -a: Linux pvez30 4.10.17-2-pve #1 SMP PVE 4.10.17-20 (Mon, 14 Aug 2017 11:23:37 +0200) x86_64 GNU/Linux. I did not touch anything, it just stopped. For now I have disabled pvesr and am waiting to do more tests. Thank you
  3. 3 Nodes cluster GUI Login Failed on node 2

    I had the same issue: pvedaemon hung because of a pvesr malfunction. I saw one or more pvedaemon processes running at 100% CPU; it was solved with systemctl restart pvedaemon (a command sketch follows after this list). You should check the system date too.
  4. pvesr status hanging after upgrade from 5.0 to 5.1

    Same for me. I have a 4-node cluster on PVE 5.0 that ran fine for over a month; yesterday at 21:50 replication stopped. Last logs on the ZFS receiver (a sketch for pulling these follows after the list): 2017-10-28.20:06:23 zfs recv -F -- rpool/data/vm-102-disk-1 2017-10-28.20:06:28 zfs destroy rpool/data/vm-102-disk-1@__replicate_102-0_1509188701__...
  5. Are windows kvm guest working on pve 5.1?

    Hello, I'm planning to upgrade from 5.0 to 5.1, but I read about the blue screen problem on Windows KVM guests. Are Windows KVM guests working for anyone on PVE 5.1? If yes, please post your CPU model. I will certainly do some tests too and share the results. Thank you
  6. Blue screen with 5.1

    Can you please post the CPU type? Please post the output of: cat /proc/cpuinfo
  7. PVE 5.1: KVM broken on old CPUs

    Yes, if I understood correctly, the problem was on CPUs with no virtual NMI support... Can you check with the command cat /proc/cpuinfo | grep nmi? You should get no output if your CPU doesn't support virtual NMI (a variant of this check is sketched after the list). Just out of curiosity. Thank you
  8. PVE 5.1: KVM broken on old CPUs

    With or without the patch above? Thank you!
  9. Storage Replication regularly stops

    The backup doesn't take a snapshot at the storage level; it uses a KVM function to freeze the VM and intercept writes during the backup. I don't use backups, because I also run pve-zsync nightly to remote storage, which keeps snapshots for the last x days (a job sketch follows after the list).
  10. PVE 5.1: KVM broken on old CPUs

    It seems the AMD Opteron 6xxx series doesn't support vnmi either.
  11. Storage Replication regularly stops

    Hi, my 4-node cluster works fine with ZFS replication, even after a reboot of a target machine. I'm using KVM.
  12. Proxmox 5.1 : zfs sync error

    Hi dea, did you upgrade from 5.0, or is it a new install? I have a cluster with ZFS replication and I'm planning to upgrade to 5.1...
  13. Found bug on restore

    Hi, I started a restore on node 1 with new VM ID 101. Then I started another restore on node 2, and the first free VM ID offered was again 101. I stopped the second restore that had the same VM ID. I think this is a bug, because a restore should lock the VM ID it is using. Thank you
  14. Problem adding third node to cluster

    Omg... I'm sorry... I have to run omping on all the nodes at the same time... it's working! (The test is sketched after this list.) Anyway, I reinstalled from scratch and created the new cluster with a separate network for corosync and with all node names in /etc/hosts. Thank you for your support!
  15. pve-zsync snapshots with multiple disks problem

    Thank you gulets for the dataset tip! It's a good workaround for now! About the zfs send from a replicated dataset: I tried with pve-zsync --source vm-image and with zfs send, without success, because, as I understand it, ZFS requires that the last snapshot on the receiving side match the last one on the source. From Oracle...
  16. Host swapping when it shouldn't

    Don't use swap with ZFS. I removed the swap partition and am using zram; it's working fine. Of course you need a server with extra RAM when using ZFS. You can lower the ZFS ARC cache to limit the RAM usage of ZFS (the default is 8 GB), but you will lose performance (how to set the limit is sketched after the list).
  17. Resize rpool/ROOT/pve-1 live

    Hi Valerio, you don't need to shrink the root partition, because 85 GB is the total space of the ZFS pool. This space will be used for the VMs too, with thin provisioning (a sketch for checking the space breakdown follows after the list).
  18. pve-zsync snapshots with multiple disks problem

    OK, I just learned that I can't replicate from a replicated resource; it's a ZFS behavior :-) So now the problem is that pve-zsync doesn't create a consistent VM snapshot when there are 2 or more disks (see the atomic snapshot sketch after this list). Will you implement this feature in a future release? Thank you again
  19. Problem adding third node to cluster

    Anyway, I put all the nodes in every hosts file. In the meantime the cluster became unresponsive because the /etc/pve filesystem was blocked. I rebooted all nodes and now everything is OK. I noticed that omping doesn't work; maybe this is the cause?
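For the hung pvedaemon in result 3, a minimal command sketch, assuming a PVE 5.x node with systemd; pvedaemon is the standard service name, everything else is generic:

    # look for pvedaemon workers stuck at high CPU
    top -b -n 1 | grep pvedaemon

    # check the service, then restart it to free the GUI again
    systemctl status pvedaemon
    systemctl restart pvedaemon

    # result 3 also suggests checking the system date
    timedatectl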
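The receiver-side lines quoted in result 4 have the format of zpool history output. A minimal sketch for collecting the same information, assuming the pool is named rpool as in the quote:

    # replication job overview on the source node (the command that hangs in results 1, 2 and 4)
    pvesr status

    # ZFS command history on the receiving node; the quoted timestamps come from output like this
    zpool history rpool | tail -n 20

    # list the __replicate_* snapshots that the replication jobs created
    zfs list -t snapshot -o name,creation | grep __replicate_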
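For the virtual NMI check in result 7, a slightly tighter variant of the quoted command, assuming an Intel CPU that would expose the vnmi flag in /proc/cpuinfo:

    # the quoted check: no output means the CPU does not advertise virtual NMI
    cat /proc/cpuinfo | grep nmi

    # print only the flag, once, instead of the whole flags line
    grep -o -m 1 vnmi /proc/cpuinfo

    # the CPU model, as asked for in results 5 and 6
    grep -m 1 'model name' /proc/cpuinfo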
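Result 9 mentions a nightly pve-zsync job to remote storage. A minimal sketch of such a job; the VM ID 100, the host name backuphost and the pool tank are illustrative, not taken from the posts:

    # one-off sync of VM 100 to a remote ZFS pool, keeping the last 7 snapshots
    pve-zsync sync --source 100 --dest backuphost:tank/backup --maxsnap 7 --verbose

    # roughly the same job run nightly at 01:00 via a cron entry, e.g. in /etc/cron.d/pve-zsync-vm100:
    # 0 1 * * * root pve-zsync sync --source 100 --dest backuphost:tank/backup --maxsnap 7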
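Result 14 notes that omping only gives meaningful results when it runs on all nodes at the same time. A minimal sketch of the multicast test, assuming three hypothetical node names node1, node2 and node3:

    # start this same command on every node in parallel; each instance answers the others,
    # so a node started alone will only report waiting/timeouts
    omping -c 600 -i 1 -q node1 node2 node3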
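Result 16 suggests lowering the ZFS ARC to limit RAM usage. A minimal sketch, with an example limit of 4 GiB; the exact value is an assumption, size it to the host:

    # cap the ARC at 4 GiB (4 * 1024^3 bytes) for future boots
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

    # apply the limit immediately without rebooting
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

    # refresh the initramfs so the module option is picked up at boot
    update-initramfs -u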
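Result 17 says the 85 GB is the whole pool, shared with the thin-provisioned VM disks. A minimal sketch for checking that breakdown, assuming the default Proxmox pool name rpool:

    # overall pool size, allocation and free space
    zpool list rpool

    # per-dataset space accounting, including snapshots and child datasets
    zfs list -o space -r rpool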
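Result 18 asks for consistent snapshots of a VM with more than one disk. At the ZFS level, passing several snapshot names to one zfs snapshot call creates them in a single transaction; a minimal sketch, with hypothetical dataset names for a two-disk VM 100:

    # both snapshots are created at the same moment in time
    zfs snapshot rpool/data/vm-100-disk-1@nightly rpool/data/vm-100-disk-2@nightly

This covers crash consistency of the disks; application-level consistency would still need the guest to be frozen, as described in result 9.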
