Search results

  1. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    So how do I spot a deadlock? Is the zvol kernel call trace shown in the post above at https://forum.proxmox.com/threads/proxmox-v6-servers-freeze-zvol-blocked-for-more-than-120s.57765/page-3#post-276627 a deadlock? For your information, after I set the VM disk cache mode to default (no...
  2. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    Is there any chance you are using writeback cache in the VM on the ZFS zvol local storage? And have you tried disabling the CPU C6 state and only enabling C0/C1/C1E in the BIOS?
  3. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    Thanks for the link. Did it solve your issue? And does the newer version introduce more bugs? If it's stable enough, I'm willing to give it a try next Monday. I have also just disabled the Xeon E5 V4 CPU C6 C-states in the BIOS. Let's see over the weekend whether that helps or not.
  4. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    ZFS 0.8.1, the one which came with the ISO installer. I use a secondary zpool mirror (RAID1) on 1TB enterprise SSD disks, and I could run a zpool scrub on the pool just fine. It's only when it's idling around 2-3am that the kernel and the ZFS zvol freeze. # pveversion -v...
  5. Proxmox V6 Servers freeze, Zvol blocked for more than 120s

    Is this issue fixed yet? I'm having the same problem on Proxmox VE 6.0.4 (ISO installer), where the kernel freezes when accessing a zvol on a secondary zpool on SSD disks.
  6. Does Proxmox support Host OS Resource Reservation?

    Hi all, does Proxmox support host OS resource reservation so it won't overcommit memory? Microsoft Hyper-V and Citrix XenServer, for example, have dedicated resources for the host OS, especially RAM, so they block VM creation or migration when the new VM would use more than the free available...
  7. Maximum VM Disk Size for Ceph RBD

    I see, thanks for the links. The concern with CephFS is that its direct and/or synchronous write throughput is terribly slow compared with RBD on all-HDD OSDs. Also, CephFS quota on the kernel client only works on kernel 4.17 or above, and only on a Mimic Ceph cluster or above.
  8. Maximum VM Disk Size for Ceph RBD

    Hi Proxmoxers, I could not find any docs regarding the maximum single RBD size for KVM. Could you please shed some light on the maximum size of a single Ceph RBD disk which Proxmox can handle? Is it safe to allocate a single Ceph RBD disk of 100TB or more for storing backup files?
  9. Proxmox VE Ceph Benchmark 2018/02

    Converting from FileStore to BlueStore might help reduce the double-write penalty.
  10. Proxmox VE Ceph Benchmark 2018/02

    Will there be a fio synchronous write benchmark inside a VM running on top of Proxmox and Ceph? I would love to compare numbers. Is 212 IOPS for a synchronous fio 4k write test on a VM acceptable? I know the Samsung SM863a SSD can push 6k IOPS as local storage. [A sample fio command of this kind is sketched after these results.]
  11. [SOLVED] Slow access to pmxcfs in PVE 5.2 cluster

    Using tcpdump to troubleshoot the corosync issue, I managed to find some old Proxmox nodes which were removed from the cluster but got turned on again accidentally. Removing the old nodes without reinstalling them, as documented in the Proxmox Cluster Manager documentation, fixed the issue. [A sample tcpdump filter for corosync traffic is sketched after these results.]
  12. [SOLVED] Slow access to pmxcfs in PVE 5.2 cluster

    Hi Stoiko, the omping test ran successfully without packet loss on unicast, and the latency is below 0.2ms. Reducing the cluster down to 6 nodes did not help. Could the frequent membership re-forming be causing the slow pmxcfs access? The corosync and pve-cluster journals are as follows. Dec 21... [An example omping invocation is sketched after these results.]
  13. [SOLVED] Slow access to pmxcfs in PVE 5.2 cluster

    I see, thanks for the information about the limitation. We have a similar setup and do not have an issue. I'll give omping a try then. Corosync is running on top of an OVS bridge + LACP bond. Unfortunately, adding a new NIC is not an option.
  14. [SOLVED] Slow access to pmxcfs in PVE 5.2 cluster

    Hi Stoiko, thanks for the advice. Due to a network constraint, we are using a custom corosync configuration with UDPU to support more than 16 nodes. We are also running a mixed cluster of PVE 5.0 and PVE 5.2. From the corosync journal, I could only find one node that is flapping and kept rejoining the... [A minimal UDPU corosync.conf outline is sketched after these results.]
  15. [SOLVED] Slow access to pmxcfs in PVE 5.2 cluster

    Hi Proxmoxers, what could be causing slow access (read and write) to pmxcfs, which is mounted at /etc/pve, in a PVE 5.2 cluster? As a test, it takes more than 10 seconds to create an empty file inside /etc/pve. There are no performance issues on the local storage, as confirmed by mounting the pmxcfs... [A simple timing test for this is sketched after these results.]
  16. Proxmox CPU model kvm64 with PCID and AES flags

    Awesome, thanks for testing. Is the AES performance of the default kvm64 CPU type the same as the host CPU type as well? I noticed a good reduction in CPU utilization on an nginx load balancer VM when using CPU type host compared with kvm64.
  17. Proxmox CPU model kvm64 with PCID and AES flags

    Is this on the latest PVE 5.2, and only from manually editing the VM conf?
  18. [SOLVED] qm option to reattach unused disks

    Thanks Dietmar. Digging through the "qm" manual, I ended up using "qm set" instead; "qm rescan" did not reattach the unused disk as usable again. qm set <VMID> --scsi[n] local-zfs:vm-<VMID>-disk-[n],discard=on [A concrete example of this command is sketched after these results.]
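
For result 10, a minimal fio run of the kind being compared there might look as follows; the test file path, size, and runtime are illustrative assumptions, not values from the thread.

    # 4k synchronous (O_DIRECT + O_SYNC) single-threaded write test inside the VM
    fio --name=syncwrite --filename=/root/fio-sync-test.bin --size=1G \
        --rw=write --bs=4k --ioengine=libaio --iodepth=1 \
        --direct=1 --sync=1 --runtime=60 --time_based --group_reporting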
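
For result 11, corosync traffic can be watched with a capture filter like the one below to spot packets from unexpected (removed) nodes; the interface name is a placeholder, and 5404/5405 are corosync's default UDP ports.

    tcpdump -n -i eth0 udp port 5404 or udp port 5405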
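
For result 12, a network test of the kind mentioned could be run with omping on every cluster node at the same time; the node names are placeholders.

    # run the same command on node1, node2 and node3 in parallel,
    # then inspect the reported packet loss and latency
    omping -c 600 -i 1 -q node1 node2 node3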
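
For result 14, selecting the UDPU (unicast) transport in corosync 2.x is done roughly as outlined below; the cluster name, node names, and addresses are illustrative assumptions only.

    totem {
        version: 2
        secauth: on
        cluster_name: example-cluster
        transport: udpu
    }
    nodelist {
        node {
            name: pve-node1
            nodeid: 1
            quorum_votes: 1
            ring0_addr: 192.0.2.11
        }
        # ... one node { } entry per cluster member
    }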
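
For result 15, the latency described there can be reproduced by timing the creation of an empty file in /etc/pve; the test file name is a placeholder.

    # time how long pmxcfs takes to create (and then remove) an empty file
    time touch /etc/pve/test-latency
    rm /etc/pve/test-latency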
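
For result 18, a concrete (hypothetical) instance of that qm set call, reattaching unused disk 1 of VM 101 as scsi1, would be:

    qm set 101 --scsi1 local-zfs:vm-101-disk-1,discard=on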
