So how does one spot a deadlock?
Is the zvol kernel call trace shown in the post above at https://forum.proxmox.com/threads/proxmox-v6-servers-freeze-zvol-blocked-for-more-than-120s.57765/page-3#post-276627 a deadlock?
For your information, after I set the VM disk cache mode to default (no...
Is there any chance you are using writeback cache on the VM's ZFS zvol local storage?
And have you tried disabling the CPU C6 state and enabling only C0/C1/C1E in the BIOS?
Thanks for the link. Did it solve your issue? And does the newer version introduce more bugs? If it's stable enough, I'm willing to give it a try next Monday.
I have also just disabled the Xeon E5 v4 CPU C6 C-states in the BIOS. Let's see over the weekend whether that helps or not.
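In case it helps anyone verify that the BIOS change actually took effect, the idle states the kernel sees can be listed from the OS side; this is just a sketch assuming the standard cpuidle sysfs interface:

# cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name

If C6 still shows up there, capping it via a kernel boot parameter such as intel_idle.max_cstate=1 should have a similar effect, though I have not tested that route myself.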
ZFS 0.8.1, the one that came with the ISO installer.
I used a secondary mirrored zpool (RAID1) on 1TB enterprise SSDs, and I could run a zpool scrub on that pool just fine. It is only when the system is idling around 2-3am that the kernel and the ZFS zvol freeze.
# pveversion -v...
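For reference, the scrub check mentioned above was nothing special, roughly the following (the pool name here is just a placeholder):

# zpool scrub ssdpool
# zpool status -v ssdpool

The scrub finished without errors.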
Is this issue fixed yet? I'm having the same problem on Proxmox VE 6.0.4 (ISO installer), where the kernel freezes when accessing a zvol on a secondary zpool on SSD disks.
Hi All,
Does Proxmox support host OS resource reservation so it won't overcommit memory?
Microsoft Hyper-V and Citrix XenServer, for example, reserve dedicated resources for the host OS, especially RAM, so they block VM creation or migration when the new VM would use more than the free available...
I see, thanks for the links.
The concern with CephFS is that its direct and/or synchronous write throughput is terribly slow compared with RBD on all-HDD OSDs. Also, CephFS quotas on the kernel client only work on kernel 4.17 or above, and only against a Mimic or newer Ceph cluster.
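For reference, CephFS quotas are just extended attributes on a directory; a rough example (the mount point and size are placeholders):

# setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/backups
# getfattr -n ceph.quota.max_bytes /mnt/cephfs/backups

and it is the enforcement of exactly this attribute that needs the newer kernel client and a Mimic or newer cluster.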
Hi Proxmoxers,
I could not find any documentation on the maximum size of a single RBD image for KVM.
Could you please shed some light on the maximum size of a single Ceph RBD disk that Proxmox can handle?
Is it safe to allocate a single Ceph RBD disk of 100TB or more for storing backup files?
Will there be an fio synchronous write benchmark inside a VM running on top of Proxmox and Ceph? I would love to compare numbers.
Is 212 IOPS for a synchronous fio 4k write test in a VM acceptable? I know a Samsung SM863a SSD can push 6k IOPS as local storage.
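For comparison, the test I ran was roughly the following; the file path, size and runtime are just what I happened to pick, so treat it as a sketch:

# fio --name=synctest --filename=/root/fio-test.bin --size=1G --bs=4k --rw=write --ioengine=libaio --iodepth=1 --numjobs=1 --direct=1 --fsync=1 --runtime=60 --time_based

The fsync=1 is what makes it a synchronous write test.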
Using tcpdump to troubleshoot the corosync issue, I managed to find some old Proxmox nodes which were removed from the cluster but accidentally got turned on again. Removing the old nodes without reinstalling, as documented in the Proxmox Cluster Manager documentation, fixed the issue.
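For anyone hitting the same thing, the capture was roughly along these lines (the interface name is a placeholder; 5404/5405 are the default corosync ports):

# tcpdump -n -i vmbr0 udp port 5404 or udp port 5405

The old nodes showed up as unexpected source addresses in the capture.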
Hi Stoiko,
The omping test completed successfully without packet loss on unicast, and the latency is below 0.2ms.
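The test followed the invocation from the Proxmox docs, roughly this (node names are placeholders):

# omping -c 10000 -i 0.001 -F -q node1 node2 node3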
Reducing the cluster down to 6 nodes did not help. Could the frequent membership re-forming be causing the slow pmxcfs access?
The corosync and pve-cluster journals are as follows.
Dec 21...
I see. Thanks for the information about the limitation.
We have a similar setup and do not have the issue. I'll give omping a try then. Corosync is running on top of an OVS bridge + LACP bond.
Unfortunately, adding a new NIC is not an option.
Hi Stoiko, thanks for the advice.
Due to network constraints, we are using a custom corosync configuration with UDPU to support more than 16 nodes. We are also running a mixed cluster of PVE 5.0 and PVE 5.2. From the corosync journal, I could only find one node that is flapping and keeps rejoining the...
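For context, the relevant part of our corosync.conf looks roughly like this; addresses and names are placeholders, not the real values:

totem {
    version: 2
    cluster_name: pve-cluster
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 10.0.0.0
    }
}
nodelist {
    node {
        name: node1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.0.0.1
    }
    # ...one entry per node...
}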
Hi Proxmoxers,
What could be causing slow access (read and write) to pmxcfs, which is mounted at /etc/pve, in a PVE 5.2 cluster?
As a test, it takes more than 10 seconds to create an empty file inside /etc/pve. There is no performance issue on the local storage, which I confirmed by mounting the pmxcfs...
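The test itself is trivial, roughly this (the file name is arbitrary):

# time touch /etc/pve/latency-test
# time rm /etc/pve/latency-test

The touch alone takes more than 10 seconds here, while the same operation on local storage shows no such delay.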
Awesome, thanks for testing.
Is AES performance with the default kvm64 CPU type the same as with the host CPU type? I noticed a good reduction in CPU utilization on an nginx load balancer VM when using CPU type host compared with kvm64.
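For reference, switching the CPU type on the VM is just the following (the VMID is a placeholder):

# qm set <VMID> --cpu host

and one way to confirm the aes flag is actually exposed inside the guest is:

# grep -m1 -o aes /proc/cpuinfo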
Thanks Dietmar,
Digging through the "qm" manual, I ended up using "qm set" instead. "qm rescan" did not reattach the unused disk and make it usable again.
qm set <VMID> --scsi[n] local-zfs:vm-<VMID>-disk-[n],discard=on
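To double-check that the disk shows up as scsi[n] again instead of unused[n], the current VM configuration can be listed with:

# qm config <VMID>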