This issue has been solved. This thread is for anybody with slow I/O performance who is searching for keywords: the cause might be ZFS trimming the rpool. I'm running Proxmox 7.3-1 on a Supermicro A2SDi-8C+-HLN4F with an HP SSD EX920 1TB for the rpool.
My Proxmox node was unresponsive. VMs...
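If you suspect the same thing, a quick check (assuming your pool really is named rpool) is whether a trim is currently running and whether autotrim is enabled:
zpool status -t rpool         # shows per-vdev trim progress, if any
zpool get autotrim rpool      # shows whether autotrim is enabled
zpool set autotrim=off rpool  # optionally disable it and run "zpool trim rpool" on a schedule instead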
Same issue here with a Debian container. I could reliably trigger I/O errors by building a development project inside the container, which caused the root partition to remount read-only. I was also using virtio block over ZFS storage. I upgraded to the pve-qemu-kvm_6.1.0-3 package linked by Tom and the...
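For anyone hitting the same thing, what I did was roughly this (the exact .deb filename is whatever Tom linked, so treat it as a placeholder):
pveversion -v | grep pve-qemu-kvm        # confirm the currently installed version
dpkg -i pve-qemu-kvm_6.1.0-3_amd64.deb   # install the downloaded test package (filename assumed)
Afterwards the VM has to be fully stopped and started again so it actually runs the new QEMU binary.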
This has been solved - this thread is for anybody else who has the same problem and is searching for keywords.
An upgrade to Proxmox 7.0 has tanked the performance of a virtualized pfSense 2.5.2. Pre-upgrade, a 100 Mbps download used about 20% vCPU; now it can't even sustain 50 Mbps...
Could be memory fragmentation. KVM needs contiguous chunks of memory, and if the host's memory is fragmented you'll need to compact it.
You could try "sysctl vm.compact_memory=1".
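A rough way to check whether fragmentation is really the problem is to look at the buddy allocator stats before and after compaction:
cat /proc/buddyinfo          # few entries in the right-hand (high-order) columns means fragmented memory
sysctl vm.compact_memory=1   # ask the kernel to compact memory now
cat /proc/buddyinfo          # see whether the high-order columns recovered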
Unfortunately my cluster has failed again even with pve2-test1. The cluster split into two subsets this time.
First subset, two nodes.
root@hera:~# pvecm status
Quorum information
------------------
Date: Wed Aug 7 00:49:29 2019
Quorum provider: corosync_votequorum
Nodes...
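While the cluster is split I also check the knet links and the corosync log on each node to see whether the links themselves are dropping (the time window is just an example):
corosync-cfgtool -s                    # link status of the local node's knet links
journalctl -u corosync --since "-1h"   # look for link down / retransmit messages around the split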
I'm also seeing a much more stable PVE cluster with http://download.proxmox.com/temp/libknet1_1.10-pve2~test1_amd64.deb installed. I've got a 5-node cluster in a single VLAN, and corosync was frequently "splitting" into subsets of nodes, e.g.
root@maia# pvecm status
Quorum information...
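In case it helps anyone else, installing the test package was basically this on every node, one at a time (sketch; check the filename against the download link above):
wget http://download.proxmox.com/temp/libknet1_1.10-pve2~test1_amd64.deb
dpkg -i libknet1_1.10-pve2~test1_amd64.deb
systemctl restart corosync               # corosync loads the new libknet on restart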
Yes, you can do this. I have VM and container images on Ceph RBD and shared network filesystems on CephFS. Here's how I configured mine (a rough command sketch follows the list):
enabled metadata servers on three nodes
created new pools "fs_data" on HDD (2-repl) and "fs_meta" on SSD (3-repl)
created new cephfs using the two pools...
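Roughly the commands I used (a sketch from memory; pool names, PG counts and crush rule names are just my choices, adjust for your cluster):
# device-class based rules so the pools land on HDDs / SSDs respectively
ceph osd crush rule create-replicated rule_hdd default host hdd
ceph osd crush rule create-replicated rule_ssd default host ssd
# data pool on HDD with size 2, metadata pool on SSD with size 3
ceph osd pool create fs_data 128 128 replicated rule_hdd
ceph osd pool set fs_data size 2
ceph osd pool create fs_meta 32 32 replicated rule_ssd
ceph osd pool set fs_meta size 3
# one MDS per chosen node (run on each of the three nodes), then the filesystem itself
pveceph mds create
ceph fs new cephfs fs_meta fs_data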