Search results

  1. [SOLVED] slow performance caused by ZFS trimming

    This issue has been solved; this thread is for anybody with slow I/O performance who is searching for keywords. The cause might be ZFS trimming the rpool. I'm running Proxmox 7.3-1 on a Supermicro A2SDi-8C+-HLN4F with an HP SSD EX920 1TB for the rpool. My Proxmox node was unresponsive. VMs... (A trim-status sketch follows after this list.)
  2. VM I/O errors on all disks

    Same issue here with a Debian container. I could reliably get I/O errors when building a development project inside the container, which caused the root partition to remount read-only. I was also using VirtIO block over ZFS storage. Upgraded the pve-qemu-kvm_6.1.0-3 package linked by Tom and the... (An upgrade sketch follows after this list.)
  3. [SOLVED] pfSense VM slow throughput, high CPU mostly interrupts

    This has been solved; this thread is for anybody else who has the same problem and is searching for keywords. An upgrade to Proxmox 7.0 tanked the performance of virtualized pfSense 2.5.2. Pre-upgrade, a 100 Mbps download used about 20% vCPU; now it can't reach that bandwidth, even 50 Mbps...
  4. PSA -- Do not upgrade to systemd 247

    Recurring theme. Thanks for the workaround.
  5. Proxmox VE 6.0 released!

    Could be memory fragmentation: KVM needs a contiguous piece of memory, and if it's fragmented you'll need to defragment. You could try "sysctl vm.compact_memory=1" (a compaction sketch follows after this list).
  6. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Unfortunately my cluster has failed again, even with pve2-test1. The cluster split into two subsets this time. First subset, two nodes:

        root@hera:~# pvecm status
        Quorum information
        ------------------
        Date:             Wed Aug 7 00:49:29 2019
        Quorum provider:  corosync_votequorum
        Nodes...
  7. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I'm also seeing a much more stable PVE cluster with http://download.proxmox.com/temp/libknet1_1.10-pve2~test1_amd64.deb installed (an install sketch follows after this list). I've got a 5-node cluster in a single VLAN, and corosync was frequently "splitting" into subsets of nodes, e.g.:

        root@maia# pvecm status
        Quorum information...
  8. Use case CephFS integration

    Yes, you can do this. I have VM and container images on Ceph RBD and shared network filesystems on CephFS. Here's how I configured mine (a command sketch follows after this list):

      - enabled metadata servers on three nodes
      - created new pools "fs_data" on HDD (2-repl) and "fs_meta" on SSD (3-repl)
      - created a new CephFS using the two pools...
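
For the ZFS-trimming thread (result 1), the quickest confirmation is to ask the pool itself. A minimal sketch, assuming OpenZFS 0.8+ and the pool name "rpool" from the post:

    zpool status -t rpool      # shows per-vdev TRIM state and progress, if any
    zpool get autotrim rpool   # is automatic TRIM enabled on the pool?
    zpool trim -s rpool        # suspend a running manual TRIM while you investigate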
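
For the VM I/O-error thread (result 2), the fix was a newer pve-qemu-kvm build. A hedged sketch of the usual upgrade-and-restart cycle; the exact .deb linked in the thread is not reproduced here, and VM ID 100 is a placeholder:

    apt update && apt install pve-qemu-kvm   # or: dpkg -i <the pve-qemu-kvm .deb from the thread>
    qm stop 100 && qm start 100              # the VM must be power-cycled (or live-migrated)
                                             # to run under the new QEMU binary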
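
For the memory-fragmentation suggestion (result 5), a minimal sketch of checking fragmentation and triggering compaction; these are standard kernel interfaces, not Proxmox-specific:

    cat /proc/buddyinfo          # sparse high-order (rightmost) columns indicate fragmentation
    sysctl vm.compact_memory=1   # ask the kernel to compact memory now
    # equivalently: echo 1 > /proc/sys/vm/compact_memory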
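
For the corosync threads (results 6 and 7), a sketch of installing the test libknet build from the URL quoted above and re-checking quorum; this assumes you repeat it on every cluster node:

    wget http://download.proxmox.com/temp/libknet1_1.10-pve2~test1_amd64.deb
    dpkg -i libknet1_1.10-pve2~test1_amd64.deb
    systemctl restart corosync
    pvecm status    # all nodes should now report a single quorate partition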
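
For the CephFS thread (result 8), a sketch of the setup described there. The pool names and replica counts come from the post; the PG counts, CRUSH rule names, and filesystem name are assumptions:

    # on each of the three MDS nodes:
    pveceph mds create

    # device-class-aware CRUSH rules, then the two pools:
    ceph osd crush rule create-replicated rule_hdd default host hdd
    ceph osd crush rule create-replicated rule_ssd default host ssd
    ceph osd pool create fs_data 128 128 replicated rule_hdd
    ceph osd pool create fs_meta 32 32 replicated rule_ssd
    ceph osd pool set fs_data size 2    # 2-repl on HDD
    ceph osd pool set fs_meta size 3    # 3-repl on SSD

    # metadata pool first, then data pool:
    ceph fs new cephfs fs_meta fs_data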
