1.

    Problem: PVE 8 migrating a replicated VM causes the VM to get 3-10x less IOPS

    I didn't really know how to label this issue. I couldn't find any similar post on the forums either. Two identical Dell servers, same specs. I am running PVE 8.0.4, updated to the latest version. Servers are in a cluster (no QDevice yet, it's being prepared and is in testing at the moment). I have HW...
  2.

    Low IOPS in LXC container on read

    I tested IOPS in an LXC container on Debian 12. I don't understand why I got very low IOPS on read operations: read - 7k, write - 81k. "readtest: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32"...
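    A job like the one quoted can be reproduced from the shell roughly as follows; the test file path, size, and runtime are illustrative assumptions, not values from the thread:

    ```shell
    # Random-read benchmark matching the quoted job line:
    # 4k blocks, libaio engine, queue depth 32.
    # --direct=1 bypasses the page cache so cached reads
    # don't inflate the result (a common cause of odd numbers).
    fio --name=readtest --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 \
        --size=1G --runtime=30 --time_based \
        --filename=/tmp/fio-testfile
    ```

    If read IOPS stay far below write IOPS, comparing runs with and without `--direct=1` helps separate page-cache effects from real device behaviour.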
  3.

    ZFS Pool Optimization

    Hi, I have a PVE box set up with two ZFS pools: root@pve:~# zpool status -v ONE_Pool pool: ONE_Pool state: ONLINE scan: scrub in progress since Tue Nov 29 11:48:09 2022 194G scanned at 6.91G/s, 2.67M issued at 97.7K/s, 948G total 0B repaired, 0.00% done, no estimated completion time...
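    For a first look at where pool IOPS are going, per-vdev statistics and a few pool/dataset properties are worth checking; a sketch using the pool name from the quoted output:

    ```shell
    # Per-vdev read/write operations and bandwidth, refreshed every 5 s
    zpool iostat -v ONE_Pool 5
    # ashift is fixed at vdev creation; a wrong value hurts IOPS permanently
    zpool get ashift ONE_Pool
    # Dataset tunables that commonly affect IOPS
    zfs get recordsize,compression,atime ONE_Pool
    ```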
  4.

    Ceph uses false osd_mclock_max_capacity_iops_ssd value

    Hello everyone, I recently installed Ceph into my 3-node cluster, which worked out great at first. But after a while I noticed that the Ceph pool would sometimes hang and stutter. That's when I looked into the configuration and saw this: I use 3 identical SSDs, and checked whether every node uses...
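    If Ceph has recorded implausible measured capacity values, they can be inspected and cleared so the mClock scheduler falls back to the default; the OSD id and the example value below are placeholders:

    ```shell
    # Show any per-OSD overrides of the measured IOPS capacity
    ceph config dump | grep osd_mclock_max_capacity_iops_ssd
    # Drop the bogus measured value for one OSD (repeat per OSD) ...
    ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd
    # ... or pin an explicit value instead
    ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 20000
    ```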
  5.

    [SOLVED] Dynamically set Disk (virtio0) limits using CLI (pvesh/qm)

    Hi, whenever I set disk limits using pvesh or qm command line tools, the setting is not applied immediately. Instead, the new setting turns orange in the web UI meaning that it will be applied at the next reboot. If I set the parameters directly in the web UI, they are immediately applied with...
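    For reference, the drive options that carry the limits can be set in one go from the CLI; the VM id, storage, and limit values below are examples only, not taken from the thread:

    ```shell
    # Rewrite the virtio0 drive string with read/write IOPS
    # and bandwidth caps (qm is the Proxmox VE VM manager)
    qm set 100 --virtio0 local-lvm:vm-100-disk-0,iops_rd=500,iops_wr=500,mbps_rd=100,mbps_wr=100
    ```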
  6.

    IO [iops bw] fair balancing between multiple VM on ZFS ZVOL

    Good day. Is there a best practice for configuring fair balancing of IO requests from multiple VMs when a block device in the zpool is 100% loaded? Thank you.
  7.

    60 IOPS in Windows VM with VirtIO SCSI, is this normal?

    Hello everyone, I'd like to start by saying that I'm new-ish to Proxmox and especially new to server hardware. I recently got my hands on an HPE ProLiant ML350 Gen9 tower server without storage. The specs of the server are 2x Intel(R) Xeon(R) E5-2620 v4 CPUs, 32GB of RAM (16GB per CPU), and a...
  8.

    CephFS vs VirtIO SCSI Write IOPS

    Hi, I've been testing our Proxmox Ceph cluster and have noticed something interesting. I've been running fio benchmarks against a CephFS mount and within a VM using VirtIO SCSI. CephFS on /mnt/pve/cephfs - root@pve03:/mnt/pve/cephfs# fio --name=random-write --ioengine=posixaio --rw=randwrite...
  9.

    VMs performing poorly

    Hello there, I'm currently working on resolving some performance issues on a Proxmox installation. There is a ZFS array on the network which is hosting the VMs, and the behavior is strange. Currently, the server is hosting 3 Windows 10 VMs with the configuration shown in the attached image...
  10.

    FIO Benchmark on the ZVOL. Fast in Proxmox, slow in VM. Same benchmark, same ZVOL

    Hello! Title says it all, but here are the details: Host: Proxmox VE 6.3-6 Guest: Ubuntu Server 18.04.2 LTS SSD: ADATA XPG SX8200 Pro 2 TB (two in ZFS mirror) ZVOL: 90GB, thin-provisioned, sync=disabled When I benchmark this ZVOL directly on the host I get almost the maximum performance that this SSD...
  11.

    [SOLVED] IO performance / TBW guarantee with SSDs

    Hello dear Proxmox forum! My configuration: - Ryzen 7 1700X (8C/16T) - 32GB 3000MHz CL16 DDR4 - 2x 4TB SMR 5400RPM HDD (ZFS mirror) - (soon) 2x 1TB Crucial MX500, TBW 360TB, 95K IOPS (ZFS mirror) - 750W brand-name PSU. I am currently wondering whether there is a tool to see how much I am writing to...
  12.

    RAIDZ-2 or RAID10

    Warm greetings to the community! I am currently looking into running open media vault as a VM. I would like to provide the disk for the data storage in OMV as a virtual disk. I have 4 x 4 TB drives, which I want to combine into a ZFS pool via Proxmox...
  13.

    CEPH & LXC Disk IOP Limits

    Short quick question... :) Is it possible to limit disk IOPS for LXC containers located on CEPH storage, if so, how? Cheers!
  14.

    Cluster resource usage per VM?

    Hi everyone! As far as I know, Proxmox doesn't have a tool that shows overall cluster, or at least per-server, resource usage per VM/CT. For example, the cluster experiences performance degradation and I want to see if there is any VM/CT which is using too many IOPS and too much RAM; what would be the easiest way...
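    The cluster API does expose per-guest counters that can be polled from any node; a sketch using pvesh (the jq post-processing is an assumption about the desired output, not a built-in Proxmox feature):

    ```shell
    # All VMs cluster-wide with cumulative disk IO and memory,
    # sorted by disk reads; requires jq
    pvesh get /cluster/resources --type vm --output-format json \
      | jq -r 'sort_by(.diskread) | reverse
               | .[] | "\(.vmid)\t\(.name)\t\(.diskread)\t\(.diskwrite)\t\(.mem)"'
    ```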
  15.

    ZFS Raid 10 vs SW Raid 10

    Hey guys, I've been using Proxmox for a really long time now, but I still get confused when "ZFS" joins the club. I have a few questions, which I hope some of you can answer. I moved away from unRAID for home-server use because of lower read and write IO performance, and because I plan to nested...
  16.

    IOPS saturation on Storage

    I have an HP P2000 G3 iSCSI shared storage attached to my Proxmox cluster via LVM. When I move a vdisk from local LVM storage (a local disk on the node) to the shared storage, I get IOPS saturation on it. I tried to limit the interface bandwidth via the datacenter.cfg file but the problem still...

