Performance

  1. T

    Upgrading CPU decreased performance

    I upgraded my CPU from a 3900X to a 4950X, and now I get screen tearing with my Ubuntu 20 guest. I have 128 GB of RAM, with 60 GB dedicated to this VM. I tried another VM and have a similar problem.
  2. T

    VM I/O Performance with Ceph Storage

    Hi everybody, a while ago we set up a three-node Proxmox cluster, and as the storage backend we use the built-in Ceph features. After a while we noticed a strong decrease in I/O performance in the VMs when it comes to writes of small files. Writing a single big file at once seems to perform quit...
  3. X

    [SOLVED] Choice of storage type for VMs (NVME, SATA SSD, HDD)

    Hello all, I am pretty new here and I am building my first ever homelab to run Proxmox. My main use cases will be: - Running development VMs for my personal projects: mainly backend development and machine learning - A Plex server: I plan to attach my library from a NAS to a VM to keep storage...
  4. bbgeek17

    [TUTORIAL] Proxmox VE 7.2 Benchmark: aio native, io_uring, and iothreads

    Hey everyone, a common question in the forum and to us is which settings are best for storage performance. We took a comprehensive look at performance on PVE 7.2 (kernel=5.15.53-1-pve) with aio=native, aio=io_uring, and iothreads over several weeks of benchmarking on an AMD EPYC system with 100G...
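    The settings benchmarked in that tutorial are per-disk options. As a sketch (the VM ID, storage name, and disk name below are hypothetical placeholders), they can be applied to an existing VM disk with `qm set`:

    ```shell
    # Hypothetical example: VM 100 with a SCSI disk on storage "local-lvm".
    # aio can be "native" or "io_uring"; iothread=1 gives this disk its own
    # dedicated I/O thread in QEMU.
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=io_uring,iothread=1

    # iothread=1 only takes effect with the VirtIO SCSI single controller,
    # which attaches each disk to its own controller:
    qm set 100 --scsihw virtio-scsi-single
    ```

    The changes take effect on the next VM start; which combination wins depends on the workload and storage backend, which is exactly what the benchmark in the thread measures.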
  5. B

    Slow network

    I have a Proxmox host with two built-in 10G network cards. It is a new install, so it isn't doing much; I have an Ubuntu Linux VM and a TrueNAS VM, neither in production. I started to test performance on my TrueNAS and noticed it was really slow: 29 MB/s on my primary NIC (1G), whereas testing the...
  6. F

    [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    Hi, until last Wednesday we had a cute, high-performing little CEPH cluster running on PVE 6.4. Then I started the upgrade to Octopus as described in https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus. Since we did an online upgrade, we stopped the autoconvert with ceph config set osd...
  7. B

    How to adjust the qcow2 cluster size of existing images to drastically improve I/O performance?

    Hi, I've just read this interesting blog article on jrs-s.net (he has a lot of great articles on ZFS, by the way) on the topic of qcow2 vs. zvol I/O performance. The interesting part was that fine-tuning the qcow2 cluster_size to match the record size of the underlying ZFS dataset could...
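    The qcow2 cluster size is fixed at image creation, so an existing image has to be converted into a new file to change it. A minimal sketch, assuming a ZFS dataset with a 64K recordsize and hypothetical paths and image names:

    ```shell
    # Check the recordsize of the dataset holding the images
    # (dataset name "tank/vmdata" is a placeholder).
    zfs get recordsize tank/vmdata

    # Convert the image into a new qcow2 with a matching 64k cluster size;
    # -p shows progress, -o cluster_size must be a power of two (512..2M).
    qemu-img convert -p -O qcow2 -o cluster_size=64k \
        /tank/vmdata/vm-100-disk-0.qcow2 /tank/vmdata/vm-100-disk-0-64k.qcow2

    # Verify the new cluster size before swapping the image in.
    qemu-img info /tank/vmdata/vm-100-disk-0-64k.qcow2
    ```

    Stop the VM before converting, or convert a snapshot/copy, so the source image is not written to mid-conversion.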