performance

  1. L

    Optimizing Guest CPU Performance

    Running a Win10 guest with successful GPU passthrough; performance on that front seems decent, but I can't say the same for the CPU. It's pretty disappointing. This is roughly the bare-metal Single Thread score the host's CPU should be near: CPU Test Suite Average Results for...
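A common first check for weak guest CPU benchmark scores is the emulated CPU type: the conservative default model hides many host CPU flags, while `host` passes the full model through. A minimal sketch, assuming a hypothetical VM ID of 101:

```shell
# Sketch (VM ID 101 is a placeholder): expose the host CPU model to the guest
# instead of the conservative default, then power-cycle the VM to apply it.
qm set 101 --cpu host
```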
  2. N

    [SOLVED] Single connections to VMs limited to 10 Mbps, but multi-threaded iperf gets the full bandwidth

    pveversion: # pveversion --verbose proxmox-ve: 7.4-1 (running kernel: 5.15.107-1-pve) pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a) pve-kernel-5.15: 7.4-2 pve-kernel-5.15.107-1-pve: 5.15.107-1 pve-kernel-5.15.104-1-pve: 5.15.104-2 ceph-fuse: 14.2.21-1 corosync: 3.1.7-pve1 criu...
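The comparison described in the title (one stream throttled, many streams fine) can be reproduced with iperf3's parallel-stream flag. A sketch, assuming an iperf3 server is already running in the guest and `vm.example` stands in for its address:

```shell
# Inside the guest, start a server first:
#   iperf3 -s
# From another host (vm.example is a placeholder for the guest's address):
iperf3 -c vm.example -t 10        # single TCP stream
iperf3 -c vm.example -t 10 -P 8   # 8 parallel streams via -P
```

If only the single-stream case is slow, the bottleneck is usually per-connection (offload settings, a shaper, or a misbehaving NIC driver) rather than raw link capacity.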
  3. bbgeek17

    [TUTORIAL] Proxmox: iSCSI and NVMe/TCP shared storage comparison

    Hello Everyone. We received excellent feedback from the previous storage performance investigations, particularly the technotes on optimal disk configuration settings (i.e., aio native, io_uring, and iothreads) and the deep dive into optimizing guest storage latency. Several community members...
  4. A

    Webserver VMs with HTTPS loading incredibly slow

    I'm moving from Windows Hyper-V to Proxmox and was excited, but I'm noticing that the websites hosted by these VMs (running Ubuntu Jammy) are loading incredibly slow. So much so that often on the first page load, I get an error 522 from Cloudflare (the sites are proxied through them) and then if...
  5. B

    VM with physical disk attached. Much slower than physical machine...

    Hi, for a migration I created a VM as described in https://pve.proxmox.com/wiki/Windows_10_guest_best_practices. Then I attached a physical SSD disk and configured passthrough according to https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM): qm set 114 -scsi2...
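The `qm set` command in the excerpt is cut off; following the linked wiki page, the general shape uses a stable `/dev/disk/by-id/` path. A sketch with VM ID 114 from the post and a placeholder disk ID:

```shell
# List stable device identifiers to find the SSD:
ls -l /dev/disk/by-id/
# Attach it to VM 114 as scsi2 (the by-id path below is a placeholder):
qm set 114 -scsi2 /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL
```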
  6. bbgeek17

    [TUTORIAL] Low latency storage optimizations for Proxmox, KVM & QEMU

    Several customers have asked us how to get the best possible storage latency out of Proxmox and QEMU (without sacrificing consistency or durability). Typically, the goal is to maximize database performance and improve benchmark results when moving from VMware to Proxmox. In these cases...
  7. A

    RadosGW - S3 - Performance Expectations

    I have a relatively small nvme Ceph cluster running on a dedicated 10gb network. RBD and CephFS performance seems to be pretty good at around 500MB per second in various synthetic benchmarks. Performance uploading a 16gb test file to S3 (RadosGW) from a VM is terrible at only 25MB or so...
  8. N

    [SOLVED] slow performance caused by ZFS trimming

    This issue has been solved. This thread is for anybody having slow I/O performance and searching for keywords. The cause might be ZFS trimming the rpool. I’m running proxmox version 7.3-1 on a Supermicro A2SDi-8C+-HLN4F with HP SSD EX920 1TB for the rpool. My proxmox node was unresponsive. VMs...
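For anyone landing here via the keywords: trim activity on the rpool can be inspected and controlled with `zpool`. A sketch, assuming the pool is named `rpool` as in the post:

```shell
# Show pool health including trim progress (-t adds trim status per vdev):
zpool status -t rpool
# Start a manual trim at a quiet time instead of letting a scheduled one
# collide with VM I/O:
zpool trim rpool
# Alternatively, trim continuously in small increments:
zpool set autotrim=on rpool
```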
  9. S

    How to make a best use of an SSD drive in VMs?

    On my proxmox machine I got two SSD drives natively connected to it. First disk is for proxmox OS, the other I want to use in VMs. Basically, I want to use half of this second SSD drive in one VM, and other half in another, preferably with auto-adjusting sizes (in case one VM needs more than a...
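The "auto-adjusting sizes" requirement maps well to LVM-thin storage, where both VMs draw disks from one over-committable pool. A sketch, assuming the second SSD is `/dev/sdb` and the names `vmdata`/`data`/`vmthin` are placeholders:

```shell
# WARNING: this destroys existing data on /dev/sdb.
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate -l 100%FREE --thinpool data vmdata
# Register the pool as Proxmox storage so VM disks are thin-provisioned:
pvesm add lvmthin vmthin --vgname vmdata --thinpool data
```

Each VM's disk then only consumes real space as it is written, so neither VM needs a fixed half of the drive up front.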
  10. CH.illig

    [SOLVED] Performance optimization / not getting the expected performance.

    I run two clusters: one great new one where everything works as expected, and one cheaper setup that, in my opinion, delivers far too poor performance inside the VM. First, the hardware and config in use: we had suspected the Synology NAS of...
  11. G

    Second backup server best practices?

    Hi all, I'm currently running one PBS for my cluster, which stores data on an NFS share backed by an enterprise-grade QNAP storage. Everything went fine until about two weeks ago, when I noticed some scheduled backups were starting to fail sometimes. Also, I see that when I browse VM backups...
  12. N

    Performance Ceph-Resources, best practice

    Hello everyone! Setup: 9 nodes, 53 LXC containers, a Ceph cluster of HDDs with SSDs as cache; some machines have a local ZFS (SSDs) as cache. In principle the performance is good and we can't complain. Unfortunately, follow-up backups are quite slow. Example: LXC foo: 100 GB/400 files...
  13. bbgeek17

    [TUTORIAL] Proxmox VE vs VMware ESXi performance comparison

    Many discussions have compared Proxmox and VMware from a feature perspective, but almost none compare performance. We tested PVE 7.2 (kernel=5.15.53-1-pve) and VMware ESXi 7.0 (update 3c) with a 32 VM workload to see which performs better under load with storage-heavy applications. The results...
  14. aPollO

    VirtIO Block still first choice for disk performance?

    Hi guys, is VirtIO Block (not VirtIO SCSI) still the best choice for performance? I'm talking about a 4-node cluster with around 200 guests on a Ceph storage. Cheers,
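For reference, the Proxmox wiki's guest best-practice pages nowadays generally recommend VirtIO SCSI over VirtIO Block, often the `virtio-scsi-single` controller with an IO thread per disk. Switching an existing VM would look roughly like this (the VM ID 100 and storage/volume names are placeholders):

```shell
# Sketch: one controller per disk plus a dedicated IO thread.
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 cephpool:vm-100-disk-0,iothread=1
```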
  15. O

    Low disk subsystem performance

    I've got four HP DL360 G9 servers, which I intend to use in a hyper-converged cluster setup with CEPH. All of them are of the same hardware configuration: two sockets with Intel(R) Xeon(R) CPU E5-2699 v4 processors @ 2.20GHz (88 cores per server in total), 768GiB of registered DDR4 RAM...
  16. L

    How to fence off ceph monitor processes?

    In the continuous process of learning about running a Proxmox environment with Ceph, I came across a note regarding Ceph performance: "... if running in shared environments, fence off monitor processes." Can someone explain what is meant by this and how one achieves it? Thanks!
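"Fencing off" here usually means reserving CPU for the ceph-mon daemons so guest load cannot starve them. With systemd this can be sketched via cgroup resource-control properties (the unit instance name and core numbers below are assumptions for illustration):

```shell
# Pin the monitor on node "pve1" to cores 0-1 and give it extra CPU weight:
systemctl set-property ceph-mon@pve1.service AllowedCPUs=0-1 CPUWeight=500
# Then keep guests off those cores, e.g. with Proxmox VM CPU affinity:
#   qm set <vmid> --affinity 2-15
```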
  17. D

    [SOLVED] Can Ryzen 7 3700U run smooth on Proxmox?

    Saw a mini PC with a Ryzen 7 3700U. I know this processor is outdated, but could this device combine small power consumption with decent processing power? My main purpose is to install OpenWRT on Proxmox as an Internet router, and maybe to learn Docker, containers, and Linux, using the virtual machine...
  18. S

    Poor performance over NFS

    Hello all, we have the following Proxmox setup: one storage server, HP DL380p G8, Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz, 32 GB RAM. The storage pool is assembled from 8x Samsung SSD 870 QVO 1TB, arranged into four mirrors of 2 drives each, with the mirrors then striped...
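For setups like this it helps to separate pool speed from NFS-transport speed by running the same fio job locally on the storage server and again over the NFS mount. A sketch; the two file paths are placeholders for a local pool path and a client-side mount:

```shell
# Local pool baseline on the storage server (path is a placeholder):
fio --name=seqwrite --rw=write --bs=1M --size=4G --end_fsync=1 \
    --filename=/tank/fio-testfile
# Same job from a client, over the NFS mount (path is a placeholder):
fio --name=seqwrite --rw=write --bs=1M --size=4G --end_fsync=1 \
    --filename=/mnt/pve/nfs-share/fio-testfile
```

If the two results diverge sharply, the problem is in the network/NFS layer rather than the QVO pool itself (though QLC drives like the 870 QVO also slow down once their SLC cache fills).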
  19. D

    Weird performance issues of two disks in Mirror mode - beginner question

    Machine is a reused gaming PC: R5 1600, 16 GB 2133 RAM, Gigabyte AB350 Gaming 3, Proxmox installed on a 120 GB SATA SSD. The disks in question are two WD Red Plus 4TB (yes, CMR variants, wd40efzx) running in mirror mode. https://i.imgur.com/zlCw5HF.png The drives are assigned to a Ubuntu server VM whose...
  20. O

    [SOLVED] High RAM consumption

    Hello, I'm not sure if this is normal. PVE is reporting almost 50% RAM usage (out of 56 GiB) while all I have running are 2 LXC containers which combined have under 1 GB of RAM allocated. I do have a Windows 10 VM set up, but it is stopped and its RAM is set from 8 to 24 GB. When I use top to see...
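When a node uses ZFS, the "missing" RAM is frequently the ARC cache, which the summary graph counts as used even though it is released under memory pressure. A sketch for checking and capping it (the 8 GiB cap is an example value, not a recommendation):

```shell
# Current ARC size in bytes:
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
# Cap ARC at 8 GiB until reboot (persist via /etc/modprobe.d/zfs.conf):
echo $((8 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max
```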