performance

  1. J

    Spotty Disk Performance with NFS Share

    Running a Proxmox VE cluster with multiple Ubuntu VMs whose disks are hosted on a TrueNAS NFS share. Occasionally one or more of the VMs experiences drastically reduced disk performance, and I'm having a hard time pinpointing the cause. pve-manager/7.4-3/9002ab8a (running kernel...
  2. R

    proxmox NUMA static configuration support

    I am struggling to fully understand everything about NUMA; however, I feel like I've got a decent understanding of how Proxmox handles it, as I've run many configuration tests over the last few days trying to solve my performance issues. I have an AMD EPYC 7551P (Zen 1 architecture, 7001...
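
    As a rough sketch of the kind of per-VM NUMA configuration discussed in threads like this one (the VM ID, core counts and memory sizes below are placeholders, not values from the post):

    # Expose a two-socket NUMA topology to guest 100
    qm set 100 --numa 1 --sockets 2 --cores 16
    # Optionally bind each guest node to a host node in /etc/pve/qemu-server/100.conf
    # (memory per node must add up to the VM's total memory):
    #   numa0: cpus=0-15,hostnodes=0,memory=32768,policy=bind
    #   numa1: cpus=16-31,hostnodes=1,memory=32768,policy=bind
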
  3. T

    Direct USB passthrough is slow, but using SPICE is fast

    Hello, I want to pass a USB device to a VM (namely a USB TV tuner for a TV server). When I do it directly by adding a USB device to the VM with the vendor/device ID, it seems as if the device is sometimes slow to respond. However, when I add a SPICE USB channel, change the display to SPICE...
  4. U

    ESXi->Proxmox, 1G->10G Planning

    Hi, I want to migrate from ESXi (Essentials) to Proxmox, and while I'm at it I'm thinking about switching to 10G for the main cluster. Reasons: - replication/migration for Proxmox goes faster (only needed for 2-4 VMs right now) - I use CARP for failover a lot, so there are machines running on both...
  5. L

    Optimizing Guest CPU Performance

    Running a Win10 guest with successful GPU passthrough; performance on that front seems decent, but I can't say the same for the CPU. Pretty disappointing. This is roughly what the bare-metal results for the host's CPU should be for the single-thread score: CPU Test Suite Average Results for...
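
    A common first check in threads like this is whether the guest is still using the default emulated CPU model rather than the host's; a minimal sketch (VM ID 101 is a placeholder):

    # Pass the host CPU model through instead of the default kvm64
    qm set 101 --cpu host
    # Verify inside the guest, e.g. with: lscpu | grep "Model name"
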
  6. N

    [SOLVED] Single connections to VMs limited to 10mbps, but multi-threaded iperf gets the full bandwidth

    pveversion: # pveversion --verbose proxmox-ve: 7.4-1 (running kernel: 5.15.107-1-pve) pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a) pve-kernel-5.15: 7.4-2 pve-kernel-5.15.107-1-pve: 5.15.107-1 pve-kernel-5.15.104-1-pve: 5.15.104-2 ceph-fuse: 14.2.21-1 corosync: 3.1.7-pve1 criu...
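
    The single-stream vs. multi-stream comparison described in the title is typically reproduced with iperf3 along these lines (the address is a placeholder):

    # One TCP stream to the VM
    iperf3 -c 192.0.2.10 -t 30
    # Eight parallel streams; if only this run reaches line rate, the limit is
    # per-connection (window, offload or MTU related) rather than total bandwidth
    iperf3 -c 192.0.2.10 -t 30 -P 8
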
  7. bbgeek17

    [TUTORIAL] Proxmox: iSCSI and NVMe/TCP shared storage comparison

    Hello Everyone. We received excellent feedback from the previous storage performance investigations, particularly the technotes on optimal disk configuration settings (i.e., aio native, io_uring, and iothreads) and the deep dive into optimizing guest storage latency. Several community members...
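
    For context, the disk settings mentioned there (aio mode and iothreads) are set per virtual disk; a minimal sketch of what they look like in a VM config (VM ID and storage name are placeholders):

    # /etc/pve/qemu-server/100.conf
    scsihw: virtio-scsi-single
    scsi0: tank:vm-100-disk-0,cache=none,aio=native,iothread=1
    # io_uring variant for comparison:
    # scsi0: tank:vm-100-disk-0,cache=none,aio=io_uring,iothread=1
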
  8. A

    Webserver VMs with HTTPS loading incredibly slow

    I'm moving from Windows Hyper-V to Proxmox and was excited, but I'm noticing that the websites hosted by these VMs (running Ubuntu Jammy) are loading incredibly slowly. So much so that on the first page load I often get a 522 error from Cloudflare (the sites are proxied through them), and then if...
  9. B

    VM with physical disk attached. Much slower than physical machine...

    Hi, for a migration I created a VM as described in https://pve.proxmox.com/wiki/Windows_10_guest_best_practices and then attached a physical SSD and configured passthrough according to https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM) with qm set 114 -scsi2...
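
    The wiki method referenced above attaches the disk by its stable /dev/disk/by-id path; a sketch of the steps (the serial below is a placeholder, not the one from the post):

    # Find the stable identifier of the SSD
    ls -l /dev/disk/by-id/
    # Attach it to VM 114 as scsi2
    qm set 114 -scsi2 /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL
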
  10. bbgeek17

    [TUTORIAL] Low latency storage optimizations for Proxmox, KVM & QEMU

    Several customers have asked us how to get the best possible storage latency out of Proxmox and QEMU (without sacrificing consistency or durability). Typically, the goal is to maximize database performance and improve benchmark results when moving from VMware to Proxmox. In these cases...
  11. A

    RadosGW - S3 - Performance Expectations

    I have a relatively small NVMe Ceph cluster running on a dedicated 10 Gb network. RBD and CephFS performance seems pretty good, at around 500 MB per second in various synthetic benchmarks. Performance uploading a 16 GB test file to S3 (RadosGW) from a VM is terrible, at only 25 MB or so...
  12. N

    [SOLVED] slow performance caused by ZFS trimming

    This issue has been solved. This thread is for anybody having slow I/O performance and searching for keywords. The cause might be ZFS trimming the rpool. I’m running proxmox version 7.3-1 on a Supermicro A2SDi-8C+-HLN4F with HP SSD EX920 1TB for the rpool. My proxmox node was unresponsive. VMs...
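
    For anyone landing here via those keywords, a quick way to check whether a trim is the culprit and to control it (pool name rpool as in the post):

    # The -t flag adds TRIM status to the pool report
    zpool status -t rpool
    # Check or disable automatic trimming, then trim manually at a quiet time
    zpool get autotrim rpool
    zpool set autotrim=off rpool
    zpool trim rpool
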
  13. S

    How to make a best use of an SSD drive in VMs?

    On my Proxmox machine I have two SSDs natively connected. The first disk is for the Proxmox OS; the other I want to use in VMs. Basically, I want to use half of this second SSD in one VM and the other half in another, preferably with auto-adjusting sizes (in case one VM needs more than a...
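
    One way to get the "auto-adjusting sizes" behaviour asked about here is to put the second SSD into an LVM-thin pool, so both VMs draw from the same space on demand; a sketch with placeholder device name, pool size and storage name:

    pvcreate /dev/sdb
    vgcreate ssd2 /dev/sdb
    lvcreate -L 400G --thinpool data ssd2
    # Register the pool as Proxmox storage; disks created on it are thin-provisioned
    pvesm add lvmthin ssd2-thin --vgname ssd2 --thinpool data --content images,rootdir
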
  14. CH.illig

    [SOLVED] Performance optimization / not the expected performance

    I run 2 clusters: one great new one where everything works as expected, and one cheaper setup that, in my opinion, delivers much too poor performance inside the VM. First, the hardware and config in use: we initially suspected the Synology NAS of...
  15. G

    Second backup server best practices?

    Hi all, I'm currently running one PBS for my cluster, which stores data on an NFS share backed by an enterprise-grade QNAP storage. Everything went fine until about two weeks ago, when I noticed some scheduled backups were starting to fail sometimes. Also, I see that when I browse VM backups...
  16. N

    Performance Ceph-Resources, best practice

    Hello everyone! Setup: 9 nodes, 53 LXC containers, a Ceph cluster of HDDs with SSDs as cache; some machines have a local ZFS (SSDs) as cache. In principle the performance is good and we can't complain. Unfortunately, though, follow-up backups are quite slow. Example: LXC foo: 100 GB/400 files...
  17. bbgeek17

    [TUTORIAL] Proxmox VE vs VMware ESXi performance comparison

    Many discussions have compared Proxmox and VMware from a feature perspective, but almost none compare performance. We tested PVE 7.2 (kernel=5.15.53-1-pve) and VMware ESXi 7.0 (update 3c) with a 32 VM workload to see which performs better under load with storage-heavy applications. The results...
  18. aPollO

    VirtIO Block still first choice for disk performance?

    Hi guys, is VirtIO Block (not VirtIO SCSI) still the best choice for performance? I'm talking about a 4-node cluster with around 200 guests on Ceph storage. Cheers,
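
    For reference, the two attachment styles being compared look like this in a VM config (storage and VM names are placeholders):

    # VirtIO Block
    virtio0: cephpool:vm-200-disk-0,cache=none
    # VirtIO SCSI with a dedicated I/O thread per disk
    scsihw: virtio-scsi-single
    scsi0: cephpool:vm-200-disk-0,cache=none,iothread=1
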
  19. O

    Low disk subsystem performance

    I've got four HP DL360 G9 servers, which I intend to use in a hyper-converged cluster setup with CEPH. All of them are of the same hardware configuration: two sockets with Intel(R) Xeon(R) CPU E5-2699 v4 processors @ 2.20GHz (88 cores per server in total), 768GiB of registered DDR4 RAM...
  20. L

    How to fence off ceph monitor processes?

    In the continuous process of learning about running a Proxmox environment with Ceph, I came across a note regarding Ceph performance: "... if running in shared environments, fence off monitor processes." Can someone explain what is meant by this and how one achieves it? Thanks!
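
    One common reading of "fence off monitor processes" is giving the ceph-mon daemon its own CPU cores via a systemd override; a sketch with a placeholder monitor ID (pve1) and core list:

    # /etc/systemd/system/ceph-mon@pve1.service.d/override.conf
    [Service]
    CPUAffinity=0 1

    # then apply it:
    systemctl daemon-reload
    systemctl restart ceph-mon@pve1
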
