Performance issues

  1. Desktop performance with CPU type "host"

    Hello everyone, has anyone else had the problem of poor desktop performance in Windows when the VM's CPU type is set to "host"? The CPU is an Intel Xeon W-2245. When I set the VM's CPU to kvm64, performance is normal.
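For reference, the CPU model comparison described above can be reproduced from the Proxmox host with the `qm` CLI (a sketch; VMID 100 is a placeholder):

```shell
# Compare desktop responsiveness under the two CPU models
# (100 is a placeholder VMID; restart the VM to apply the change).
qm set 100 --cpu host    # pass the host CPU model through to the guest
qm set 100 --cpu kvm64   # fall back to the generic kvm64 model
```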
  2. 3 node CEPH hyperconverged cluster fragmentation.

    Proxmox = 6.4-8 Ceph = 15.2.13 Nodes = 3 Network = 2x100G / node Disks = NVMe Samsung PM-1733 MZWLJ3T8HBLS 4TB, NVMe Samsung PM-1733 MZWLJ1T9HBJR 2TB CPU = EPYC 7252 Ceph pools = 2 separate pools, one per disk type, each disk split into 2 OSDs Replica = 3 VMs don't do many...
  3. Recommendations: Raid Cards HBA's and Networking

    This has probably been gone over a million times already, but it's time for another round. I did a quick search for the title but didn't find anything in the first 5 threads that makes much sense for my application. I currently have two HP DL380 G7 machines with X5675s, 128 GB RAM, and consumer drive...
  4. [SOLVED] VMs Linux and Windows very slow and laggy

    Hi guys, I have installed Proxmox VE on the following hardware: AMD B550 Gaming Mini-ITX, AMD Ryzen 5700G, 32 GB Corsair Vengeance PRO RAM, Samsung 980 SSD as OS storage, Crucial 1TB SSD as VM storage. Following is the qm config ID output for both Linux (ID: 100) and Windows (ID: 101): agent: 0 balloon...
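The `qm config` command mentioned in the post dumps a guest's settings; a quick way to compare the two VMs (IDs 100 and 101 as in the post) is a plain diff:

```shell
# Print each guest's configuration and compare them side by side
qm config 100 > linux.conf     # Linux guest
qm config 101 > windows.conf   # Windows guest
diff linux.conf windows.conf
```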
  5. [Proxmox 7, Windows Guest] Is this kind of HAL usage normal, if not what can we do?

    The organization I work for is currently comparing VM overhead between Proxmox 7 and Hyper-V, after reports that the Proxmox-hosted terminal server had slowdowns. While I did not manage to catch the exact cause of the slowdowns in my analysis, I did notice that the reported...
  6. High Disk Latency in Windows Guest

    I'm running an HP ProLiant DL380 G9 with fully updated firmware and PVE 7, and for some reason, no matter what settings I use, I get a very high disk sec/transfer, which is an indicator of disk latency. The guest, Windows Server 2019, took half an hour to install in Proxmox, and took ages to...
  7. PVE Backup to NFS Share (TrueNAS) has poor performance

    Hi, I'm new to Proxmox and started my first backup to an NFS share (TrueNAS server) over a 1G NIC. However, the backup takes a long time because network throughput mostly idles at a few KiB/s and only jumps to 117 MiB/s (max performance) for a few seconds at a time. My Proxmox server...
  8. Is the performance of this host ok or bad?

    Hello, I think we have a problem with the disk performance of our Proxmox 7.0 server as a "Windows Server 2012 R2" guest feels sluggish and I guess the values from pveperf should be better. pveperf with running VM (just one on this node): CPU BOGOMIPS: 128001.60 REGEX/SECOND...
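The pveperf figures quoted above come from Proxmox's built-in host benchmark; pointing it at the VM storage path measures the FSYNCS/SECOND value that most affects guest disk latency (the path below is an assumption; adjust to where the VM disks live):

```shell
# Run Proxmox's host benchmark against the VM storage
# (/var/lib/vz is the default local storage path).
pveperf /var/lib/vz
```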
  9. Poor disk performance

    Hi, I have a Dell R910: 2 x Intel E7-4860 (10 cores / 20 threads each), 256 GB RAM, Dell H200 HBA controller (factory firmware), 3 x Samsung 4TB SSD. I am getting abysmal disk performance, struggling to get past 70-100 MB/s. The disks are capable of 500+, so I would have thought 300+ MB/s should be achievable. No...
  10. Unresponsive vm during high disk usage

    Hi, I've set up a debian VM with the following config: agent: 1 balloon: 2048 boot: order=scsi0;net0 cores: 8 memory: 16384 name: Docker net0: virtio=16:D0:54:56:5D:02,bridge=vmbr0 net1: virtio=02:75:60:E0:2E:17,bridge=vmbr1 numa: 0 onboot: 1 ostype: l26 scsi0...
  11. Slow Performance on Server 2016

    I'm having severe slowness on a VM with Server 2016. I'm not familiar with Proxmox so any advice would help. The VM is running SQL & RemoteApp. There are about 30 users RDPing into this server and I've thrown all the resources I can at it. The server and the programs on the server just crawl...
  12. Linstor performance/scaling problem

    Good day everyone, we are currently running Proxmox 6.4-4 together with Linstor 1.7.1 (DRBD 9.0.28) as distributed block storage for VM disk images across 7 PVE nodes. We have noticed that as the number of resources and PVE nodes increases, Linstor's performance drops considerably...
  13. Pretty bad performance, not blaming PVE but my configuration...

    Hi there, Disclaimer: I have never had any interaction with a server virtualization management platform and I just wanted a cool software I could use to run multiple VMs, I know nothing about what a ZFS or LVM drive is, I/O delay... etc. I just have experience working with host OS like Ubuntu...
  14. Ceph speed as a pool and in a KVM guest

    Hello! We have a Ceph cluster with 2 pools (SSDs and NVMes). In a rados benchmark test the NVMe pool is, as expected, much faster than the SSD pool. NVMe pool: write: BW 900 MB/s, IOPS 220; read: BW 1400 MB/s, IOPS 350. SSD pool: write: BW 190 MB/s, IOPS 50...
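A rados benchmark like the one described can be reproduced per pool from any Ceph node (the pool name "nvme" is a placeholder):

```shell
# 60-second write benchmark; --no-cleanup keeps the objects
# so a sequential read benchmark can follow.
rados bench -p nvme 60 write --no-cleanup
rados bench -p nvme 60 seq
# Remove the benchmark objects afterwards
rados -p nvme cleanup
```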
  15. [SOLVED] Low performance in Cyberpunk 2077 and Photoshop (fonts only)

    Hi! I have some performance issues with Cyberpunk 2077: it gets only 18-30 FPS at 10% CPU/GPU load with a GTX 1080 and 8 Ryzen 3700X cores. Photoshop font transforms are also very choppy. On bare metal everything works perfectly; any idea what the problem might be :rolleyes:? Configs...
  16. ZFS Performance Questions on HDDs

    Hello, I'm running a Server with 2 x 8 TB HDD and 1 x 240GB SSD Drive with the following config. # zpool status pool: rpool state: ONLINE scan: scrub repaired 0B in 0 days 22:10:56 with 0 errors on Sun Apr 11 22:34:58 2021 config: NAME STATE...
  17. CEPH Performance issues after upgrade from 15.2.8 to 15.2.10

    Hello, some information about the system: it's a hyperconverged cluster of 5 Supermicro AS-1114S-WN10RT servers. 4 of the servers have: CPU: 128 x AMD EPYC 7702P 64-Core Processor (1 Socket), RAM: 512 GB. 1 of the servers has: 64 x AMD EPYC 7502P 32-Core Processor (1 Socket), RAM: 256 GB. Network: All...
  18. VM disk performance

    Hi all, I suspect this has been covered a million times, but I wanted to post my config to see if people can point me in the right direction. We have a couple of high disk I/O servers - NetXMS and another similar-style tool - we are getting errors on the servers and were told to review...
  19. Ceph Cluster performance

    Hi all, I have a Ceph cluster with 3 HPE nodes, each with 10 x 1TB SAS and 2 x 1TB NVMe; config below. The replication/Ceph network is 10Gb but performance is very low... in a VM I get (sequential): read: 230 MB/s, write: 65 MB/s. What can I do/check to tune my storage environment? # begin...
  20. 5800X Horrible Performance

    Hey, so I have a 5800X and set up KVM guests for 2 clients to host their game servers. I followed all the guides here, I have all the drivers, and the OS is Windows Server 2019 Desktop. Their current KVM settings are: cache=writeback, ballooning enabled, QEMU guest agent enabled, etc... I want both of...
