performance issues

  1. M

    VM disk performance

    Hi all, I suspect this has been covered a million times, but I wanted to post my config to see if people can point me in the right direction. We have a couple of high disk I/O servers - NetXMS and another similar style tool - we are getting errors on the servers and we were told to review...
  2. X

    Ceph Cluster performance

    Hi all, I have a Ceph cluster with 3 HPE nodes, each with 10x 1TB SAS and 2x 1TB NVMe; config below. The replication and Ceph network is 10Gb, but performance is very low... In a VM I get (sequential) Read: 230MB/s, Write: 65MB/s. What can I do/check to tune my storage environment? (see the rados bench sketch after this list) # begin...
  3. N

    5800X Horrible Performance

    Hey, so I have a 5800X and I set up KVM VMs for 2 clients to host their game servers. I followed all the guides here, I have all the drivers installed, and the OS is Windows Server 2019 Desktop. Their current KVM settings are: cache=writeback, ballooning enabled, QEMU guest agent enabled, etc... I want both of...
  4. A

    Proxmox and VM performance are too slow, Linux VM taking 3-4 hours and Windows VM 7-8 hours to boot

    Hello, I installed a new setup on my server (unfortunately the old one crashed completely) and had to rebuild it from scratch. Capacity is a 2TB SATA HDD + 500GB SSD, with 64GB RAM. I followed the steps as per the documentation and selected ext4 for my installation. Surprisingly, once I loaded the VMs, it...
  5. F

    [SOLVED] Samsung 870 QVO 1TB Terrible Write Performance

    Hi everyone, I got 2x Samsung 870 QVO 1TB. I know, I know, they are not the best, they can be really slow, and the lifespan isn't great. My aim was to replace my current 8x 146GB HDD setup, as I wanted to reduce power consumption. I installed Proxmox on them with a ZFS mirror and the performance... (see the fsync/fio sketch after this list)
  6. D

    Consumer SSDs in RAID 1 with very poor write performance

    Hello, I have the following problem. We have a "budget" node with 3x Samsung 860 QVO 1TB. One disk is used for the system itself, and a software RAID 1 is used for the VM storage. /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 ...
  7. A

    Problem with Ceph storage (among other things, cloning a VM is not possible)

    Hello, we have installed a Proxmox cluster with 3 nodes. Each machine has 128 GB RAM and 2 SSDs for the Proxmox system. In addition, each machine has 2 physical CPUs with at least 6 cores each. For the Ceph system, each server has 2 SSDs of the...
  8. maxprox

    [SOLVED] Performance problems, NIC settings to increase performance

    Hello, EDIT: if anyone is interested in this, I would jump straight to post #16 ;-) Problem: A piece of medical software running on a freshly set up Windows 2019 Std. server is causing performance problems. The software is launched in the morning via a shortcut on the Windows 10 clients and...
  9. J

    CEPH 4k random read/write QD1 performance problem

    Hi, after days of searching and trying different things I am looking for some advice on how to solve this problem :). The performance issue persists in the VMs on Proxmox. They run some old software that requires good random 4K read/write performance, and this therefore has a... (see the rbd bench sketch after this list)
  10. S

    The guest operating system kills the performance of the disk subsystem.

    Good afternoon, I'm asking for help. Installation: Proxmox 6, Xeon 2665, 128GB REG ECC RAM, ZFS, 16GB ARC, 4x 600GB enterprise 10k SAS HDDs in RAIDZ-1 (see the zpool iostat sketch after this list). root@virt:/mnt/zfs/RZHDD/vmstore# zpool status -v pool: RZHDD state: ONLINE scan: none requested config: NAME...
  11. Proxygen

    [SOLVED] PVESTATD High CPU Usage During MDADM Sync

    Syncing a newly created mdadm RAID 1 (WD Red disks, 1.6T partition size, default sync speed limits, internal bitmap enabled) gets the CPU load into the 2 to 2.5 range. The machine gets sluggish (despite the Xeon E-2136, 6 cores, 12 threads and 32GB RAM). Stopping pvestatd lowers the load to ~1 (see the resync speed-limit sketch after this list). There is...
  12. P

    Slow VM clone on ZFS

    For some reason, cloning a VM (on the same storage) takes really long all of a sudden. It used to take ~30 seconds or so to clone a ~16GB VM. Now, it suddenly takes >10 minutes. AFAIK nothing about the host was changed. Also, the cloning itself is still really fast (at least according to the job...
  13. U

    lousy performance

    Hi forum, just discovered Proxmox VE and installed it on an IBM System x3850 X5. I have 4x 8-core 2.0GHz Xeon CPUs and 512GB RAM in it. I've installed Proxmox on a RAID 1 with a ZFS filesystem. I have now installed 3 different VMs (3x MySQL DB for a test environment), BUT I'm experiencing a lack of...
  14. Z

    Poor ZFS performance

    Hello, it's a never-ending story with my system and ZFS. I run a ProLiant MicroServer G8 with 16GB RAM and 3x 4TB in RAIDZ1, ashift=12, lz4. In the Windows guest, which I use as a file server, I can reach 130MB/s over SMB for the first 10s, then it drops to about 10-30MB/s...
  15. M

    Strange disk performance in Windows guest

    Hello everyone! I have a Proxmox home lab, and now I'm trying to choose a filesystem for my virtual machines. I will run Windows VMs and I need the best filesystem for that. I set up a Windows VM with this config (see the disk-option sketch after this list): agent: 1 balloon: 0 bootdisk: virtio0 cores: 4 cpu: host memory: 4096 name: Test-IOPS net0...
  16. R

    Upgrade 4.4 to 5.2 - IO performance problems

    Hello, since the upgrade from Proxmox 4.4 to 5.2 we have the problem that the IO delay during disk access is extremely high, and processes in the virtual machines even abort. We have a cluster with 4 hosts (2x E5-2670v3), each with 256 GB RAM and ZFS (4 striped mirrors) without SSD cache...
  17. Laurent Minne

    RDP performance

    Hello everyone, my testing config: - AMD Ryzen 1600X - Asus Prime AB 350 Plus - 4x 8GB Crucial 2400MHz - 4x Crucial MX500 250GB in RAIDZ1 (/root & VM images only) - 2x 4TB in RAIDZ1 (vzdump, backup & ISO storage) - Dual 1Gbps NIC. No general performance problem on the side of the Proxmox...
  18. C

    Ubuntu Xenial/Bionic kernel 4.13/4.15 virtio-scsi direct sync 4k write performance regression

    Hi Proxmoxers, just found out today that when using the Ubuntu Xenial kernel 4.13, direct synchronous write performance is 93% slower than on kernel 4.4. A kernel 4.13 direct sync 4KB write could only achieve 303KB/s, while kernel 4.4 could achieve 4.5MB/s (see the fio sketch after this list). The VM is using an ext4 filesystem and...
  19. S

    ZFS RAID-Z1 pool slow

    Hello, a few months ago I set up my first Proxmox node, which is mainly intended as a home server and partly for VMs. The specifications are as follows: CPU: Xeon E3-1240L v3 RAM: 4x 8GB DDR3 1600MHz SSD: 4x Samsung EVO 850 500GB Now, unfortunately, the following problem occurs...
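
A note on thread #2 (Ceph cluster performance): before tuning anything inside the guests, it usually helps to baseline the raw cluster with rados bench, so the VM and filesystem layers are out of the picture. A minimal sketch, assuming an RBD pool named "vm-pool" (a placeholder; substitute the actual pool name):

    # 60s write benchmark; keep the objects so they can be read back
    rados bench -p vm-pool 60 write --no-cleanup
    # sequential and random read benchmarks against those objects
    rados bench -p vm-pool 60 seq
    rados bench -p vm-pool 60 rand
    # remove the benchmark objects when done
    rados -p vm-pool cleanup

If the raw pool already tops out near the in-VM numbers, the bottleneck is the OSDs or the network rather than the VM configuration.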
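
A note on thread #5 (Samsung 870 QVO write performance): consumer QLC drives without power-loss protection typically show very low fsync rates, which a ZFS mirror hosting VMs hits constantly, and their sustained write speed collapses once the SLC cache is full. A rough way to see both effects on the host (paths and sizes are only examples):

    # fsync/s on the root filesystem (pveperf ships with Proxmox VE)
    pveperf /
    # sustained sequential write larger than the SLC cache, so the
    # post-cache write speed becomes visible
    fio --name=qvo-seq --directory=/rpool/data --size=50G --bs=1M \
        --rw=write --ioengine=psync --end_fsync=1

Remember to delete the test file afterwards.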
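
A note on thread #9 (Ceph 4K random QD1): rbd bench can measure small random writes against an image directly, which separates Ceph latency from anything the guest does. A sketch, assuming a throwaway image "bench-img" in pool "vm-pool" (both hypothetical names):

    rbd create vm-pool/bench-img --size 10G
    # 1 GiB of 4 KiB random writes with a single thread (queue depth 1)
    rbd bench --io-type write --io-size 4096 --io-threads 1 \
        --io-total 1G --io-pattern rand vm-pool/bench-img
    rbd rm vm-pool/bench-img

QD1 4K writes are dominated by network round trips plus OSD commit latency, so the resulting IOPS are usually far below what the raw SSDs can do.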
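
A note on thread #10 (guest OS killing the disk subsystem): a 4-disk RAIDZ-1 of 10k SAS drives delivers roughly the random IOPS of a single drive, so it is worth watching the pool while the guest is busy and checking the properties that matter for VM disks. A sketch using the pool name from the post (the dataset name is only a guess from the prompt path):

    # per-vdev throughput, refreshed every second
    zpool iostat -v RZHDD 1
    # fsync/s at the pool's mountpoint
    pveperf /mnt/zfs/RZHDD
    # properties that commonly matter for VM storage
    zfs get sync,compression,recordsize,volblocksize RZHDD/vmstore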
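
A note on thread #11 (pvestatd load during mdadm sync): the initial resync competes with everything else for disk time; the kernel exposes speed limits that can be lowered temporarily so the host stays responsive (values are in KB/s and only examples):

    # current limits
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    # cap the resync at roughly 50 MB/s for now (not persistent across reboot)
    sysctl -w dev.raid.speed_limit_max=50000
    # watch progress
    cat /proc/mdstat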
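
A note on thread #15 (disk performance in a Windows guest): besides the filesystem choice, the virtual disk options usually dominate in-guest results. A sketch of one variant to compare, assuming VMID 101 on a storage called local-zfs (both hypothetical; the post uses a virtio0 boot disk):

    # keep the existing volume, change only the options
    qm set 101 --virtio0 local-zfs:vm-101-disk-0,cache=none,iothread=1
    # then re-run the same benchmark inside the guest and compare
    # against the cache=writeback result

The iothread setting, the cache mode, and the VirtIO driver version inside Windows often affect 4K results as much as the host filesystem choice does.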
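
A note on thread #18 (direct sync 4K write regression): a common way to reproduce this inside the guest is a QD1, 4 KiB, O_DIRECT + O_SYNC write with fio, run once per kernel (file path and size are only examples):

    fio --name=sync4k --filename=/root/fio.test --size=1G --bs=4k \
        --rw=write --ioengine=psync --direct=1 --sync=1 \
        --runtime=60 --time_based

Running the same command on kernel 4.4 and 4.13 isolates the guest kernel from host-side changes.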
