performance

  1. [SOLVED] PVESTATD High CPU Usage During MDADM Sync

    Syncing a newly created mdadm RAID 1 (WD Red disks, 1.6 TB partition size, default sync speed limits, internal bitmap enabled) pushes the CPU load into the 2 to 2.5 range and the machine gets sluggish (despite the Xeon E-2136, 6 cores, 12 threads, and 32 GB RAM). Stopping pvestatd lowers the load to ~1. There is...
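
    While a rebuild like this is running, a common workaround is to cap the md resync rate so it cannot starve other I/O. A minimal sketch, assuming you will trade a longer rebuild for a responsive host (the KB/s values are examples, not from the thread):

      # Check current resync progress and speed
      cat /proc/mdstat
      # Temporarily cap the resync rate (values in KB/s; resets on reboot)
      sysctl -w dev.raid.speed_limit_max=50000
      sysctl -w dev.raid.speed_limit_min=10000
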
  2. Improve VM guest disk performance (Ceph, 10 GbE, QEMU, VirtIO)

    Hello @all, we are running a Proxmox cluster with five nodes. Three of them are used for Ceph, providing 2 pools, one with HDDs, the other with SSDs. The two other nodes are used for virtualization with QEMU. We have redundant 10 GbE storage networks and redundant 10 GbE Ceph networks...
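
    For RBD-backed disks, the knobs most often discussed in threads like this are the SCSI controller type, a dedicated iothread per disk, and the cache mode. A minimal sketch, assuming a VM with ID 100 and a storage named ceph-ssd (both placeholders):

      # One iothread per disk via the single-queue controller, writeback cache on the RBD volume
      qm set 100 --scsihw virtio-scsi-single
      qm set 100 --scsi0 ceph-ssd:vm-100-disk-0,cache=writeback,iothread=1
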
  3. Any performance difference: Node Local storage vs Shared storage

    I am new here, so please forgive any forum faux pas and let me know so I don't keep doing it :-) Also, I originally come from a Windows Hyper-V background, so please feel free to correct terminology mistakes. I am setting up Proxmox on a single physical server with 3 RAID arrays (it has a HW RAID...
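
    The most direct way to answer a question like this is to run the same benchmark against each storage type and compare. A minimal sketch with fio, assuming the test file path points at each backend in turn (path and job parameters are illustrative):

      # 4k random read test; repeat with --filename on each storage and compare IOPS
      fio --name=randread --filename=/mnt/teststorage/fio.bin --size=4G \
          --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
          --runtime=60 --time_based
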
  4. Performance issues with OpenMediaVault VM (ZFS/LUKS...)

    Hi, for a home server/NAS I'm using the latest versions of Proxmox (5.4) and OMV (4.1.22-1) on recent hardware (Core i3-8100, 16 GB of RAM, installed on an SSD...). I have only one 8 TB hard drive with no RAID configuration for my data storage. I use my previous server (Intel Atom J1900, 8 GB of...
  5. Offline Migration extremely slow

    Hey, I noticed a huge issue. When I try to migrate a VM to a different node, I get extremely slow transfer rates. This is unexpected, since I use a dedicated Gigabit network for migration (which is unused except for migrations). The unsecure flag is set as well. Have a look at this migration log...
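
    One thing worth double-checking in a case like this is that migration traffic is actually pinned to the dedicated NIC. A minimal sketch of the relevant datacenter.cfg entry, assuming 192.168.100.0/24 is the dedicated migration subnet (a placeholder):

      # /etc/pve/datacenter.cfg
      # Send migration traffic over the dedicated network without SSH encryption
      migration: type=insecure,network=192.168.100.0/24
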
  6. HW RAID performance query: LSI 3008

    Hi, I wonder if anyone has experience and can comment. I've just spent some time reviewing a pair of Lenovo servers which have this HW RAID controller: 2 x identical nodes in a small Proxmox cluster, Proxmox 5.latest. There is no problem with the controller being recognized and...
  7. How to use virtio-blk-data-plane in Proxmox instead of VirtIO-SCSI?

    Currently with VirtIO-SCSI (VirtIO-SCSI Single with threads), the max IOPS is ~1.8k-2.3k. But virtio-blk-data-plane may reach over 100k IOPS: https://www.suse.com/media/white-paper/kvm_virtualized_io_performance.pdf. Can I switch to virtio-blk-data-plane instead of VirtIO-SCSI in Proxmox?
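
    In current QEMU, the old "data plane" feature corresponds to giving the disk its own iothread, and Proxmox exposes an iothread flag on virtio-blk disks. A minimal sketch, assuming VM ID 100 and a disk on local-lvm (both placeholders):

      # Attach the disk as virtio-blk with a dedicated iothread
      qm set 100 --virtio0 local-lvm:vm-100-disk-0,iothread=1
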
  8. CEPHFS rasize

    Hi, is there any way to change the read-ahead of CephFS? According to docs.ceph.com/docs/master/man/8/mount.ceph/ and lists.ceph.com/pipermail/ceph-users-ceph.com/2016-November/014553.html (could not place hyperlink - new user), this should improve reads of single large files. Right now...
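
    For the kernel client, read-ahead is controlled by the rasize mount option (in bytes). A minimal sketch, assuming a monitor at mon1 and a keyring secret file (both placeholders); the 64 MiB value is an example, not a recommendation:

      # Mount CephFS with a 64 MiB read-ahead window
      mount -t ceph mon1:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864
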
  9. Base template on SSD drive - how good is the performance improvement?

    Actually, the subject says it all... I have a server with ZFS where I have 2 x SSDs in a mirror, where the Proxmox installation, L2ARC, and ZIL reside. Then a bunch of 10k HDDs in a pool where the VMs are running. On a different forum thread I have read that a) using linked clones does not affect...
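
    For context: a linked clone only stores deltas against the base image, so reads of unchanged blocks land on whatever storage holds the template. A minimal sketch comparing both clone types, assuming a template with ID 9000 (all IDs and names are placeholders):

      # Linked clone (the default for templates on storage that supports it)
      qm clone 9000 101 --name linked-test
      # Full clone for comparison; copies all data to the target storage
      qm clone 9000 102 --name full-test --full
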
  10. [SOLVED] Low disk performance in Windows VM on Optane

    Hi, I've been doing some pre-production testing on my home server and ran into some kind of bottleneck with my storage performance, most notably on my Optane drive. When I install a Windows VM with the latest VirtIO drivers, the performance is kind of disappointing. I've tried switching over from...
  11. Performance Problem: Proxmox 5.2 with RAIDZ-1 10TB

    Hello, I've been looking for the reason for the slowness of my Proxmox server, but I have not been able to detect the problem. I have an HP DL160 server with 2 Intel Xeon processors, 32 GB DDR4 RAM, and 4 x 4 TB hard drives in RAIDZ-1 ZFS (10 TB storage in local-zfs). I have installed 3 VMs: 1 Ubuntu...
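
    A common first check in a setup like this is whether the ZFS ARC is fighting the VMs for the 32 GB of RAM. A minimal sketch that caps the ARC, assuming an 8 GiB limit is acceptable (the value is an example, not from the thread):

      # /etc/modprobe.d/zfs.conf
      # Cap the ARC at 8 GiB (value in bytes); then run update-initramfs -u and reboot
      options zfs zfs_arc_max=8589934592
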
  12. Optimization for fast dump/backup

    Hi all, I've got a couple of Proxmox servers (single, no HA) with SATA drives, and getting a backup is a real pain. It takes 50 minutes or so to get a 60-70 GB backup. I'm planning to migrate a few of these servers onto one new, much more powerful server with the following drives/setup: 4 x...
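
    On PVE 5.x, vzdump's compression settings are often the biggest lever for backup time. A minimal sketch of /etc/vzdump.conf, assuming CPU headroom is available (values are illustrative):

      # /etc/vzdump.conf
      # Use lighter lzo compression instead of gzip
      compress: lzo
      # Alternatively, keep gzip output but compress on 4 threads with pigz:
      # compress: gzip
      # pigz: 4
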
  13. Benchmark: ZFS vs mdraid + ext4 + qcow2

    After fighting with ZFS memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID 1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
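
    When benchmarking a qcow2-on-ext4 stack, the image's allocation policy is worth controlling for, since metadata preallocation often narrows the gap to raw images. A minimal sketch (file name and size are placeholders):

      # Create a qcow2 image with preallocated metadata
      qemu-img create -f qcow2 -o preallocation=metadata /var/lib/vz/images/test.qcow2 32G
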
  14. PVE kernel limits outgoing connections

    All my servers running the PVE kernel show poor outgoing connection performance with ab -n 10000 -c 1000 example.url: 4.4.21-1-pve: 679.38 requests per second; 4.4.35-1-pve: 754.42; 4.13.13-5-pve: 692.04; 4.13.13-6-pve: ...
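
    If the bottleneck is connection setup rather than bandwidth, the usual suspects are the ephemeral port range and connection tracking rather than the kernel build itself. A minimal sketch of settings worth checking (values are illustrative, not from the thread):

      # Widen the ephemeral port range and allow reuse of TIME_WAIT sockets
      sysctl -w net.ipv4.ip_local_port_range="1024 65535"
      sysctl -w net.ipv4.tcp_tw_reuse=1
      # If the conntrack module is loaded, make sure its table isn't overflowing
      sysctl net.netfilter.nf_conntrack_max
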
  15. What is the best performing file system for Proxmox 5.2 on 4 x NVMe SSD drives?

    Good day all. Hardware specs: Dell PowerEdge R630, dual (2) Intel Xeon 8-core E5-2667 v3 CPUs, 3.2 GHz, 256 gigabytes of memory, 2 x Intel S3610 SSD for the Proxmox OS (RAID 1 over a PERC H330 SAS RAID controller), 4 x Intel P4510 series 1 terabyte U.2 format NVMe SSD (VM storage), front 4 bays configured...
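
    One layout frequently suggested for four NVMe drives under ZFS is striped mirrors (a RAID 10 equivalent), trading capacity for IOPS and resilience. A minimal sketch (pool name and device paths are placeholders; /dev/disk/by-id/ paths are safer in practice):

      # Two mirrored pairs striped together, 4K sectors
      zpool create -o ashift=12 nvmepool \
          mirror /dev/nvme0n1 /dev/nvme1n1 \
          mirror /dev/nvme2n1 /dev/nvme3n1
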
  16. Restoring VMs on another node

    Hello everyone, I have a question regarding the performance of VMs. Currently about six Linux KVMs and two FreeBSD KVMs are running on a node with two HDDs in a RAID array. Since we intend to acquire a node with better performance in the near future, among other things with SSDs in...
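
    Between standalone nodes, the usual path is a vzdump backup on the old node and a qmrestore on the new one. A minimal sketch, assuming VM ID 101 and a shared or copied dump directory (IDs, paths, and storage names are placeholders):

      # On the old node: dump the VM
      vzdump 101 --dumpdir /mnt/backup --compress lzo
      # On the new node: restore onto the faster storage
      qmrestore /mnt/backup/vzdump-qemu-101-*.vma.lzo 101 --storage local-zfs
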
  17. Ceph performance

    I need some input on tuning performance on a new cluster I have set up. The new cluster has 2 pools (one for HDDs and one for SSDs). For now it's only three nodes. I have separate networks: 1 x 1 Gb/s NIC for corosync, 2 x bonded 1 Gb/s NICs for Ceph, and 1 x 1 Gb/s NIC for the Proxmox bridged VMs...
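
    Before tuning, it helps to get a raw-cluster baseline that bypasses the VM layer entirely. A minimal sketch with rados bench, assuming a pool named ssd-pool (a placeholder):

      # 60-second write benchmark, keeping the objects for the read test
      rados bench -p ssd-pool 60 write --no-cleanup
      # Sequential read benchmark over the same objects, then clean up
      rados bench -p ssd-pool 60 seq
      rados -p ssd-pool cleanup
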
  18. Ceph low performance (especially 4k)

    Hello, we have a separate Ceph cluster and a separate Proxmox cluster (separate server nodes). I want to know if the performance we get is normal or not; my thought was that the performance could be way better with the hardware we are using. So is there any way we can improve with configuration changes...
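
    For the 4k case specifically, a client-side fio run against an RBD image isolates Ceph from the guest stack. A minimal sketch, assuming fio was built with RBD support, a pool named rbd, and a pre-created test image named bench-image (all placeholders):

      # 4k random writes straight to RBD via fio's rbd engine
      fio --name=4k-randwrite --ioengine=rbd --pool=rbd --rbdname=bench-image \
          --rw=randwrite --bs=4k --iodepth=32 --direct=1 --runtime=60 --time_based
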
  19. ZFS pools on NVMe

    Hello! I've read a lot of threads about people trying to get full speed from their NVMe drives. So I'm wondering: what does NVMe tuning look like on ZFS? I have several PVE appliances with NVMe mirrors, and now I suspect that they don't reach full speed.
  20. Non-Linux guest performance degradation after upgrade 4.4 -> 5.2

    Hi all, after upgrading from the discontinued PVE 4.4 to 5.2, I faced quite an unpleasant fact: all my OpenBSD guests have been slowed down dramatically... After some research I found that Spectre-related patches can affect performance (does that mean that the last pve-kernel for 4.4 didn't have...
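
    A quick way to see which mitigations the host kernel actually enables is to inspect sysfs; for a controlled test, mitigations can also be disabled at boot, at an obvious security cost. A minimal sketch (the boot parameters shown are standard kernel options, offered here only for measurement):

      # List active CPU vulnerability mitigations on the PVE host
      grep . /sys/devices/system/cpu/vulnerabilities/*
      # For a test boot only: append to the kernel command line, e.g. in /etc/default/grub
      #   pti=off spectre_v2=off
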
