performance

  1. Performance issue with Ceph under Proxmox 6

    Hi community, we have a server cluster consisting of 3 nodes, each with an EPYC 7402P 24-core CPU, 6 Intel enterprise SSDs (4620), and 256GB RAM. We also have a 10 Gbit/s NIC for Ceph. SSD performance alone is fine, jumbo frames are enabled, and iperf gives reasonable results in terms of...
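    A common way to localize this kind of problem is to benchmark each layer in isolation before blaming Ceph: raw device, network path, then Ceph itself. A sketch of the usual commands — the device path, peer address, and pool name below are placeholders for your own setup:

    ```shell
    # WARNING: writing to a raw device destroys its data - only run
    # the fio test against an unused disk or a scratch file.

    # Raw SSD: 4k random write, direct I/O, bypassing the page cache
    fio --name=ssdtest --filename=/dev/sdX --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --group_reporting

    # Network path between Ceph nodes (4 parallel streams)
    iperf3 -c 10.10.10.2 -P 4

    # Ceph itself: 60s write then random-read bench against a test pool
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 rand
    ```

    If each layer looks healthy on its own, the gap usually sits in replication and latency overhead rather than raw throughput.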
  2. openvswitch vs linux bridge performance

    Anybody here aware of performance comparison tests made lately between openvswitch and linux bridge?
  3. Does Proxmox need fast storage?

    Hi, I have a simple question which I would like to share because I'm interested in your point of view. On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA). My first choice could be to install Proxmox on the fast storage and use it also for storing virtual...
  4. [SOLVED] PVESTATD High CPU Usage During MDADM Sync

    Syncing a newly created mdadm RAID 1 (WD Red disks, 1.6T partition size, default sync speed limits, internal bitmap enabled) pushes the CPU load into the 2 to 2.5 range, and the machine gets sluggish (despite the Xeon E-2136, 6 cores, 12 threads, and 32GB RAM). Stopping pvestatd lowers the load to ~1. There is...
  5. Improve VM guest disk performance (Ceph, 10 GbE, QEMU, VirtIO)

    Hello @all, we are running a Proxmox cluster with five nodes. Three of them are used for Ceph, providing 2 pools, one with HDDs, the other with SSDs. The other two nodes are used for virtualization with QEMU. We have redundant 10 GbE storage networks and redundant 10 GbE Ceph networks...
  6. Any performance difference: Node Local storage vs Shared storage

    I am new here so please forgive any forum faux pas and let me know so I don't keep doing it :-) Also, I am originally from a Windows Hyper-V background so please feel free to correct terminology mistakes. I am setting Proxmox up on a single physical server with 3 raid arrays (it has a HW RAID...
  7. Performance issues with OpenMediaVault VM (ZFS/LUKS...)

    Hi, for a home server/NAS I'm using the latest versions of Proxmox (5.4) and OMV (4.1.22-1) on recent hardware (Core i3-8100, 16GB of RAM, installed on an SSD...). I have only one 8TB hard drive with no RAID configuration for my data storage. I use my previous server (Intel Atom J1900, 8GB of...
  8. Offline Migration extremely slow

    Hey, I noticed a huge issue. When I try to migrate a VM to a different node, I get extremely slow transfer rates. This is unexpected, since I use a dedicated gigabit network for migration (which is unused except for migrations). The unsecure flag is set as well. Have a look at this migration log...
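    Whether migration traffic actually flows over the dedicated NIC depends on the cluster-wide migration setting; on recent Proxmox VE versions this lives in datacenter.cfg. A minimal sketch, assuming the dedicated gigabit network is 192.168.100.0/24 (the subnet is a placeholder):

    ```shell
    # /etc/pve/datacenter.cfg - route migration traffic over the
    # dedicated network and skip SSH encryption for the data stream
    migration: insecure,network=192.168.100.0/24
    ```

    Without an explicit network, migration may silently use the default cluster interface even when a faster dedicated link exists.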
  9. HW Raid performance query: LSI 3008

    Hi, I wonder if anyone has experience and can comment. I've just spent some time reviewing a pair of Lenovo servers which have this HW RAID controller: 2 identical nodes in a small Proxmox cluster, running the latest Proxmox 5. There is no problem with the controller being recognized and...
  10. How to use Virtio-blk-data-plane in Proxmox instead Virtio-SCSI?

    Currently with VirtIO-SCSI (VirtIO-SCSI single with threads) the max IOPS is ~1.8k-2.3k, but virtio-blk-data-plane may reach over 100k IOPS: https://www.suse.com/media/white-paper/kvm_virtualized_io_performance.pdf Can I switch to virtio-blk-data-plane instead of VirtIO-SCSI in Proxmox?
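    In Proxmox VE, virtio-blk corresponds to the `virtio` disk bus, as opposed to the `scsi` bus used by VirtIO-SCSI. A sketch of switching a disk between the two with `qm` — the VMID (100), storage name (local-zfs), and disk size are placeholders:

    ```shell
    # Add a new 32G disk on the virtio-blk bus (virtioN)
    qm set 100 --virtio1 local-zfs:32

    # The VirtIO-SCSI equivalent: scsi bus with one controller and
    # a dedicated iothread per disk
    qm set 100 --scsihw virtio-scsi-single --scsi1 local-zfs:32,iothread=1
    ```

    Note that moving an existing disk between buses changes the device name inside the guest, so check fstab/boot configuration before switching.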
  11. CEPHFS rasize

    Hi, is there any way to change the read-ahead of CephFS? According to docs.ceph.com/docs/master/man/8/mount.ceph/ and lists.ceph.com/pipermail/ceph-users-ceph.com/2016-November/014553.html (could not place hyperlinks - new user), this should improve reading of single large files. Right now...
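    The kernel CephFS client exposes the client read-ahead window as the `rasize` mount option, in bytes. A sketch with a ~64 MiB window — the monitor address, credentials, and mount point are placeholders:

    ```shell
    # Mount CephFS with a 64 MiB client read-ahead window
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864
    ```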
  12. Base template on SSD drive - how good is the performance improvement?

    Actually the subject says it all... I have a server with ZFS where I have 2xSSDs in mirror where proxmox installation, L2ARC and ZIL reside. Then a bunch of 10k HDDs in a pool where the VMs are running. On a different forum thread I have read that a) using linked clones does not affect...
  13. [SOLVED] Low disk performance in windows vm on optane

    Hi, I've been doing some pre-production testing on my home server and ran into some kind of bottleneck with my storage performance, most notably on my Optane drive. When I install a Windows VM with the latest VirtIO drivers, the performance is kind of disappointing. I've tried switching over from...
  14. Performance Problem Proxmox 5.2 with RAIDZ-1 10TB

    Hello, I've been looking for the reason for the slowness of my Proxmox server but have not been able to detect the problem. I have an HP DL160 server with 2 Intel Xeon processors, 32GB of DDR4 RAM, and 4x 4TB hard drives in ZFS RAIDZ-1 (10TB storage in local-zfs). I have installed 3 VMs: 1 Ubuntu...
  15. Optimization for fast dump/backup

    Hi all, I've got a couple of standalone (no HA) Proxmox servers with SATA drives, and getting a backup is a real pain: it takes 50 minutes or so to produce a 60-70GB backup. I'm planning to migrate a few of these servers onto one new, much more powerful server with the following drives/setup: 4 x...
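    For reference, vzdump speed is usually tuned through its compression choice, scratch directory, and bandwidth limit. A hedged sketch — the paths, VMID, and storage name are illustrative, not from the thread:

    ```shell
    # /etc/vzdump.conf - cluster-wide defaults (values are examples)
    # tmpdir: /fast-ssd/vzdump-tmp    scratch space on fast storage
    # compress: lzo                   fast, light compression
    # bwlimit: 0                      no bandwidth cap (KB/s otherwise)

    # Or per invocation:
    vzdump 100 --compress lzo --storage backupstore
    ```

    On slow SATA source disks the read side often dominates, so heavier compression (gzip/zstd) mainly trades CPU for little wall-clock gain.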
  16. Benchmark: ZFS vs mdraid + ext4 + qcow2

    After fighting with ZFS memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID 1), ext4, and simple qcow2 images for the VMs, stored in the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
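    For anyone hitting the same ZFS memory hunger, the usual mitigation before abandoning ZFS is to cap the ARC via a module option. A sketch assuming an 8 GiB cap (the size is an example; pick one that leaves room for your VMs):

    ```shell
    # Limit the ZFS ARC to 8 GiB (value in bytes)
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u   # rebuild initramfs, then reboot to apply
    ```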
  17. PVE kernel limits outgoing connections

    All my servers with the PVE kernel show poor outgoing connection performance with ab -n 10000 -c 1000 example.url: 4.4.21-1-pve: 679.38 requests per second; 4.4.35-1-pve: 754.42; 4.13.13-5-pve: 692.04; 4.13.13-6-pve...
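    At -c 1000 the benchmarking client itself often hits file-descriptor and ephemeral-port limits before the kernel's network stack does, so it is worth ruling those out first. A sketch of the usual client-side knobs (values are examples, and the URL is the thread's placeholder):

    ```shell
    # Raise the per-process open-file limit for this shell
    ulimit -n 65535

    # Widen the ephemeral port range and allow reuse of TIME_WAIT
    # sockets for outgoing connections
    sysctl -w net.ipv4.ip_local_port_range="1024 65000"
    sysctl -w net.ipv4.tcp_tw_reuse=1

    ab -n 10000 -c 1000 http://example.url/
    ```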
  18. What is best performing file system for Proxmox 5.2 on 4 x NVMe SSD Drives?

    Good day all. Hardware specs: Dell PowerEdge R630, 2x Intel Xeon 8-core E5-2667 v3 CPUs at 3.2 GHz, 256GB of memory, 2x Intel S3610 SSDs for the Proxmox OS (RAID 1 on a PERC H330 SAS RAID controller), 4x Intel P4510 series 1TB U.2 NVMe SSDs (VM storage), front 4 bays configured...
  19. Restoring VMs on another node

    Hello everyone, I have a question regarding VM performance. Currently about six Linux KVMs and two FreeBSD KVMs are running on a node with two HDDs in a RAID array. Since we intend to get a higher-performance node in the near future, including SSDs in...
  20. Ceph performance

    I need some input on tuning performance on a new cluster I have set up. The new cluster has 2 pools (one for HDDs and one for SSDs). For now it's only three nodes. I have separate networks: 1x 1Gb/s NIC for corosync, 2x bonded 1Gb/s NICs for Ceph, and 1x 1Gb/s NIC for the Proxmox bridged VMs...
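    With Ceph on bonded 1Gb/s links, a common first step is separating client (public) traffic from replication (cluster) traffic so OSD replication does not compete with VM I/O. A minimal ceph.conf sketch — both subnets are placeholders:

    ```shell
    # /etc/ceph/ceph.conf
    [global]
    public_network  = 10.10.10.0/24
    cluster_network = 10.10.20.0/24
    ```

    On a three-node cluster with 1Gb/s links, replication bandwidth is usually the ceiling, so this split tends to matter more than most other tunables.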
