performance

  1. How to use Virtio-blk-data-plane in Proxmox instead of Virtio-SCSI?

    Currently with VirtIO-SCSI (VirtIO-SCSI Single with threads) the max IOPS is ~1.8k-2.3k, but Virtio-blk-data-plane may reach over 100k IOPS: https://www.suse.com/media/white-paper/kvm_virtualized_io_performance.pdf May I switch to Virtio-blk-data-plane instead of VirtIO-SCSI in Proxmox? (config sketch after the list)
  2. CephFS rasize

    Hi, is there any way to change the read-ahead of CephFS? According to docs.ceph.com/docs/master/man/8/mount.ceph/ and lists.ceph.com/pipermail/ceph-users-ceph.com/2016-November/014553.html (could not place hyperlink - new user), this should improve reading of single large files. Right now... (mount sketch after the list)
  3. Base template on SSD drive - how good is the performance improvement?

    Actually the subject says it all... I have a server with ZFS where I have 2x SSDs in a mirror where the Proxmox installation, L2ARC and ZIL reside, then a bunch of 10k HDDs in a pool where the VMs are running. On a different forum thread I have read that a) using linked clones does not affect...
  4. [SOLVED] Low disk performance in Windows VM on Optane

    Hi, I've been doing some pre-production testing on my home server and ran into some kind of bottleneck with my storage performance, most notably on my Optane drive. When I install a Windows VM with the latest VirtIO drivers the performance is kind of disappointing. I've tried switching over from...
  5. Performance problem: Proxmox 5.2 with RAIDZ-1 10TB

    Hello, I've been looking for the reason for the slowness of my Proxmox server but have not been able to detect the problem. I have an HP DL160 server with 2 Intel Xeon processors, 32GB DDR4 RAM and 4x 4TB hard drives in RAIDZ-1 ZFS (10TB storage in local-zfs). I have installed 3 VMs: 1 Ubuntu...
  6. Optimization for fast dump/backup

    Hi all, I've got a couple of Proxmox servers (single, no HA) with SATA drives and getting a backup is a real pain. It takes 50 minutes or so to get a 60-70GB backup. I'm planning to migrate a few of these servers into one new, much more powerful server with the following drives/setup: 4 x... (vzdump.conf sketch after the list)
  7. Benchmark: ZFS vs mdraid + ext4 + qcow2

    After fighting with ZFS memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (raid1), ext4, and simple qcow2 images for the VMs, stored in the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and... (setup sketch after the list)
  8. PVE kernel limits outgoing connections

    All my servers running the PVE kernel show poor outgoing connection performance with ab -n 10000 -c 1000 example.url: 4.4.21-1-pve: 679.38 requests per second; 4.4.35-1-pve: 754.42; 4.13.13-5-pve: 692.04; 4.13.13-6-pve: ...
  9. What is the best-performing file system for Proxmox 5.2 on 4x NVMe SSD drives?

    Good day all. Hardware specs: Dell PowerEdge R630, dual (2) Intel Xeon 8-core E5-2667 v3 CPUs, 3.2 GHz, 256 GB memory, 2x Intel S3610 SSD for the Proxmox OS (RAID 1 over a PERC H330 SAS RAID controller), 4x Intel P4510 series 1 Terabyte U.2 NVMe SSDs (VM storage), front 4 bays configured...
  10. Restoring VMs on another node

    Hello everyone, I have a question regarding VM performance. Currently about six Linux KVMs and two FreeBSD KVMs are running on one node with two HDDs in a RAID array. Since we intend to get a better-performing node in the near future, among other things with SSDs in...
  11. Ceph performance

    I need some input on tuning performance on a new cluster I have set up. The new cluster has 2 pools (one for HDDs and one for SSDs). For now it's only three nodes. I have separate networks: 1x 1Gb/s NIC for corosync, 2x bonded 1Gb/s NICs for Ceph and 1x 1Gb/s NIC for the Proxmox bridged VMs... (ceph.conf sketch after the list)
  12. Ceph low performance (especially 4k)

    Hello, we have a separate Ceph cluster and Proxmox cluster (separate server nodes). I want to know if the performance we get is normal or not; my thought was that the performance could be way better with the hardware we are using. So is there any way we can improve with configuration changes... (benchmark sketch after the list)
  13. ZFS pools on NVMe

    Hello! I've read a lot of threads about people trying to get full speed from their NVMe drives, so I'm wondering what NVMe tuning looks like on ZFS. I have several PVE appliances with NVMe mirrors and now I suspect that they don't reach full speed. (tuning sketch after the list)
  14. Non-Linux guest performance degradation after upgrade 4.4 -> 5.2

    Hi all, after upgrading from the discontinued PVE 4.4 to 5.2 I faced quite an unpleasant fact: all my OpenBSD guests have slowed down dramatically... After some research I found that Spectre-related patches can affect performance (does that mean that the last pve-kernel for 4.4 didn't have... (mitigation-check sketch after the list)
  15. Ceph performance is really poor

    I have a cluster of 6 nodes, each containing 8x Intel SSDSC2BB016T7R for a total of 48 OSDs. Each node has 384GB RAM and 40 logical CPUs. For some reason, this cluster's performance is really low in comparison to other deployments; deploying the GitLab template took well over 5 minutes...
  16. Serious performance and stability problems with Dell Equallogic storage

    Hello, some time ago we switched from VMware to Proxmox. Everything was smooth and fine at the beginning and we were very happy. But these days we have about 120 VMs connected via iSCSI to the network storage, which consists of two Dell PSM4110XS. For partitioning we use shared LVM. This type of...
  17. LSI 2208 and TRIM - live without?

    Here is the setup I am trying to do: I have an LSI 2208 RAID controller with cache and BBU, and 4 SSDs connected to it. I cannot change the firmware, so HBA mode is not an option, and I also don't want to lose the cache and BBU. I would like to use a RAID10 setup out of these 4 SSDs. The problem is that the 2208 does not... (discard-check sketch after the list)
  18. Increase performance with sched_autogroup_enabled=0

    Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts. It also helps to modify sched_migration_cost_ns. I've tested this on Proxmox 4.x and 5.x: echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns echo 0 >... (persistent sysctl sketch after the list)
  19. [SOLVED] Extremely poor I/O performance after upgrade

    Hi guys, I have been struggling since the 4.4-12 to 5.1-36 upgrade (to be fair, it's a new deployment) due to terrible I/O performance via iSCSI (but after some testing NFS also seems affected). The problem doesn't always show up, but I have been able to reproduce it in this manner: VM just booted up with...
  20. Extremely poor write performance

    I am using the latest Proxmox 4.4-13, a 3-node cluster setup with a FreeNAS box providing NFS and SMB shares; the NFS shares are for VM disks (mounted on Proxmox). On 2 different servers, running 2 different OSes (Win 7 and Win 10) with different file formats (RAW vs qcow2) and even different controllers (SATA vs...
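
Regarding thread 1: in current QEMU the virtio-blk data plane is enabled by giving the disk a dedicated IOThread, which Proxmox exposes as the iothread disk flag. A minimal sketch, assuming a hypothetical VM 100 with a volume local-zfs:vm-100-disk-0 (adjust both):

    # Attach the disk as virtio-blk with its own IOThread (QEMU's data plane)
    qm set 100 --virtio0 local-zfs:vm-100-disk-0,iothread=1

    # Alternative: stay on VirtIO-SCSI but use the single-controller variant,
    # which also allows one IOThread per disk
    qm set 100 --scsihw virtio-scsi-single --scsi0 local-zfs:vm-100-disk-0,iothread=1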
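
Regarding thread 2: the kernel CephFS client takes its read-ahead from the rasize mount option (in bytes). A minimal sketch, assuming a hypothetical monitor at 10.0.0.1 and the admin keyring:

    # Mount CephFS with a 64 MiB read-ahead window (default is 8 MiB)
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864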
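
Regarding thread 6: on SATA-backed hosts the single-threaded gzip step is often what makes vzdump slow, and /etc/vzdump.conf has knobs for that. A sketch of the relevant settings; the values are assumptions to tune, and the pigz package must be installed for the pigz option to take effect:

    # /etc/vzdump.conf -- defaults applied to all backup jobs
    compress: gzip     # or lzo: faster, lighter compression
    pigz: 4            # >1 switches gzip compression to pigz with that many threads
    ionice: 7          # lowest I/O priority so backups don't starve running VMs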
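
Regarding thread 7: for readers who want to reproduce the compared layout, a minimal sketch of the mdraid + ext4 + qcow2 stack described there; device names, paths and the image size are assumptions:

    # RAID1 array, ext4 on top, and a qcow2 image used as a VM disk
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mount /dev/md0 /var/lib/vz/images
    mkdir -p /var/lib/vz/images/100
    qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-0.qcow2 64G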
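
Regarding thread 11: the split between the bonded Ceph NICs and the rest is expressed in ceph.conf via the public and cluster networks. A sketch with assumed subnets:

    # /etc/pve/ceph.conf (or /etc/ceph/ceph.conf on a non-PVE-managed cluster)
    [global]
        public network  = 10.10.10.0/24    # client-facing Ceph traffic
        cluster network = 10.10.20.0/24    # OSD replication/backfill traffic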
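
Regarding thread 12: before tuning, it helps to benchmark the pool directly so cluster-side limits can be separated from VM-side ones. A sketch assuming a pool named rbd:

    # 60-second 4 KiB write test with 16 concurrent ops, keeping the objects
    rados bench -p rbd 60 write -b 4096 -t 16 --no-cleanup
    # Random-read pass over those objects, then remove them
    rados bench -p rbd 60 rand -t 16
    rados -p rbd cleanup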
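
Regarding thread 13: the usual starting points on NVMe mirrors are the pool's ashift, a few dataset properties, and the ARC size. A sketch with hypothetical device, pool and size values:

    # Create the mirror with 4 KiB sectors (check the drives' reported sector size first)
    zpool create -o ashift=12 nvmepool mirror /dev/nvme0n1 /dev/nvme1n1
    zfs set atime=off compression=lz4 nvmepool
    # Optionally cap the ARC at 8 GiB on memory-constrained hosts
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u    # so the limit also applies at boot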
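
Regarding thread 14: the mitigations applied by the running kernel can be read from sysfs, and for a controlled comparison they can be switched off via kernel parameters, at an obvious security cost. A sketch; the exact parameter set depends on the kernel version:

    # Show which Spectre/Meltdown mitigations the running pve-kernel applies
    grep . /sys/devices/system/cpu/vulnerabilities/*
    # To benchmark without them, add e.g. "pti=off spectre_v2=off" to
    # GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub && reboot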
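
Regarding thread 17: whether discard requests make it through the RAID layer can be checked from the host before deciding if TRIM can be lived without. A sketch, with /var/lib/vz standing in for whatever is mounted on the 2208 volume:

    # Non-zero DISC-GRAN/DISC-MAX columns mean the block device advertises discard
    lsblk --discard
    # fstrim reports how much it trimmed, or fails if the controller hides discard
    fstrim -v /var/lib/vz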
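
Regarding thread 18: to make the two settings from that post survive a reboot instead of re-running the echoes, they can go into a sysctl drop-in. A sketch using the values given in the post:

    # /etc/sysctl.d/90-sched-tuning.conf
    kernel.sched_migration_cost_ns = 5000000
    kernel.sched_autogroup_enabled = 0

    # then apply without rebooting:
    sysctl --system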
