performance

  1. K

    Benchmark: ZFS vs mdraid + ext4 + qcow2

    After fighting with ZFS's memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID 1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
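
    A minimal fio run along these lines is one way to compare the two setups on the same host; the test file path, size, and mixed 4k random workload below are assumptions for illustration, not the benchmark actually used in the thread.

      # hypothetical 4k random read/write test against the storage path holding the qcow2 images
      fio --name=randrw --filename=/var/lib/vz/images/fio-test.bin --size=4G \
          --rw=randrw --rwmixread=75 --bs=4k --direct=1 --ioengine=libaio \
          --iodepth=32 --runtime=60 --time_based --group_reporting
      rm /var/lib/vz/images/fio-test.bin   # remove the test file afterwards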
  2. H

    PVE kernel limits outgoing connections

    All my servers running the PVE kernel show poor outgoing connection performance with ab -n 10000 -c 1000 example.url:
      4.4.21-1-pve    Requests per second: 679.38
      4.4.35-1-pve    Requests per second: 754.42
      4.13.13-5-pve   Requests per second: 692.04
      4.13.13-6-pve   Requests per second...
  3. D

    What is best performing file system for Proxmox 5.2 on 4 x NVMe SSD Drives?

    Good day all, hardware specs: Dell PowerEdge R630, dual (2) Intel Xeon 8-core E5-2667 v3 CPUs, 3.2 GHz, 256 GB memory, 2x Intel S3610 SSD for the Proxmox OS (RAID 1 over a PERC H330 SAS RAID controller), 4x Intel P4510 series 1 TB U.2 NVMe SSD (VM storage), front 4 bays configured...
  4. P

    Restoring VMs onto another node

    Hello everyone, I have a question regarding VM performance. Currently about six Linux KVMs and two FreeBSD KVMs are running on one node with two HDDs in a RAID array. Since the plan is to move to a better-performing node in the near future, among other things with SSDs in...
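
    One common way to move KVM guests to a new node is a backup/restore cycle with vzdump and qmrestore; the VMID, storage names, and archive filename below are placeholders, not details from the thread.

      # on the old node: back up the stopped guest as a compressed archive
      vzdump 101 --mode stop --compress lzo --storage local
      # copy the archive to the new node, then restore it there onto the target storage
      qmrestore /var/lib/vz/dump/vzdump-qemu-101-2018_01_01-00_00_00.vma.lzo 101 --storage local-lvm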
  5. L

    Ceph performance

    I need some input on tuning performance on a new cluster I have set up. The new cluster has 2 pools (one for HDDs and one for SSDs). For now it's only three nodes. I have separate networks: 1 x 1 Gb/s NIC for corosync, 2 x bonded 1 Gb/s NICs for Ceph, and 1 x 1 Gb/s NIC for the Proxmox bridged VMs...
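
    For reference, Ceph separates client and replication traffic via public_network and cluster_network in ceph.conf; the sketch below uses placeholder subnets and is only an illustration of that split, not the thread's configuration.

      # /etc/pve/ceph.conf - example subnets only
      [global]
          public_network  = 10.10.10.0/24   # client/monitor traffic
          cluster_network = 10.10.20.0/24   # OSD replication traffic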
  6. S

    Ceph low performance (especially 4k)

    Hello, we have a separate Ceph cluster and Proxmox cluster (separate server nodes). I want to know whether the performance we get is normal or not; my thought was that the performance could be way better with the hardware we are using. So is there any way we can improve it with configuration changes...
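
    For a quick 4k baseline measured on the Ceph side (outside any VM), rados bench can be pointed at a test pool; the pool name and thread count below are assumptions.

      # 60 s of 4k writes with 16 concurrent ops, keeping the objects for a read pass
      rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
      rados bench -p testpool 60 rand -t 16
      rados -p testpool cleanup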
  7. V

    ZFS pools on NVME

    Hello! I've read a lot of threads about people trying to get full speed from their NVMe drives, so I'm wondering what NVMe tuning looks like on ZFS. I have several PVE appliances with NVMe mirrors and now I suspect that they don't reach full speed.
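
    As a starting point, a mirrored NVMe pool is often created with an explicit ashift and a few basic properties; the pool and device names here are placeholders and the settings are only a sketch, not a guarantee of full NVMe speed.

      zpool create -o ashift=12 nvmetank mirror /dev/nvme0n1 /dev/nvme1n1
      zfs set atime=off nvmetank        # avoid metadata writes on every read
      zfs set compression=lz4 nvmetank  # cheap compression, often a net win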
  8. K

    Non-Linux guest performance degradation after upgrade 4.4 -> 5.2

    Hi all, after upgrading from the discontinued PVE 4.4 to 5.2 I ran into quite an unpleasant fact: all my OpenBSD guests have slowed down dramatically... After some research I found that the Spectre-related patches can affect performance (does that mean that the last pve-kernel for 4.4 didn't have...
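
    On the host, the active mitigations can be inspected via sysfs; the boot parameters mentioned in the comment are only an example of how they can be relaxed, trade security for speed, and are not a recommendation from the thread.

      # show which Meltdown/Spectre mitigations the running PVE kernel applies
      grep . /sys/devices/system/cpu/vulnerabilities/*
      # (mitigations can be relaxed with kernel boot parameters such as
      #  "pti=off spectre_v2=off" in GRUB_CMDLINE_LINUX_DEFAULT - a security trade-off)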
  9. A

    ceph performance is really poor

    I have a cluster of 6 nodes, each containing 8x Intel SSDSC2BB016T7R for a total of 48 OSDs. Each node has 384 GB RAM and 40 logical CPUs. For some reason, this cluster's performance is really low in comparison to other deployments; deploying the GitLab template took well over 5 minutes...
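
    A first pass at narrowing this down is usually to look at per-OSD latency and utilisation from the Ceph CLI; a small diagnostic sketch:

      ceph -s            # overall health, recovery/backfill activity
      ceph osd perf      # per-OSD commit/apply latency - outliers stand out here
      ceph osd df tree   # utilisation and PG spread per OSD and per host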
  10. J

    Serious performance and stability problems with Dell Equallogic storage

    Hello, some time ago we switched from VMware to Proxmox. Everything was smooth and fine at the beginning and we were very happy. But these days we have about 120 VMs connected via iSCSI to the network storage, which consists of two Dell PSM4110XS. For partitioning we use shared LVM. This type of...
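
    When shared LVM over iSCSI misbehaves, checking the session and path state on each node is a reasonable first step; a sketch, assuming open-iscsi and multipath-tools are in use:

      iscsiadm -m session -P 3   # session state, negotiated parameters, attached disks
      multipath -ll              # path health for the Equallogic LUNs
      dmesg | grep -i iscsi      # recent transport or connection errors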
  11. A

    LSI 2208 and TRIM - live without?

    Here is the setup I am trying to do: I have an LSI 2208 RAID controller with cache and BBU, and 4 SSD disks connected to it. I cannot change the firmware, so HBA mode is not an option, and I also don't want to lose the cache and BBU. I would like to use a RAID10 setup out of these 4 SSDs. The problem is that the 2208 does not...
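
    Whether TRIM survives the RAID layer can be checked from the OS before deciding to live without it; a small sketch (the mount point is a placeholder):

      lsblk --discard       # non-zero DISC-GRAN/DISC-MAX means discard reaches the device
      fstrim -v /var/lib/vz # fails with "not supported" if the controller hides TRIM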
  12. E

    Increase performance with sched_autogroup_enabled=0

    Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts. It also helps to modify sched_migration_cost_ns. I've tested this on Proxmox 4.x and 5.x:
      echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns
      echo 0 >...
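
    To keep the two settings across reboots, they can go into a sysctl drop-in instead of raw echo commands; a sketch, assuming the values from the post (the drop-in filename is a placeholder):

      # persist the scheduler tunables via sysctl
      printf 'kernel.sched_autogroup_enabled = 0\nkernel.sched_migration_cost_ns = 5000000\n' \
          > /etc/sysctl.d/90-sched-tuning.conf
      sysctl --system   # apply now and on every boot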
  13. K

    [SOLVED] Extremely poor I/O performance after upgrade

    Hi guys, I have been struggling since the 4.4-12 to 5.1-36 upgrade (to be fair, it's a new deployment) due to terrible I/O performance via iSCSI (but after some testing NFS also seems affected). The problem doesn't always show up, but I have been able to reproduce it in this manner: a VM just booted up with...
  14. G

    Extremely poor write performance

    I am using the latest Proxmox 4.4-13 in a 3-node cluster setup with a FreeNAS box providing NFS and SMB shares; the NFS shares hold the VM disks (mounted on Proxmox). On 2 different servers, running 2 different OSes (Win 7 and Win 10), with different file formats (raw vs qcow2) and even different controllers (SATA vs...
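
    To separate guest-side issues from the NFS storage itself, a direct write test on the Proxmox-mounted share is often useful; the mount path below assumes the default /mnt/pve/<storage> location and is a placeholder.

      # sequential write straight to the NFS mount, bypassing the page cache
      dd if=/dev/zero of=/mnt/pve/freenas-nfs/ddtest.bin bs=1M count=2048 oflag=direct
      rm /mnt/pve/freenas-nfs/ddtest.bin
      nfsstat -m   # shows the mount options actually negotiated (rsize/wsize, proto, vers)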
  15. C

    When we start a copy or clone, the UI becomes unresponsive

    Hi, we have installed Proxmox VE 4.4 (RAID 1, 10 TB, cache enabled). We need to move KVM-based raw VM files from another server to Proxmox, so we copied those VMs to an external HDD, connected it through USB 3.0, and mounted it on the Proxmox server. We create the VMs through the GUI and I start replacing...
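
    One way to keep the host responsive while pulling images off the USB disk is to throttle the copy; the paths and the 50 MB/s cap below are placeholders, and this is only a sketch, not the procedure from the thread.

      # low I/O priority plus a bandwidth cap (KB/s) for the raw image copy
      ionice -c3 rsync -av --bwlimit=51200 /mnt/usb-disk/vm-100.raw /var/lib/vz/images/100/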
  16. L

    latencytop won't start

    Hello, I would like to run latencytop but something is missing. I'm running Proxmox 4.4 with the latest updates applied:
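
    latencytop needs both kernel support and the corresponding sysctl switched on, so checking those is a reasonable first step; a sketch:

      grep CONFIG_LATENCYTOP /boot/config-$(uname -r)   # must be =y for latencytop to work
      sysctl kernel.latencytop=1                        # enable collection if the option exists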
  17. C

    Hardware/Concept for Ceph Cluster

    Hello, we are using Proxmox with local storage on a small 3-node cluster. Now we are planning to set up a 4-node cluster with network storage (Ceph), live migration, HA, and snapshot functionality. We already have some hardware lying around from dev projects. Now I would like to get some ideas...
  18. F

    Is Ceph too slow and how to optimize it?

    The setup is 3 clustered Proxmox nodes for computation and 3 clustered Ceph storage nodes:
      ceph01: 8 x 150 GB SSDs (1 used for OS, 7 for storage)
      ceph02: 8 x 150 GB SSDs (1 used for OS, 7 for storage)
      ceph03: 8 x 250 GB SSDs (1 used for OS, 7 for storage)
    When I create a VM on a Proxmox node using Ceph storage, I...
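
    Before tuning, a baseline measured on the Ceph cluster itself helps decide whether the bottleneck is Ceph or the VM layer; rados bench against a scratch pool is one way to get it (the pool name is a placeholder):

      rados bench -p scratch 60 write --no-cleanup   # default 4 MB object writes
      rados bench -p scratch 60 seq                  # read back the same objects
      rados -p scratch cleanup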
  19. C

    Add Hard Disk "Storage Selector" times out

    I am having difficulty when adding a new disk to a VM: the storage selector dropdown is timing out. I assume this is from I/O degradation, so the dropdown can't get the list of available NFS or iSCSI mounts, but I can't ascertain the cause. The NAS is connected via a quad gigabit bond and the...
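
    Since the dropdown is filled from the storage layer, running the same query from the CLI usually shows which storage is hanging; a small sketch (the NAS address is a placeholder):

      pvesm status                 # hangs or times out on the storage that is unreachable/slow
      showmount -e 192.168.1.50    # confirm the NFS exports answer promptly from this node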
  20. mir

    pve 4.2 and iothread

    Hi all, I have done some IO tests on Proxmox 4.2 with iothread. Results below:
      fio --description="Emulation of Intel IOmeter File Server Access Pattern" --name=iometer --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 --rw=randrw --rwmixread=80 --direct=1 --size=4g --ioengine=libaio...
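
    For reference, iothread is enabled per disk and is typically paired with the virtio-scsi-single controller; the VMID, storage, and disk name below are placeholders, a sketch of the toggle being benchmarked rather than the exact test configuration.

      qm set 101 --scsihw virtio-scsi-single
      qm set 101 --scsi0 local-lvm:vm-101-disk-1,iothread=1
      # repeat the fio run with iothread=0 vs iothread=1 to compare results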