performance

  1. A

    Ceph performance is really poor

    I have a cluster of 6 nodes, each containing 8x Intel SSDSC2BB016T7R, for a total of 48 OSDs. Each node has 384GB of RAM and 40 logical CPUs. For some reason, this cluster's performance is really low in comparison to other deployments. Deploying the GitLab template took well over 5 minutes...
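
    A common first diagnostic (not from the post) is to benchmark a single OSD SSD with synchronous 4k writes, since journal/commit latency dominates small-I/O performance in Ceph; the device name and runtime below are placeholders:

      # WARNING: writes to the raw device; only run on an empty/spare SSD
      fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
          --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based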
  2. J

    Serious performance and stability problems with Dell Equallogic storage

    Hello, some time ago we switched from VMware to Proxmox. Everything was smooth and fine at the beginning and we were very happy. But these days we have about 120 VMs connected via iSCSI to network storage consisting of two Dell PSM4110XS. For partitioning we use shared LVM. This type of...
  3. A

    LSI 2208 and TRIM - live without?

    Here is the setup I am trying to build: I have an LSI 2208 RAID controller with cache and BBU, and 4 SSD disks connected to it. I cannot change the firmware, so HBA mode is not an option, and I also don't want to lose the cache and BBU. I would like to build a RAID10 setup out of these 4 SSDs. The problem is that the 2208 does not...
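
    As a quick check of whether discard/TRIM actually reaches the SSDs through the controller, lsblk can report the discard parameters of the exported volume (a suggestion, not from the post; the device name is a placeholder):

      # DISC-GRAN/DISC-MAX of 0 means discards are not passed through
      lsblk --discard /dev/sdX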
  4. E

    Increase performance with sched_autogroup_enabled=0

    Changing sched_autogroup_enabled from 1 to 0 makes a HUGE difference in performance on busy Proxmox hosts. It also helps to modify sched_migration_cost_ns. I've tested this on Proxmox 4.x and 5.x: echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns echo 0 >...
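
    A minimal sketch of applying the two settings from the post and persisting them across reboots (the sysctl.d file name is an assumption; the values are the ones given above):

      # apply immediately
      echo 0 > /proc/sys/kernel/sched_autogroup_enabled
      echo 5000000 > /proc/sys/kernel/sched_migration_cost_ns
      # persist across reboots (assumed file name)
      printf '%s\n' 'kernel.sched_autogroup_enabled = 0' \
          'kernel.sched_migration_cost_ns = 5000000' \
          > /etc/sysctl.d/99-sched-tuning.conf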
  5. K

    [SOLVED] Extremely poor I/O performance after upgrade

    Hi guys, I have been struggling since the 4.4-12 to 5.1-36 upgrade (to be fair, it's a new deploy) with terrible I/O performance via iSCSI (though after some testing, NFS also seems affected). The problem doesn't always show up, but I have been able to reproduce it in this manner: VM just booted up with...
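
    While reproducing the slowdown, watching per-device latency and utilization on the host can show whether the stall is on the iSCSI device itself (a generic diagnostic, not from the post; iostat is in the sysstat package):

      # -x: extended stats (await, %util), -m: MB/s, refresh every 2 seconds
      iostat -xm 2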
  6. G

    Extremely poor write performance

    I am using the latest Proxmox 4.4-13 in a 3-node cluster setup with a FreeNAS box providing NFS and SMB shares, the NFS shares being used for VM disks (mounted on Proxmox). On 2 different servers, running 2 different OSes (Win 7 and Win 10), with different file formats (raw vs. qcow2) and even different controllers (SATA vs...
  7. C

    When we start a copy or clone, the UI becomes unresponsive

    Hi, we have installed Proxmox VE 4.4 with 10TB RAID 1 and cache enabled. We need to move KVM-based raw VM files from another server to Proxmox, so we copied those VMs to an external HDD, connected it through USB 3.0, and mounted it on the Proxmox server. We created the VMs through the GUI, and I started replacing...
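
    A common workaround (an assumption, not from the thread) is to throttle the copy so it cannot saturate the array and starve the GUI's I/O; the bandwidth cap and file/target paths are placeholders:

      # idle I/O priority plus a ~50 MB/s cap (--bwlimit is in KB/s)
      ionice -c3 rsync --progress --bwlimit=50000 \
          /mnt/usb/vm-100-disk-1.raw /var/lib/vz/images/100/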
  8. L

    latencytop won't start

    Hello, I would like to run latencytop, but something is missing. I'm running Proxmox 4.4 with the latest updates applied:
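
    latencytop needs both kernel support and a runtime switch; a quick check using standard interfaces (not taken from the post):

      # the kernel must be built with CONFIG_LATENCYTOP=y
      grep CONFIG_LATENCYTOP /boot/config-$(uname -r)
      # and collection must be enabled at runtime
      echo 1 > /proc/sys/kernel/latencytop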
  9. C

    Hardware/Concept for Ceph Cluster

    Hello, we are using Proxmox with local storage on a small 3-node cluster. Now we are planning to set up a 4-node cluster with network storage (Ceph), live migration, HA, and snapshot functionality. We already have some hardware lying around from dev projects. Now I would like to get some ideas...
  10. F

    Is Ceph too slow and how to optimize it?

    The setup is 3 clustered Proxmox nodes for computation and 3 clustered Ceph storage nodes: ceph01 with 8x 150GB SSDs (1 used for OS, 7 for storage), ceph02 with 8x 150GB SSDs (1 for OS, 7 for storage), and ceph03 with 8x 250GB SSDs (1 for OS, 7 for storage). When I create a VM on a Proxmox node using Ceph storage, I...
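
    To separate Ceph performance from VM/virtio overhead, a baseline benchmark can be run directly against the pool from one of the nodes (pool name and durations below are assumptions):

      # 60-second write benchmark; keep the objects for the read test
      rados bench -p rbd 60 write --no-cleanup
      # sequential read of the objects written above
      rados bench -p rbd 60 seq
      # remove the benchmark objects afterwards
      rados -p rbd cleanup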
  11. C

    Add Hard Disk "Storage Selector" times out

    I am having difficulty when adding a new disk to a VM: the storage selector dropdown times out. I assume this is due to I/O degradation, so the dropdown can't get the list of available NFS or iSCSI mounts, but I can't ascertain the cause. The NAS is connected via a quad gigabit bond and the...
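
    To see which storage is hanging, the same enumeration the GUI performs can be run from the CLI (standard Proxmox tooling, not mentioned in the post):

      # lists all configured storages with their status; a hung NFS/iSCSI
      # mount will stall this command the same way it stalls the dropdown
      pvesm status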
  12. mir

    pve 4.2 and iothread

    Hi all, I have done some I/O tests on Proxmox 4.2 with iothread. Results below: fio --description="Emulation of Intel IOmeter File Server Access Pattern" --name=iometer --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 --rw=randrw --rwmixread=80 --direct=1 --size=4g --ioengine=libaio...
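
    For reference, iothread is enabled per disk; a sketch of the corresponding configuration (the VMID and storage/volume names are placeholders, and iothread requires the virtio-scsi-single controller or virtio-blk):

      # give the disk its own I/O thread
      qm set 100 --scsihw virtio-scsi-single
      qm set 100 --scsi0 local-lvm:vm-100-disk-1,iothread=1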
  13. stefws

    4.2 perf is lower than 4.1

    Upgraded 4 of 7 nodes today, only to discover that two VMs in particular (Palo Alto - VM200 FWs) use much more CPU than when on PVE 4.1 :( Pic 1 here shows VM usage over the last 24 hours and the jump when migrated onto 4.2.22 around 17:00; the last high jump is me introducing more load on the FW...
  14. L

    Slower Snapshot-Mode Backups after moving from 3.0 to 4.1

    In Proxmox 3.0, my daily snapshot-mode backup took 37 minutes to create a 78GB backup file. After installing Proxmox 4.1, the same virtual machine now takes an hour and 40 minutes (1:40:00) to create the same 78GB backup file. When I back up the same virtual machine using a stop-mode backup, it...
  15. E

    CEPH read performance

    7 mechanical disks in each node using XFS; 3 nodes, so 21 OSDs total. I've started moving journals to SSD, which is only helping write performance. The Ceph nodes are still running Proxmox 3.x; I have client nodes running 4.x and 3.x, and both have the same issue. Using 10G IPoIB with separate public/private...
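
    Since journals only help writes, one commonly suggested knob for sequential read throughput (an assumption, not from the post) is a larger read-ahead inside the client VM; the device name is a placeholder:

      # increase read-ahead on the RBD-backed block device (value in KB)
      echo 4096 > /sys/block/vda/queue/read_ahead_kb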
  16. S

    Poor performance on NFS storage

    I'm running 3 Proxmox 3.4 nodes using NFS shared storage with a dedicated 1Gb network switch.

      root@lnxvt10:~# pveversion
      pve-manager/3.4-11/6502936f (running kernel: 2.6.32-43-pve)
      root@lnxvt10:~# mount | grep 192.168.100.200
      192.168.100.200:/mnt/volume0-zr2/proxmox1/ on...
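
    Checking the negotiated NFS mount options (rsize/wsize, protocol version) is usually the first diagnostic; nfsstat comes from the nfs-common package, and the storage.cfg stanza below is an assumed example (the server and export are the ones shown above):

      # show the effective mount options for each NFS mount
      nfsstat -m
      # options can be pinned in /etc/pve/storage.cfg, e.g.:
      #   nfs: nas1
      #       server 192.168.100.200
      #       export /mnt/volume0-zr2/proxmox1
      #       path /mnt/pve/nas1
      #       options vers=3,rsize=131072,wsize=131072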
