performance

  1. Slow VM on external CEPH Cluster

    Hello all, I have just set up an external Ceph cluster together with an external specialist. It is configured as follows: 16 OSDs, 1 pool, 32 PGs, 7.1 TiB free storage on 4 nodes, each with 64 GB RAM, 12-core processors and only NVMe SSDs & regular SSDs, connected in the cluster net with 10G, connected to...
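
    A useful first step with a setup like this is to benchmark the pool directly with rados bench, which takes the VM and virtio layers out of the picture. A minimal sketch, assuming a pool named vm-pool (substitute your own pool name):

      # 60-second 4M write benchmark straight against the pool
      rados bench -p vm-pool 60 write -b 4M -t 16 --no-cleanup
      # sequential and random reads of the objects just written
      rados bench -p vm-pool 60 seq -t 16
      rados bench -p vm-pool 60 rand -t 16
      # remove the benchmark objects afterwards
      rados -p vm-pool cleanup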
  2. zfs read performance bottleneck?

    im trying to find out why zfs is pretty slow when it comes to read performance, i have been testing with different systems, disks and seetings testing directly on the disk im able to achieve some reasonable numbers not far away from specsheet => 400-650k IOPS (p4510 and some samsung based HPE)...
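
    For comparisons like this it helps to run the same fio job once against the raw device and once against a file on the ZFS dataset. A minimal sketch, assuming /dev/nvme0n1 is a scratch disk and /tank is the ZFS mountpoint (both names are hypothetical):

      # random 4k reads straight from the NVMe device (read-only)
      fio --name=raw-randread --filename=/dev/nvme0n1 --ioengine=libaio \
          --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
          --runtime=60 --time_based --group_reporting
      # the same workload through ZFS; size the file well above RAM,
      # since the ARC will otherwise serve the reads from memory
      fio --name=zfs-randread --filename=/tank/fio.test --size=100G \
          --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
          --runtime=60 --time_based --group_reporting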
  3. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe pool and an SSD pool through crush rules. The public_network uses a dedicated 10 GBit network while the cluster_network uses a dedicated 40...
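
    Splitting pools by device class is usually done with replicated crush rules keyed on the class. A minimal sketch, assuming the default crush root and host failure domain (rule and pool names are hypothetical):

      # one rule per device class, then one pool per rule
      ceph osd crush rule create-replicated nvme-rule default host nvme
      ceph osd crush rule create-replicated ssd-rule  default host ssd
      ceph osd pool create nvme-pool 128 128 replicated nvme-rule
      ceph osd pool create ssd-pool  128 128 replicated ssd-rule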
  4. error occured during live-restore: MAX 8 vcpus allowed per VM on this node

    Thanks for the PVE 6.4 release! The Live-Restore feature is especially interesting to me, because I've always looked for ways to make restores faster in order to keep disaster recovery times to a minimum. Situation: the main node has 16 cores / 32 threads; VM 101 has 32 vCPUs, because the database...
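
    If the vCPU count is what triggers the limit, one hedged workaround is to lower it before the live-restore and raise it again afterwards. A sketch with qm (VM ID 101 from the post; the core counts are illustrative):

      # cap the VM at 8 cores so it fits the node's live-restore limit
      qm set 101 --sockets 1 --cores 8
      # ... perform the live-restore ...
      # restore the original topology afterwards
      qm set 101 --sockets 1 --cores 32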
  5. ZFS Performance Questions on HDDs

    Hello, I'm running a server with 2 x 8 TB HDDs and 1 x 240 GB SSD with the following config.

      # zpool status
        pool: rpool
       state: ONLINE
        scan: scrub repaired 0B in 0 days 22:10:56 with 0 errors on Sun Apr 11 22:34:58 2021
      config:
              NAME        STATE...
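
    On an HDD pool, a spare SSD is commonly split into a small SLOG partition and a larger L2ARC partition. A hedged sketch, assuming the SSD has been partitioned as /dev/sdc1 and /dev/sdc2 (device names are hypothetical; prefer /dev/disk/by-id paths in practice):

      # small partition (~16G) as a separate intent log for sync writes
      zpool add rpool log /dev/sdc1
      # remainder as a read cache
      zpool add rpool cache /dev/sdc2
      # verify placement
      zpool status rpool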
  6. ZFS: High CPU load

    Hi, I still have an old board with an Intel J4005 lying around and put Proxmox on it as a test system. Proxmox itself runs on an NVMe SSD; in addition, I added another SATA SSD and an HDD, each as a ZFS single disk. During a test via Samba I noticed that...
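
    On a low-power CPU like the J4005, compression and checksumming are the usual suspects for ZFS CPU load. A hedged first check (the pool name tank is hypothetical):

      # see which CPU-intensive features are active on the pool
      zfs get compression,checksum,dedup,atime tank
      # watch ARC activity while the Samba transfer runs
      arcstat 1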
  7. Proxmox cluster - disk layout for ceph

    Hi, I plan to build my first Ceph cluster and have some newbie questions. In the beginning I will start with 5 nodes, and plan to reach 50 nodes. Those nodes are quite old (E3 CPU, 16 GB RAM, 2x 1 Gbps network), so I am thinking of gaining performance by adding more nodes rather than upgrading RAM or CPU. I...
  8. pbs client backup performance tuning (DerDanilo)

    I just started testing the pbs backup client for some advanced backup scenarios. One question, of course, is how to get the maximum performance out of the server that creates the backups. In multiple larger infrastructures there are so-called 'backup workers' (VMs) which have plenty of CPU and RAM as...
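
    For reference, a typical client invocation looks like the sketch below; the repository user, host and datastore names are hypothetical:

      # back up the root filesystem as a pxar archive to a PBS datastore
      proxmox-backup-client backup root.pxar:/ \
          --repository backup@pbs@pbs.example.com:datastore1
      # list backup groups in the repository
      proxmox-backup-client list \
          --repository backup@pbs@pbs.example.com:datastore1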
  9. Windows VM really Bad Memory Performance!

    Hello there, lovely people. So, as the title says, memory performance is really bad. I have been trying to debug this for 3 or 4 weeks now and I'm all out of ideas. In a Linux VM I get around 24 GB/s with 1M block size, which is around the maximum my board/system can handle. I used the Phoronix Test Suite as a...
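
    A quick way to reproduce that ~24 GB/s Linux baseline outside the Phoronix suite is sysbench's memory test; a hedged sketch (sizes are illustrative):

      # sequential memory writes with 1M blocks, 100G total transferred
      sysbench memory --memory-block-size=1M --memory-total-size=100G run
      # read direction for comparison
      sysbench memory --memory-oper=read --memory-block-size=1M \
          --memory-total-size=100G run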
  10. New to Proxmox/Ceph - performance question

    I am new to Proxmox/Ceph and looking into some performance issues. 5 OSD nodes and 3 monitor nodes; cluster VLAN - 10.111.40.0/24. OSD node: CPU - AMD EPYC 2144G (64 cores); memory - 256 GB; storage - Dell 3.2 TB NVMe x 10; network - 40 Gb for the Ceph cluster network, 1 Gb for Proxmox mgmt. MON node: CPU -...
  11. General questions

    Hello everyone, I recently opened a topic about "ZFS Speicher" and found that there are quite a few tricks you should keep in mind with Proxmox in order not to waste memory or resources unnecessarily. After that, I went through my system and have a few...
  12. PMG Suitability and recommendations for customer / prospect

    Hello everyone, I have an open ticket with support on this, but I also wanted to get some feedback from the PMG community. We have a customer that is considering using Proxmox Mail Gateway for their monthly invoice batching. This is mission-critical email that needs to go out without fail in...
  13. Very bad iSCSI performance despite no config change

    Hello! I have a small cluster with my VMs living on LVM over iSCSI on an HP MSA 2050 SAN. I built it back in January and it was nice and quick - I didn't even bother with multipath or any tuning because even with 10+ VMs running, it was fast enough. I left it running, doing nothing, until this...
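
    Even when performance was once acceptable without it, the path and session state is worth verifying on an MSA-backed setup. A hedged sketch of the first checks (nothing here changes the configuration):

      # confirm the multipath tools are installed and see current path state
      apt install multipath-tools
      multipath -ll
      # list iSCSI sessions and their negotiated parameters
      iscsiadm -m session -P 3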
  14. Very slow SERVER and VMs

    Hello dear community. Server specs: https://prnt.sc/s3hnb0 (plus a single 2 TB HDD). Configuration of a default VM: https://prnt.sc/s3hq8c. Black screen: https://i.imgur.com/MJUMOrD.png. I started having serious problems on my server once I was running quite a few VMs. I have...
  15. Very poor write speeds with Samsung 750 EVO SSD

    Hi, while working in a VM and installing some stuff, I noticed that disk writes are slower than they used to be in the past when running an OS bare-metal on the same hardware without having Proxmox in between. After taking a deeper look with zpool iostat 2 I saw that the write throughput never...
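
    Consumer SSDs like the 750 EVO often collapse under the synchronous writes ZFS issues. A quick hedged way to see this, run against a scratch file on the pool (the path is hypothetical):

      # 4k synchronous writes: consumer SSDs without power-loss protection
      # typically drop to a few MB/s here
      fio --name=syncwrite --filename=/rpool/fio.test --size=1G \
          --rw=write --bs=4k --fsync=1 --group_reporting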
  16. File system in VM (William Edwards)

    I am trying to decide between using XFS or EXT4 inside KVM VMs. My goal is not to over-optimise at an early stage, but I want to make an informed file system decision and stick with it. Situation: Ceph as backend storage; SSD storage; writeback cache on the VM disk; no LVM inside the VM; CloudLinux 7...
  17. CPU Performance

    Hi, which processor type should I choose for the Ryzen 9 3900X processor on the VM Create page? Performance in the VM is very low compared to the physical processor.
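
    The usual suggestion in threads like this is the host CPU type, which passes the physical CPU's feature flags through to the guest. A minimal sketch (VM ID 100 is hypothetical):

      # expose the host CPU (all Ryzen flags) to the guest
      qm set 100 --cpu host
      # or equivalently in /etc/pve/qemu-server/100.conf:
      #   cpu: host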
  18. Performance issue with Ceph under Proxmox 6

    Hi community, we have a server cluster consisting of 3 nodes, each with an EPYC 7402P 24-core CPU, 6 Intel enterprise SSDs (4620) and 256 GB RAM. We also have a 10 Gbit NIC for Ceph. SSD performance alone is fine, jumbo frames are enabled, and iperf also gives reasonable results in terms of...
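
    When the network and raw SSDs both check out, benchmarking individual OSDs can show whether the slowdown is inside Ceph itself; a hedged sketch:

      # write 1 GiB through a single OSD's full datapath (repeat per OSD)
      ceph tell osd.0 bench
      # compare latencies across all OSDs to spot outliers
      ceph osd perf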
  19. openvswitch vs linux bridge performance (mir)

    Anybody here aware of performance comparison tests made lately between openvswitch and linux bridge?
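
    Lacking recent published numbers, a simple way to compare the two on your own hardware is an iperf3 run between two VMs on the same host, repeated once per bridge type. A hedged sketch (the IP address is hypothetical):

      # in VM A (server side)
      iperf3 -s
      # in VM B (client side), 30-second run with 4 parallel streams
      iperf3 -c 192.168.1.10 -t 30 -P 4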
  20. Does Proxmox need fast storage?

    Hi, I have a simple question which I would like to share because I'm interested in your point of view. On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA). My first choice could be to install Proxmox on fast storage and use it also for storing virtual...
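
    One way to ground this decision is to measure what each candidate disk actually delivers for the host's own workload with pveperf; a minimal sketch (the second mountpoint is hypothetical):

      # fsync rate and buffered reads for the root filesystem
      pveperf /
      # compare against a mountpoint on the slow storage
      pveperf /mnt/sas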
