Performance

  1. [SOLVED] SSD performance issue on ext4

    Hi All, I installed Proxmox (using the ext4 filesystem) on a 512GB SATA SSD (a used consumer-grade SanDisk SD8SB8U512G1001) to evaluate it. I ran some benchmarks using fio to get an idea of how fast it is. Before installing Proxmox, I benchmarked the disk with PassMark's PerformanceTest on Win10...
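The post doesn't include the exact fio invocation; a minimal job file for a quick SSD baseline might look like the sketch below (all values are illustrative, not the poster's settings, and the test directory is a placeholder):

```ini
; quick-ssd-baseline.fio -- illustrative job file, not the poster's actual settings
[global]
ioengine=libaio       ; Linux async I/O
direct=1              ; bypass the page cache so the SSD itself is measured
size=1G
runtime=30
time_based=1
directory=/mnt/test   ; placeholder mount point on the SSD under test

[seqread-1m]
rw=read
bs=1M
iodepth=8

[randwrite-4k]
stonewall             ; start only after the sequential job finishes
rw=randwrite
bs=4k
iodepth=32
```

Run with `fio quick-ssd-baseline.fio`; the sequential job approximates spec-sheet throughput numbers while the 4k random-write job is where consumer SATA SSDs typically fall short.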
  2. Slow performance on USB ports

    I'm having really poor performance when copying data from one physical USB-C disk to another physical USB-A disk plugged into the same Proxmox VE-system. The rsync below is only at 0.85kB/s. Filesystems are both ext4 and the disks are created as VM-disks: directories and raw. In the VM doing...
  3. Best performance

    Hi, I want to use the best-performance configuration. I need help finding out which VM configuration is best for me. By best performance I mean drivers that use little CPU time. I will use local storage with a separate disk. Which filesystem is best for qcow2 virtual machines on SATA? All VMs...
  4. [SOLVED] going for max speed with proxmox 7; how to do it?

    I'd like to set up Proxmox 7 to be as fast as it can possibly be with the hardware I have and what I am considering getting. EDIT: (this will be for non-critical, non-server-related workloads) Edit 2: I would like to have a dedicated VM to pass GPUs to so that I can donate to the Folding@home...
  5. VM CPU Performance

    Hi, I've noticed that the CPU performance of the VMs is well below what it should be. I have an older machine with a slower CPU than this newer one, and its VM delivers on Geekbench what it should deliver. A VPS on the newer one has less performance than the previous generation...
  6. IO performance issue with vzdump since upgrade to version 7

    Since updating to Proxmox version 7, we have not been able to back up a single VM (production affected). The issue is that vzdump slows down and VMs on the Proxmox nodes become unresponsive. I was able to get 450MB/s transfers using rsync to the VM locally to test access to the VM drives, which go...
  7. Poor random read/write performance: RAID10 ZFS, 4x WD Black SN750 500 GB

    Hi everyone, I'm completely new to Proxmox. Until now I ran a home server on Hyper-V and have now switched to Proxmox. Since I only have a small 2U mini server, I used 4 NVMe drives (WD Black SN750 with 500 GB, PCIe 3.0 x4) and on these...
  8. Slow Performance on Server 2016

    I'm having severe slowness on a VM with Server 2016. I'm not familiar with Proxmox so any advice would help. The VM is running SQL & RemoteApp. There are about 30 users RDPing into this server and I've thrown all the resources I can at it. The server and the programs on the server just crawl...
  9. File level backup for 100.000.000 files

    We have a large volume that we need to back up containing 100.000.000 files, with a ∆/day of about 50.000 files (400GB). For the time being this filesystem is mounted directly in PBS using the kernel driver with mount -t ceph ip.srv.1,ip.srv.2,ip.srv.3,ip.srv.4:/ /mnt/mycephfs -o...
  10. Good Practice for Home Server

    Hi, I'm fairly new to Proxmox and Linux, so please excuse my noobiness. Objective: moving away from a Mac mini hosting SMB shares (the crooked Apple way), Time Machine backups, and running some Debian/Windows VMs via VirtualBox for homelab stuff. Moving towards a "real" (home) server with Debian...
  11. Slow VM on external CEPH Cluster

    Hello all, I have just set up an external Ceph cluster together with an external specialist. It is configured as follows: 16 OSDs, 1 pool, 32 PGs, 7.1TiB free storage on 4 nodes, each with 64GB RAM and 12-core processors, only NVMe SSDs & regular SSDs, connected in the cluster net with 10G, connected to...
  12. zfs read performance bottleneck?

    I'm trying to find out why ZFS is pretty slow when it comes to read performance. I have been testing with different systems, disks and settings. Testing directly on the disk I'm able to achieve some reasonable numbers, not far from the spec sheet => 400-650k IOPS (P4510 and some Samsung-based HPE)...
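Direct-on-disk numbers in that range typically require a lot of outstanding I/O; a fio job along the lines of the sketch below (device name and queue depths are placeholders, not from the thread) is a common way to approach spec-sheet random-read IOPS on NVMe:

```ini
; illustrative raw-device random-read job; device name is a placeholder
[global]
ioengine=libaio
direct=1
runtime=60
time_based=1
group_reporting=1

[randread-4k]
filename=/dev/nvme0n1   ; read-only workload, but still point this at a test disk
rw=randread
bs=4k
iodepth=128
numjobs=4               ; ~512 in-flight I/Os total, needed to saturate fast NVMe
```

Comparing this raw-device result against the same job run on a file inside a ZFS dataset isolates how much of the gap comes from the filesystem layer (recordsize, checksumming, ARC behavior) rather than the disk.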
  13. Ceph Performance Understanding

    I set up a Proxmox cluster with 3 servers (Intel Xeon E5-2673 and 192 GB RAM each). There are 2 Ceph pools configured on them, separated into an NVMe and an SSD pool through CRUSH rules. The public_network uses a dedicated 10 GBit network while the cluster_network uses a dedicated 40...
  14. error occurred during live-restore: MAX 8 vcpus allowed per VM on this node

    Thanks for the PVE 6.4 release! The live-restore feature is especially interesting to me, because I've always looked for ways to make restores faster in order to keep disaster recovery times to a minimum. Situation: the main node has 16 cores / 32 threads; VM 101 has 32 vCPUs, because the database...
  15. ZFS Performance Questions on HDDs

    Hello, I'm running a server with 2 x 8 TB HDDs and 1 x 240GB SSD with the following config. # zpool status pool: rpool state: ONLINE scan: scrub repaired 0B in 0 days 22:10:56 with 0 errors on Sun Apr 11 22:34:58 2021 config: NAME STATE...
  16. ZFS: High CPU load

    Hi, I still had an old board with an Intel J4005 lying around and put Proxmox on it as a test system. Proxmox itself runs on an NVMe SSD; in addition I added another SATA SSD and an HDD, each as a ZFS single disk. During a test via Samba I noticed that...
  17. Proxmox cluster - disk layout for ceph

    Hi, I plan to build my first Ceph cluster and have some newbie questions. In the beginning I will start with 5 nodes and plan to reach 50 nodes. Those nodes are quite old (E3 CPU, 16GB RAM, 2x1Gbps network), so I intend to gain performance by adding more nodes rather than upgrading RAM or CPU. I...
  18. pbs client backup performance tuning

    I just started testing the pbs backup client for some advanced backup scenarios. One question of course is how to get the maximum performance out of the server that creates backups. In multiple larger infrastructures there are so called 'backupworkers' (VMs) who have plenty of CPU and RAM as...
  19. Windows VM really Bad Memory Performance!

    Hello there, lovely people. So, as the title says, memory performance is really bad. I have been trying to debug this for 3 or 4 weeks now and I'm all out of ideas. In a Linux VM I get around 24GB/s with 1M BS, which is around the maximum my board/system can handle. I used the Phoronix Test Suite as a...
  20. New to Proxmox/Ceph - performance question

    I am new to Proxmox/Ceph and looking into some performance issues. 5 OSD nodes and 3 monitor nodes. Cluster VLAN - 10.111.40.0/24. OSD node: CPU - AMD EPYC 2144G (64 cores), Memory - 256GB, Storage - Dell 3.2TB NVMe x 10, Network - 40GB for Ceph cluster, 1GB for Proxmox mgmt. MON node: CPU -...
