Bad Disk Performance

sebastianke
Apr 8, 2025
Hi,
I have the following setup:
12 x AMD Ryzen 5 3600X 6-Core
32 GB RAM
Boot: ZFS mirror, 2x 120 GB SanDisk
VM storage: M.2 NVMe 2 TB, LVM
VM storage 2: SanDisk 1 TB, LVM
Gigabit Ethernet

Problem:
Slow disk performance during storage migration or backup.
Backup write speeds are around 10 MB/s.
Storage migration takes 5 hours for 500 GB.

IO delay is at 25% during backup and 80% during storage migration.
 
Hi,

first off, please provide the output of pveversion -v (in code tags, please!).

VM storage: M.2 NVME 2 TB LVM
VM Storage 2: SanDisk 1 TB LVM
You don't mention the exact models of these disks.
Are these enterprise-class SSDs? Have you checked the SMART health of these disks?
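For example, from the node's shell (the device names below are only placeholders, adjust them to your disks):
Code:
# list block devices with their model names
lsblk -o NAME,MODEL,SIZE,TYPE
# SMART health of the NVMe drive (placeholder device name)
smartctl -a /dev/nvme0
# SMART health of a SATA SSD (placeholder device name)
smartctl -a /dev/sda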

Slow disk performance during storage migration or backup.
Backup write speeds are around 10 MB/s.
Storage migration takes 5 hours for 500 GB.
What's your backup configuration exactly? A vzdump backup? Or to PBS on another machine, for example? Is it over the network?
Same for the storage migration: do you mean migration between those two LVM storages? Do you use thick or thin LVM? Etc.
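You could also post the relevant configuration, for example (these are the standard locations on a PVE node):
Code:
# node-wide vzdump defaults
cat /etc/vzdump.conf
# storage definitions (also shows thin vs. thick LVM)
cat /etc/pve/storage.cfg
# status and usage of the configured storages
pvesm status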
 
Code:
root@pve:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-9-pve)
pve-manager: 8.3.5 (running version: 8.3.5/dac3aa88bac3f300)
proxmox-kernel-helper: 8.1.1
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.8: 6.8.12-9
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
proxmox-kernel-6.8.12-8-pve-signed: 6.8.12-8
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve1
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.1
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.0
libpve-guest-common-perl: 5.2.0
libpve-http-server-perl: 5.2.0
libpve-network-perl: 0.10.1
libpve-rs-perl: 0.9.3
libpve-storage-perl: 8.3.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.3.7-1
proxmox-backup-file-restore: 3.3.7-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.3.8
pve-cluster: 8.1.0
pve-container: 5.2.5
pve-docs: 8.3.1
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.15-3
pve-ha-manager: 4.0.6
pve-i18n: 3.4.1
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-1
qemu-server: 8.3.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2
root@pve:~#


The first SSD is a CT2000P3PSSD8 (Crucial P3 Plus 2 TB).
The second is a SanDisk SSD Plus 1 TB.
Boot mirror: SanDisk SD6SB1M-128G-1006 and SanDisk SD7SB6S-128G-1006.

All thick LVM.
SMART says everything is OK.
Any storage operation is slow, local or over the network, it does not matter. The backup is vzdump without PBS.
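For reference, the backup is started roughly like this (the VM ID and storage name are placeholders):
Code:
# manual vzdump backup of one VM to a local directory storage
vzdump 100 --storage local --mode snapshot --compress zstd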
 
The first SSD is a CT2000P3PSSD8 (Crucial P3 Plus 2 TB).
That one uses QLC flash memory, which can become slower than an old HDD under sustained writes. Problems with it have been reported here before (though those may have been with ZFS).
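You can see that drop yourself with a sustained write test, for example with fio (the test file path and size are only examples; pick a path on the affected storage and make the size large enough to exhaust the drive's SLC cache):
Code:
# sustained sequential write to a test file (does not touch existing data)
fio --name=seqwrite --filename=/var/lib/vz/fio-testfile --size=50G \
    --rw=write --bs=1M --ioengine=libaio --iodepth=16 --direct=1
# remove the test file afterwards
rm /var/lib/vz/fio-testfile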
The second is a SanDisk SSD Plus 1 TB.
I'm not sure about that old model.
Boot mirror: SanDisk SD6SB1M-128G-1006 and SanDisk SD7SB6S-128G-1006.
Those are pretty old models, but at least MLC (no experience with them at all).
 
Enterprise drives with PLP. You can use the same forum search for QLC to find lots of recommendations (which always follow once people accept the disappointment).
Would it be okay for a performance increase to use an SSD with TLC cache?
Does it need to have PLP?
E.g. a Samsung 990 Pro?
 
Does it need to have PLP?
To be safe and performant: YES

That said...
E.g. a Samsung 990 Pro?
... I am using them too. I know their disadvantages and I accept the result. I won't come here to the forum and ask for help if/when one (or both!) of them dies. And I will never recommend storage w/o PLP.

My point is: it is your decision. We just try hard to tell everybody that there may be consequences of bad decisions.
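If you want numbers for the performance side, the relevant metric is synchronous write performance, which is where drives without PLP collapse. A rough fio sketch (the test file path is only an example):
Code:
# 4k random writes with an fsync after every write - drives with PLP can
# acknowledge these from protected cache, consumer drives must flush to flash
fio --name=syncwrite --filename=/var/lib/vz/fio-testfile --size=4G \
    --rw=randwrite --bs=4k --fsync=1 --ioengine=libaio --iodepth=1 --direct=1
# remove the test file afterwards
rm /var/lib/vz/fio-testfile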
 
Thanks for your reply. I have now bought a Micron 5300 Pro 1.92 TB 2.5" SATA III server/enterprise SSD.
And once it performs well, I will back up daily to my off-site NAS.
So I am safe :)
Thanks, guys.
 