Hi, I have a 2-node cluster connected via DRBD. I want to do an offline move of an old raw disk to a disk on the same node, but the process is very slow: I only get around 2.8 MB/s, which makes moving a 192G disk a tedious task.
I also see >15% IO delay on the node while the qemu-img command is running.
The move is from a hardware RAID1 made up of two HGST WD Ultrastar HUS726T4TALE6L4 4TB 7200 RPM 512e SATA 6Gb/s drives to a single drive of the same model.
The RAID controller is an LSI MegaRAID 9341-4i.
I also tried the move with all guests shut down, but that made no difference.
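As far as I can tell, the offline move boils down to a raw-to-raw qemu-img convert between the two logical volumes, roughly like the line below (the paths and exact flags are my guess for illustration, not copied from the task log):
Code:
# approximate shape of the copy; the source VG name is a placeholder
qemu-img convert -p -f raw -O raw /dev/<drbd-vg>/vm-100-disk-0 /dev/backupvg/vm-100-disk-0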
Package versions
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-7
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
proxmox-kernel-6.2.16-18-pve: 6.2.16-18
pve-kernel-5.15.126-1-pve: 5.15.126-1
pve-kernel-5.15.116-1-pve: 5.15.116-1
pve-kernel-5.15.108-1-pve: 5.15.108-2
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2
/etc/pve/storage.cfg
Code:
dir: local
path /var/lib/vz
content vztmpl,iso,backup
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
lvm: drbdlvm
vgname REDACTED
content images,rootdir
shared 1
lvm: backup
vgname backupvg
content images,rootdir
shared 0
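To rule out one side or the other, I can benchmark the source and destination storages separately with dd; a sketch under the assumption that the source LV sits in the redacted DRBD volume group, using a throwaway LV on backupvg for the write test (names like "speedtest" are just examples):
Code:
# sequential read from the source LV on the RAID1/DRBD side (non-destructive)
dd if=/dev/<drbd-vg>/vm-100-disk-0 of=/dev/null bs=1M count=4096 iflag=direct

# scratch LV on the single-disk VG for a sequential write test, removed afterwards
lvcreate -L 8G -n speedtest backupvg
dd if=/dev/zero of=/dev/backupvg/speedtest bs=1M count=4096 oflag=direct
lvremove -y backupvg/speedtest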
qm config 100
Code:
boot: order=scsi0;ide2
cores: 4
ide2: none,media=cdrom
memory: 16384
meta: creation-qemu=7.0.0,ctime=1668077080
name: Ver6.10
net0: virtio=2A:7B:5F:91:FE:F6,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: drbdlvm:vm-100-disk-0,size=192G
scsihw: pvscsi
smbios1: uuid=dda29420-a1d8-424e-8688-4e1933cb6bb8
sockets: 1
vmgenid: cc55f130-786d-42d8-9b81-bd0e037c23bc
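While the qemu-img task is running I also plan to watch both devices to see which one is saturating; a sketch of the commands I have in mind (iostat comes from the sysstat package, and drbdadm status assumes DRBD 9; older versions expose the same information in /proc/drbd):
Code:
# which physical devices back each logical volume
lvs -o +devices

# extended per-device utilisation and latency, refreshed every second
iostat -xm 1

# DRBD resource state, in case replication traffic is slowing down reads
drbdadm status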