Importing virtual machine disks to Ceph via the Proxmox VE GUI is very slow. How can this be solved?

Please provide the output of pveversion -v.

What exactly do you mean by `importing virtual machine disks on GUI`?
 
[Screenshot: traffic between l6.com:673 and l6.com:nfs during the GUI move: => 280KB 299KB 292KB, <= 132MB 125MB 122MB]

This is a direct disk-image migration from NFS to RBD in the GUI, running at a rate of about 150 MB/s.
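For comparison, the same move can also be started from the CLI instead of the GUI. A minimal sketch, assuming VM ID 100, disk slot scsi0 and a target RBD storage named ceph-rbd (these names are placeholders, not taken from the post):

# live-move the disk of VM 100 from its current storage to the RBD storage
qm move_disk 100 scsi0 ceph-rbd --delete 1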
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

[Screenshot: traffic during the direct RBD import: l6.com:nfs => l4.com:716, 461MB 391MB 382MB]

Importing disk images directly via RBD is basically stable at 300 MB/s+, and the maximum speed comes close to the ~510 MB/s limit of the SATA disks. However, migrating disks through the GUI cannot reach such speeds; it does not even reach 300 MB/s. I recommend the developers test this on their own hardware. My environment: 5 physical servers, each with 4x 1TB HDD, 1x 512GB SSD for WAL/DB, and 1x 800GB SSD cache.
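For reference, a direct import that bypasses the GUI move can be done roughly like this. A sketch only, assuming a qcow2 source image on the NFS storage, VM ID 100 and an RBD storage/pool named ceph-vm (all placeholders):

# attach the image to VM 100 as a new disk on the RBD storage
qm importdisk 100 /mnt/pve/nfs/images/100/vm-100-disk-0.qcow2 ceph-vm
# or write the image straight into RBD with qemu-img
qemu-img convert -p -O raw /mnt/pve/nfs/images/100/vm-100-disk-0.qcow2 rbd:ceph-vm/vm-100-disk-0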
 
proxmox-ve: 7.0-2 (running kernel: 5.11.22-4-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-7
pve-kernel-helper: 7.0-7
pve-kernel-5.11.22-4-pve: 5.11.22-8
ceph: 16.2.6-pve2
ceph-fuse: 16.2.6-pve2
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-6
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.9-2
proxmox-backup-file-restore: 2.0.9-2
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.3-1
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
you are comparing apples to oranges here - raw I/O performance vs a live-replication of a disk image currently used by a VM. the latter has a lot of overhead.
 
Sorry, you may have misunderstood me. I just want to ask whether there is a way to optimize this. Even if the other I/O consumption is taken into account, that overhead should have a measurable value, and we should be able to keep it under control.
 
it's expected that live-cloning or live-moving a disk is slower than doing the same operation offline, which is likely slower than using some storage-specific tool to copy/import a disk image.
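To illustrate the last two options, an offline move and a storage-native import might look like this. A sketch only; VM ID 100, disk scsi0, the storage/pool names and the image path are assumptions:

# offline move: stop the VM first so no live mirroring is needed
qm shutdown 100
qm move_disk 100 scsi0 ceph-rbd --delete 1
qm start 100
# storage-specific tool: copy a raw image into Ceph with the native RBD tooling
rbd import /mnt/pve/nfs/images/100/vm-100-disk-0.raw ceph-vm/vm-100-disk-0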
 
As you said, it is true that the data goes through a conversion step in the middle, which makes it very slow; when writing to RBD directly, the rate is normal. Also, regarding CephFS: with the default parameters it is not allowed to store disk images. I would recommend allowing that, as performance could roughly double, and the overhead during PG backfilling would also be reduced.
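For context, which content types each configured storage currently allows can be checked with standard PVE tooling (a sketch):

# overview of the configured storages
pvesm status
# the content lines show what each storage is allowed to hold (e.g. images, iso, backup)
cat /etc/pve/storage.cfg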