Clone VM on ZFS Raid10 very slow

Currently looking to set up Proxmox on a similar system, and am wondering if you've made any progress?
Yes, we did.

You have to set the NVMe polling queues to the number of your disks, then set io_poll_delay to hybrid mode.
Code:
module/nvme/parameters/poll_queues =    
block/nvme0n1/queue/io_poll_delay = 0
These are the settings for the NVMe subsystem.
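
A minimal sketch of how these are commonly applied, assuming the paths live under /sys and assuming 8 NVMe disks (adjust poll_queues to your own disk count):
Code:
# /etc/modprobe.d/nvme.conf -- example: 8 NVMe disks, adjust to your count
options nvme poll_queues=8

# rebuild the initramfs and reboot so the module option takes effect
update-initramfs -u -k all

# afterwards, enable hybrid polling per device (0 = hybrid mode)
echo 0 > /sys/block/nvme0n1/queue/io_poll_delay

# verify the queue count the driver picked up
cat /sys/module/nvme/parameters/poll_queues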

Also, ensure that you have not overcommitted PCIe lanes to the NVMe drives.
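
If you want to check whether a drive actually negotiated its full lane count, something like this works (the PCI address is an example, yours will differ; run as root so the capabilities are readable):
Code:
# list the NVMe controllers and their PCI addresses
lspci | grep -i 'non-volatile'

# compare the negotiated link (LnkSta) against what the card supports (LnkCap)
lspci -vv -s 41:00.0 | grep -E 'LnkCap:|LnkSta:'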

For the ZFS parameters you have to experiment a bit to find out which values are correct for your setup.

Here is the list of the parameters:
Code:
zfs_vdev_sync_write_max_active 
zfs_vdev_sync_write_min_active 
zfs_vdev_async_write_max_active   
zfs_vdev_async_write_min_active   
zfs_vdev_sync_read_max_active   
zfs_vdev_sync_read_min_active   
zfs_vdev_async_read_max_active   
zfs_vdev_async_read_min_active   
zfs_vdev_removal_max_active   
zfs_vdev_removal_min_active   
zvol_threads       
zfs_compressed_arc_enabled=0   
zfs_arc_max
zfs_arc_min
zfs_arc_meta_limit_percent
spl_taskq_thread_dynamic=0
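
These can be made persistent via a modprobe config file. A minimal sketch, assuming /etc/modprobe.d/zfs.conf with placeholder values for illustration only (do not copy them blindly, tune them on your own hardware):
Code:
# /etc/modprobe.d/zfs.conf -- placeholder values, tune per system
options zfs zfs_compressed_arc_enabled=0
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=17179869184
options spl spl_taskq_thread_dynamic=0

# rebuild the initramfs and reboot so the options are picked up at module load
update-initramfs -u -k all
For quick testing, most of the zfs_* values can also be changed at runtime under /sys/module/zfs/parameters/ (the spl_* ones live under /sys/module/spl/parameters/).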

It is also important to disable compression on the pool.
If this is not done, the module settings do not work.
Compression with such fast devices costs too much memory bandwidth.

Code:
zfs set compression=off <pool>
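
To check it afterwards (just a quick verification, <pool> is your pool name):
Code:
zfs get compression <pool>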
 
Just out of curiosity, and forgive my ignorance, what do you mean by "directly connected"?
This has nothing to do with a cable.
There are PCIe switches that can multiplex, for instance, 4 lanes out to 16 PCIe lanes.
But this is overcommitment and can cause problems.
In such multiplexed setups, every NVMe device still gets its own U.2 connector.
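
One way to see whether your drives sit behind such a switch is to look at the PCIe topology tree (no special tooling assumed, just lspci):
Code:
# show the PCIe device tree; NVMe controllers hanging off a shared
# downstream bridge/switch instead of their own root port indicate
# multiplexed (overcommitted) lanes
lspci -tv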
 

Forgive me for resurrecting this thread, but I am basically in the same boat.
Almost identical Dell / EPYC, running 8x 2 TB Microns in one pool.

Now the weird thing is that with compression off, my CPU load goes over 50% on benchmarks within the VM, while it stays down at around 30-35% with compression on. Even better, with compression the CPU spikes are much, much shorter.

Also, the best-performing volblocksize on an 8x RAID10 pool seems to be 16k.
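
For anyone wanting to try that: volblocksize can only be set when a zvol is created, not changed afterwards. A quick sketch with made-up names and sizes:
Code:
# create a test zvol with 16k volblocksize (pool/dataset name is just an example)
zfs create -V 32G -o volblocksize=16k rpool/data/vm-999-disk-0

# verify
zfs get volblocksize rpool/data/vm-999-disk-0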
 
