I would definitely like to get smarter on this subject, because I observed behavior similar to what you're saying here. When looking at the VM summary, I can see the 2-3 GB of savings. But when looking at the PVE node summary, the RAM savings don't translate. I thought this might be ZFS eating...
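For anyone else chasing this: the ZFS ARC shows up as plain used RAM in the node summary, which can hide per-VM memory savings. A quick way to check how much the ARC is actually holding (standard OpenZFS tooling, nothing specific to my setup):

```
# Current ARC size vs. its configured ceiling, from the kernel stats
awk '/^size|^c_max/ {printf "%s: %.2f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats

# Or, if arc_summary from the zfs tools is installed:
arc_summary -s arc
```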
I upgraded to PVE 7.4-3, enabled persistent L2ARC and IO threads. With no more disk IO bottleneck, I could see some decent CPU utilization. I just had to conduct every test twice to ensure repeatability now that L2ARC is persistent. All 10 VMs were done booting and benchmarking after about 5...
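In case anyone wants to replicate the setup, both knobs are standard: persistent L2ARC is a ZFS module parameter (OpenZFS 2.0+), and IO threads are a per-disk VM option. A sketch, with the VM ID and storage name as placeholders:

```
# Persistent L2ARC: 1 means the L2ARC is rebuilt from the cache device after reboot
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled

# Pin the setting across reboots
echo "options zfs l2arc_rebuild_enabled=1" >> /etc/modprobe.d/zfs.conf
update-initramfs -u

# IO threads need the virtio-scsi-single controller, then iothread=1 per disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1
```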
I benchmarked the boot-up time of 15 Win10 VMs, including auto-login and a small PowerShell workload test script that records the benchmark completion time for each VM. The VMs are set to auto-boot, and the PVE node is rebooted to kick off each test. The benchmark completion times of all 15 VMs are...
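For reference, the auto-boot part is just the standard PVE onboot/startup options; something like this per VM (VM IDs and delays are placeholders):

```
# Start every test VM automatically when the node boots
for vmid in $(seq 101 115); do
    qm set "$vmid" --onboot 1
done

# Optionally stagger startup (order=N, up=seconds of delay before the next VM)
qm set 101 --startup order=1,up=10
```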
I'm having the "Failed to start Import ZFS pool [pool]" issue on 3 of our nodes. ZFS works fine, but the error is disconcerting. Here's what I found after some testing on a PVE 7.0-11 node that hadn't had ZFS set up on it before. I haven't checked PVE 7.2, so not sure if this is still relevant...
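If anyone wants to poke at the same error, these are the kinds of checks involved (pool name "tank" is a placeholder; the units are the stock zfs-import services):

```
# Which ZFS import unit actually failed
systemctl --failed | grep -i zfs
systemctl list-units --all 'zfs-import*'

# Make sure the cachefile knows about the pool, then inspect it
zpool set cachefile=/etc/zfs/zpool.cache tank
zdb -C -U /etc/zfs/zpool.cache
```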
Just had this issue in PVE 7.0-11. I added some SSDs with 520-byte blocks. The pvestatd service was still running, and restarting it did nothing. Once the block size was changed to 4k, the gray question mark went away several minutes later. Here are some commands I found helpful when...
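A sketch of the sort of commands that help here (assuming sg3_utils and smartmontools are installed; /dev/sdX is a placeholder):

```
# Check logical/physical sector sizes across all disks
lsblk -o NAME,LOG-SEC,PHY-SEC
smartctl -i /dev/sdX | grep -i sector

# Reformat a 520-byte drive to 4096-byte sectors (DESTROYS ALL DATA, can take hours)
sg_format --format --size=4096 /dev/sdX
```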
@tonci
If there's no bonding, then this shouldn't be a network path limitation issue. If you're restoring multiple VMs to the same ZFS pool and getting full link speed, then the bottleneck isn't with the destination storage volume (it can clearly take it!).
Here are some ideas, but you may have...
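While the parallel restores run, it's worth watching both sides to see which saturates first; for example (interface and pool names are placeholders):

```
# Network side: is the link actually full?
iftop -i eno1

# Storage side: is the destination zpool keeping up? (1-second interval)
zpool iostat -v tank 1
```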
When you're restoring multiple VMs at once, are you restoring to the same PVE node and to the same zpool? Also, are you using any kind of link aggregation or NIC bonding?
(EDIT: Changed "primarycache" to "secondarycache", based on input from following post by Dunuin)
Great way of putting it, thanks!
I didn't even know about the "secondarycache=metadata" option. Thanks! It seems like a good intermediate solution. Since it's just a cache, we don't need to worry...
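For anyone else who hadn't seen the knob, it's a per-dataset (or per-zvol) property; a minimal example with placeholder names:

```
# Keep only metadata (not file data) on the L2ARC for this dataset
zfs set secondarycache=metadata tank/vmdata

# Verify the setting
zfs get secondarycache tank/vmdata
```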
I've been reading up on ZFS performance enhancements. We currently use ZFS on PVE to store the VM disks. It's my understanding that each VM disk is stored in its own zvol.
Looking at ways to improve VM performance, it seems a SLOG device will help with synchronous writes. Our read speed is good enough, so I'm not...
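For context, attaching a SLOG is a one-liner; a sketch assuming a mirrored pair of low-latency SSDs (device paths are placeholders), keeping in mind it only accelerates synchronous writes:

```
# Add a mirrored SLOG vdev to the pool
zpool add tank log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B

# Confirm the log vdev is attached
zpool status tank
```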
Throwing this solution up in case someone else runs into the issue. Scroll down to the bottom for the key takeaways and dev recommendations.
Environment
3x PVE 7.0-11 nodes clustered together
Every node has a ZFS pool with a GlusterFS brick on it
Glusterd version 9.2
Gluster is configured in a...
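For anyone reproducing this layout, the basic per-node health checks look like (the brick dataset name is a placeholder):

```
# Gluster peer and volume health
gluster peer status
gluster volume status

# The ZFS dataset backing the local brick
zfs list -r tank/gluster
```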