Recent content by JonathonFS

  1. Boot problems - OPNsense - not a bootable disk

    When downloading OPNsense, the image type defaults to vga, which is for creating a USB boot drive. This downloads an img file, which looks like what you have. I made the same mistake. Instead, select image type dvd to download a bootable iso file. Of course, this needs to be extracted as others...
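    A minimal sketch of the download-and-extract step described above. The release filename is an example only; substitute the actual version you downloaded:

    ```shell
    # The dvd image type arrives bz2-compressed; decompress to get the bootable .iso.
    # Filename is a placeholder; check the real release name on the mirror.
    bunzip2 OPNsense-dvd-amd64.iso.bz2
    # The resulting .iso can be attached to a VM or written to install media.
    ```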
  2. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    I would definitely like to get smarter on this subject, because I observed behavior similar to what you're saying here. When looking at the VM summary, I can see the 2-3 GB of savings. But when looking at the PVE node summary, the RAM savings don't translate. I thought this might be ZFS eating...
  3. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    I upgraded to PVE 7.4-3, enabled persistent L2ARC and IO threads. With no more disk IO bottleneck, I could see some decent CPU utilization. I just had to conduct every test twice to ensure repeatability now that L2ARC is persistent. All 10 VMs were done booting and benchmarking after about 5...
  4. [SOLVED] Hyper-Threading vs No Hyper-Threading; Fixed vs Variable Memory

    I benchmarked the boot-up time of 15 Win10 VMs, including auto-login and a small PowerShell workload test script that records the benchmark completion time for each VM. The VMs are set to auto-boot, and the PVE node is rebooted to kick off each test. The benchmark completion times of all 15 VMs are...
  5. Failed to start Import ZFS pool

    I'm having the "Failed to start Import ZFS pool [pool]" issue on 3 of our nodes. ZFS works fine, but the error is disconcerting. Here's what I found after some testing on a PVE 7.0-11 node that hadn't had ZFS set up on it before. I haven't checked PVE 7.2, so not sure if this is still relevant...
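    For context, one commonly suggested remedy for this boot-time import error is to make sure the pool is recorded in the ZFS cache file the import service reads. A sketch, assuming the pool is named "tank":

    ```shell
    # Regenerate the cache file so zfs-import-cache.service can find the pool at boot.
    # "tank" is a placeholder pool name.
    zpool set cachefile=/etc/zfs/zpool.cache tank
    systemctl enable zfs-import-cache.service
    ```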
  6. Question Marks on Nodes and VMs

    Just had this issue in PVE 7.0-11. I added some SSDs with 520-byte blocks. The pvestatd service was still running, and restarting it did nothing. Once the block size was changed to 4k, the gray question mark went away several minutes later. Here are some commands I found helpful when...
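    The block-size change mentioned above is typically done with sg_format from the sg3-utils package. A sketch; the device path is a placeholder, and the operation destroys all data on the drive:

    ```shell
    # Low-level reformat a 520-byte-sector SAS drive to 4096-byte sectors.
    # WARNING: destroys all data. /dev/sdX is a placeholder.
    sg_format --format --size=4096 /dev/sdX
    # Verify the new logical block size afterwards:
    sg_readcap /dev/sdX
    ```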
  7. Does a zvol benefit from a Metadata Special device?

    @tonci If there's no bonding, then this shouldn't be a network path limitation issue. If you're restoring multiple VMs to the same ZFS pool and getting full link speed, then the bottleneck isn't with the destination storage volume (it can clearly take it!). Here are some ideas, but you may have...
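    Two quick checks for ruling the network path in or out, along the lines discussed above. Hostnames and interface names are placeholders:

    ```shell
    # Raw path throughput between source and destination nodes, 4 parallel streams.
    iperf3 -s                      # run on the destination PVE node
    iperf3 -c pve-dest -P 4        # run on the source node; "pve-dest" is a placeholder
    # If a bond is configured, this file shows its mode and member state:
    cat /proc/net/bonding/bond0
    ```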
  8. Does a zvol benefit from a Metadata Special device?

    When you're restoring multiple VMs at once, are you restoring to the same PVE node and to the same zpool? Also, are you using any kind of link aggregation or NIC bonding?
  9. Does a zvol benefit from a Metadata Special device?

    (EDIT: Changed "primarycache" to "secondarycache", based on input from following post by Dunuin) Great way of putting it, thanks! I didn't even know about the "secondarycache=metadata" option. Thanks! It seems like a good intermediate solution. Since it's just a cache, we don't need to worry...
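    The option referenced above is set per dataset or zvol; a sketch with placeholder names:

    ```shell
    # Restrict the L2ARC (secondary cache) to metadata only for one dataset.
    # "tank/vmdata" is a placeholder dataset name.
    zfs set secondarycache=metadata tank/vmdata
    zfs get secondarycache tank/vmdata    # confirm the setting took effect
    ```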
  10. Does a zvol benefit from a Metadata Special device?

    I've been reading up on ZFS performance enhancements. We currently use ZFS on PVE to store the VM disks. It's my understanding that each VM is stored in a zvol. Looking at ways to improve VM performance, it seems an SLOG will help with writes. Our read speed is good enough, so I'm not...
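    For reference, adding an SLOG to an existing pool is a single zpool command. A sketch; pool name and device paths are placeholders:

    ```shell
    # Attach a mirrored SLOG (separate intent log) to pool "tank".
    # Mirroring the log device protects in-flight sync writes if one SSD dies.
    zpool add tank log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B
    zpool status tank    # the "logs" section should now list the mirror
    ```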
  11. [SOLVED] Unable to Allocate Image on GlusterFS

    Throwing this solution up in case someone else runs into the issue. Scroll down to the bottom for the key takeaways and dev recommendations. Environment: 3x PVE 7.0-11 nodes clustered together; every node has a ZFS pool with a GlusterFS brick on it; Glusterd version 9.2; Gluster is configured in a...