Out of curiosity, is this still an issue? Is there a device name filter that can be adjusted?
Multipath for physical SAS redundancy is a must (multipathed JBOD cages).
Now it's clear. Evidently the naming is odd.
Now that you mention how Proxmox just manipulates the provisioning, I wonder how difficult it would be to implement something like VVols, commanding a storage array via API or CLI. That should enable shared disks via LUN sharing, and even allow advanced...
I wonder why ZFS is listed as not shared storage, but ZFS+iSCSI is explicitly listed as supported for shared storage.
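For what it's worth, the distinction seems to be that plain ZFS (`zfspool`) is a pool local to each node, while ZFS over iSCSI (`zfs`) has Proxmox drive a remote ZFS box and expose zvols as LUNs to all nodes, which is why only the latter is listed as shared. A hedged sketch of the two entries in /etc/pve/storage.cfg (pool, portal, target and provider values are placeholders for illustration):

```
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse

zfs: shared-zfs
        pool tank
        portal 192.168.1.50
        target iqn.2003-01.org.example.storage:target1
        iscsiprovider LIO
        content images
        sparse
```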
So drivers for major HBA vendors should be there, but it will be limited to the LVM backend?
I'd like to look into this once more. Is there any healthcheck possible between nodes? Right now I see 0 packets on port 4789 between nodes. Also, the virtual network works on a single node, but once I move a VM to the second node, connectivity is lost :/
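As a starting point for a healthcheck, something like the sketch below could confirm the peer VTEP is reachable and then watch for encapsulated traffic. The peer address and uplink interface name are assumptions; adjust them to your cluster:

```shell
#!/bin/sh
# Hedged sketch: minimal cross-node check for an EVPN/VXLAN setup.
# PEER is an assumption - set it to the other node's VTEP address.
PEER=${PEER:-127.0.0.1}

vxlan_peer_reachable() {
    # Plain L3 reachability to the other node's VTEP address
    ping -c 1 -W 2 "$1" >/dev/null 2>&1
}

if vxlan_peer_reachable "$PEER"; then
    echo "peer $PEER reachable"
else
    echo "peer $PEER unreachable"
fi

# To confirm encapsulated traffic actually flows, capture UDP 4789 on the
# uplink while a VM on the other node generates traffic (run as root,
# interface name vmbr0 is an assumption):
#   tcpdump -ni vmbr0 udp port 4789
```

If tcpdump stays silent while VMs talk, a firewall dropping UDP 4789 between the nodes is a likely suspect.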
Can VirGL be used with the NVIDIA-provided binary driver? I lost VirGL support after installing NVIDIA-Linux-x86_64-525.85.07-vgpu-kvm-custom.run:
Error:
TASK ERROR: no DRM render node detected (/dev/dri/renderD*), no GPU? - needed for 'virtio-gl' display
Modules:
root@bigiron:~# ls -l...
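A quick way to see what the error is complaining about is to check whether the kernel exposes a DRM render node at all. My understanding (an assumption, not confirmed by NVIDIA docs) is that the vGPU-flavoured KVM driver does not register a host DRM device, which would match the error:

```shell
# Hedged sketch: virtio-gl needs a DRM render node, so check what exists.
if ls /dev/dri/renderD* >/dev/null 2>&1; then
    echo "render node present"
else
    echo "no render node"
fi

# Check which GPU/DRM modules are actually loaded:
lsmod 2>/dev/null | grep -E 'nvidia|nouveau|virtio_gpu' || true
```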
Checking the documentation, it seems I won't have compression without ZFS. Thin provisioning should be about the same with LVM-thin vs. XFS+QCOW2; any insights on snapshot performance with LVM vs. QCOW2?
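For anyone comparing, these are the two mechanisms in question: an LVM-thin snapshot is copy-on-write inside the thin pool, while a qcow2 snapshot is copy-on-write inside the image file. A hedged sketch (volume group, volume and file names are made-up examples; the LVM part needs root and a real thin pool, so it is shown commented):

```shell
# LVM-thin snapshot (metadata-only operation inside the thin pool):
#   lvcreate -s -n snap1 vg0/vm-100-disk-0

# qcow2 internal snapshot of a throwaway image:
qemu-img create -f qcow2 /tmp/test.qcow2 1G
qemu-img snapshot -c snap1 /tmp/test.qcow2
qemu-img snapshot -l /tmp/test.qcow2
```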
Hello!
I've used ZFS datastores with plain SAS HBAs in the past and have been pretty happy with them, handling both protection and compression.
For a given setup, I have:
- OS boot: LSI SAS HBA + 1 x KINGSTON SA400S3 + 1 x SanDisk SD5SB2-1 + ZFS Mirror.
- Datastore storage: RAID controller...
It's a remote location, so I can't change the connectivity. I'll look into the IPsec alternative; the DCN team assures me the switches shouldn't mess with the VXLAN traffic.