The admonition against using gamer cards is twofold: they are usually very poorly provisioned with VRAM, AND they are mechanically unsuitable for cooling in a server chassis. As long as the OP's models fit in the VRAM (which on a 5090 is a not...
My guess is that you have a failed disk in your volume set, and it's the one that holds the boot partition.
You can use something like the SystemRescue live CD with ZFS support to boot the system and reassemble the volume from your survivors. If you can do that...
You are giving up much of what makes a Nimble useful and valuable by using it this way. If you map the LUNs to guests directly instead, you'll be able to gain all the snapshotting functionality as designed. Just be aware you'll need to orchestrate...
I'm surmising based on your zpool status output. You can (and should) see the events, and when they occurred, with something like
journalctl -p err | grep sd
but in a production environment you'd want to trap and alert on this in real time using an external health...
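As a minimal sketch of what "trap and alert" could look like, here's a Python watcher that filters the journal for block-device errors. The device pattern, error keywords, and the decision to just print matches are illustrative assumptions; in production you'd wire the matches into your actual alerting system (and you'd likely prefer ZED for ZFS-specific events).

```python
import re
import subprocess

# Illustrative patterns: sd* block devices plus common kernel error phrases.
# Extend DEV for nvme*, etc., to match your hardware.
DEV = re.compile(r"\bsd[a-z]+\b")
ERR = re.compile(r"I/O error|medium error|critical target error", re.IGNORECASE)

def find_disk_errors(lines):
    """Return journal lines that mention both a block device and an error phrase."""
    return [line for line in lines if DEV.search(line) and ERR.search(line)]

if __name__ == "__main__":
    out = subprocess.run(
        ["journalctl", "-p", "err", "--no-pager", "-o", "short-iso"],
        capture_output=True, text=True, check=False,
    ).stdout.splitlines()
    for line in find_disk_errors(out):
        print(line)  # in production, push this to your alerting pipeline instead
```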
I don't know what you're asking. zpool status is telling you how to "fix it."
As in, it's no longer fixable.
The lessons to draw from this are as follows:
1. If you are running this on a system without ECC: don't.
2. RAIDZ1 is a no-no. You had a...
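To put rough numbers on the RAIDZ1 point: once one disk dies, a resilver must read every surviving disk in full, and at consumer-class unrecoverable-read-error rates the odds of hitting a URE along the way are uncomfortably high. A back-of-envelope sketch (the 10 TB size and the 1-in-10^14-bits URE rate are illustrative assumptions, not the OP's actual hardware):

```python
import math

URE_RATE = 1e-14   # unrecoverable read errors per bit (typical consumer-drive spec)
DISK_TB = 10       # data read back from one surviving disk, illustrative

bits_read = DISK_TB * 1e12 * 8                 # 10 TB expressed in bits
p_clean = math.exp(-bits_read * URE_RATE)      # Poisson approx. of an error-free pass
p_ure = 1 - p_clean                            # ~55% with these assumptions

print(f"P(at least one URE while resilvering one {DISK_TB} TB disk): {p_ure:.0%}")
```

With no remaining redundancy in a degraded RAIDZ1, any one of those UREs means lost data, which is why RAIDZ2 is the usual floor for large drives.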
Can you verify that iqn...c132 is NOT the same iSCSI target LUN as the one used by vg_iscsi?
And as bbgeek pointed out, you can't use LVM-thin for shared storage.
Either of @bbgeek17's suggestions would result in storage accessible to the entire cluster. In your example above, you can only map the LUNs you created to a single VM at a time, but that VM would be accessible from any cluster member that has...
I would have expected it to just work...
Oh well, this is probably a job for Terraform. Since you are predeploying the VM tools and virtio-net drivers, you can obtain the VM's new address from the Proxmox API and feed that to Terraform to...
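A minimal sketch of the "obtain the VM's new address" step, using the Proxmox API's guest-agent network-get-interfaces endpoint. The host, node name, VMID, and token are hypothetical placeholders; substitute your own, and note this assumes the QEMU guest agent is already running in the VM.

```python
import json
import urllib.request

# Hypothetical values -- substitute your host, node, VMID, and API token.
PVE_HOST = "pve.example.com"
NODE, VMID = "pve1", 101
TOKEN = "user@pam!tokenid=secret"

def first_ipv4(agent_reply):
    """Pull the first non-loopback IPv4 from a guest-agent
    network-get-interfaces reply (a dict with a 'result' list of interfaces)."""
    for iface in agent_reply.get("result", []):
        for addr in iface.get("ip-addresses", []):
            ip = addr.get("ip-address", "")
            if addr.get("ip-address-type") == "ipv4" and not ip.startswith("127."):
                return ip
    return None

def vm_ipv4():
    url = (f"https://{PVE_HOST}:8006/api2/json/nodes/{NODE}"
           f"/qemu/{VMID}/agent/network-get-interfaces")
    req = urllib.request.Request(
        url, headers={"Authorization": f"PVEAPIToken={TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return first_ipv4(json.load(resp)["data"])

if __name__ == "__main__":
    print(vm_ipv4())  # feed this value into your Terraform variables
```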
Do you have a central means of command and control (C&C) for your Windows guests?
If so, you can preinstall the virtio drivers and QEMU guest agent ahead of migration. If you don't... well, maybe you should start there.
In my implementations of similar setups, I use this approach almost exclusively because it allows me to use the native snapshot facilities of the storage in a way that is actually usable, since I can remap snapshots as targets and replace the target...
Everything is relative. It is a fact of life that the more "9s" you seek, the higher the cost climbs, roughly exponentially.
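For a concrete sense of scale: each extra "9" cuts your permitted downtime by a factor of ten, while the engineering effort to actually achieve it tends to grow much faster. A quick sketch of the downtime budgets:

```python
MIN_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year

for nines in range(2, 6):  # 99% .. 99.999%
    availability = 1 - 10 ** -nines
    downtime_min = MIN_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> "
          f"{downtime_min:,.1f} min/year of allowed downtime")
```

Going from 99.9% (about 8.8 hours/year) to 99.999% (about 5 minutes/year) is where redundant everything, multiple sites, and on-call staffing start to dominate the budget.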
Can't guess what you are referring to.
Possibly, but at smaller node/OSD counts, no.
Plenty. But you're looking at it the...
You need to separate, logically, the compute cluster from the storage cluster, even if you intend to implement them hyperconverged (on the same hardware).
The compute cluster functions as you expect: when a node is down or fails, the workload...