Manually unlocking is what keeps this a non-production-ready setup. If you're serious, consider Tang/Clevis or another auto-unlocking mechanism; manual intervention should only be necessary for disaster recovery.
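If you go the Tang/Clevis route, a minimal sketch looks something like this (the device path and Tang URL are placeholders; assumes a Debian-based host with the clevis-luks and clevis-initramfs packages installed):

# bind the LUKS volume to a Tang server so it can unlock automatically at boot
clevis luks bind -d /dev/sdX tang '{"url": "http://tang.example.internal"}'
# rebuild the initramfs so the unlock happens early in boot
update-initramfs -u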
You can absolutely do this with ZFS, though I wouldn't use RAIDZ to host VMs (OS + applications):
https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
This is especially true if they happen to be HDDs, since rotating...
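If those disks are going to carry VM disks anyway, striped mirrors are usually the better layout. A rough sketch (disk IDs are placeholders; ashift should match your drives):

# two-way mirrors striped together generally give far better IOPS for VMs than RAIDZ
zpool create -o ashift=12 vmpool \
    mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
    mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4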
Veeam doesn't work the same way on PVE as it does on VMware, so you DON'T actually need snapshot support for it to function; Veeam doesn't even use hardware snapshots when they are available.
I was really excited when Veeam started quietly testing PVE...
Please share benchmarks. Until then, I'm guessing that you eventually got the VM scheduled on the socket housing the PCIe link to the NIC. Generally speaking, adding sockets does no harm AT BEST, and usually slows the machine down by introducing...
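For reference, keeping the guest on a single virtual socket is a one-liner; something along these lines (VMID and core count are placeholders):

# one socket, all cores on it; leave NUMA off unless you deliberately need to expose host NUMA topology to the guest
qm set 100 --sockets 1 --cores 8 --numa 0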
and my reply was to @AceBandge who asked
OP's question has already been answered, afaict. If it's not clear: use Clonezilla. In the devs' list of priorities, I'd much rather they add useful features or squash bugs than add a tool...
Since you're able to access the web UI, the issue is almost certainly an incorrect fingerprint. Delete and recreate the PBS datastore entry.
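A rough sketch of that via the CLI (storage ID, hostnames and credentials are placeholders):

# on the PBS host: print the certificate info, including the fingerprint
proxmox-backup-manager cert info
# on the PVE host: recreate the storage entry with the correct fingerprint
pvesm add pbs mypbs --server pbs.example.internal --datastore store1 --username backup@pbs --fingerprint <sha256-fingerprint> --password <password>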
One other thing: 192.29.72.4 is a publicly routable IP address. If you are using this subnet privately, don't. The allowed...
How many mgr daemons do you have? Are they running different versions of Ceph?
You only need one mgr daemon, and make sure it's on the same version as your monitors.
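Quick way to check both, as a sketch:

# show which version each daemon type is running
ceph versions
# show the active mgr and any standbys
ceph mgr stat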
I suppose the real question is: why are you migrating a VM from your "production" group members to the "backup" group?
If it's for backup, you can and should use PBS for that purpose.
If you were interested in actually running that workload on a...
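If backup really is the goal, a minimal sketch of pushing the guest to a PBS-backed storage instead (VMID and storage name are placeholders):

# back the VM up to the PBS storage rather than migrating it to a "backup" node
vzdump 100 --storage mypbs --mode snapshot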
Yeah, I get you, but he has "MD3200" LUNs presented to all 5 nodes. If he's not mapping some of them to all WWNs, that's by choice, not by limitation. I suppose those could be different physical devices.
When you migrate from pve2 to pve6, you will need to specify a target storage on the destination; but why limit node access when all 5 nodes can see the storage?
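For the migration itself, something like this should work (VMID, node name and storage name are placeholders):

# migrate VM 100 to pve6, mapping its disks onto a storage that exists on the destination
qm migrate 100 pve6 --online --targetstorage local-lvm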
Hi @anthony1956, welcome to the forum.
While I sympathize with your situation and your request (we have all been there), what you are proposing is essentially a "knee-jerk" reaction and has very little chance of being implemented. It may make...
If you mark a storage pool as shared, PVE will assume it's available on all nodes by default (there is an option to limit it to specific nodes, but you need to set that yourself).
If a "shared" pool doesnt exist on the destination, the migration will...
You're clearly upset, but I don't think your understanding of the situation warrants that conclusion.
The PVE OFFICIAL DOCUMENTATION is in one place: https://pve.proxmox.com/pve-docs/
but that documentation doesn't cover everything applicable to...
Transfer speed is limited to the slowest link in the chain: your NIC chip, driver, destination NIC/driver, source disk, destination disk, etc. Sounds like you have some tracing to do.
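A quick way to rule the network in or out, as a sketch (run iperf3 on both ends; the IP is a placeholder):

# on the destination node
iperf3 -s
# on the source node: measures raw network throughput, independent of disk I/O
iperf3 -c 192.0.2.10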
Change your SCSI controller (HBA) to VirtIO SCSI single, with IO Thread checked for the disks.
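Via the CLI that would be roughly this (VMID, storage and volume names are placeholders; the GUI checkbox does the same thing):

# switch the SCSI controller type
qm set 100 --scsihw virtio-scsi-single
# re-specify the disk with an I/O thread enabled
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1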