You don't. I don't understand the use case well enough to comment on the wisdom of the solution; please explain what you mean by VDS, and why you want Proxmox inside them.
no.
There is almost NEVER a use case for nested hypervisors except for development/lab use. Even if we assume there is no CPU/RAM performance degradation with modern VT extensions (hint: there is), the consequences of cascading...
Mapping individual disks to VMs is almost never the correct approach. You can and should present a dedicated vdisk on a highly performant, highly available storage option instead of bifurcating it and offering neither. Beyond performance, having...
The number of nodes isn't the issue; understand that since all data is replicated between site A and site B, you can only use the capacity of the smaller side no matter what you do. There is no benefit to having more OSD space on one side.
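To make that concrete, here's a back-of-the-envelope sketch (the capacities and replication layout are made up for illustration) of a two-site replicated pool where each object keeps 2 copies per site:

```shell
#!/usr/bin/env bash
# Hypothetical raw OSD capacities per site, in TB:
site_a_tb=120
site_b_tb=60
replicas_per_site=2

# Replication across sites means the smaller site is the ceiling;
# the extra 60 TB at site A can never hold data that isn't also at site B.
smaller=$(( site_a_tb < site_b_tb ? site_a_tb : site_b_tb ))
usable=$(( smaller / replicas_per_site ))
echo "usable capacity: ${usable} TB"
```

Adding OSDs only to the larger site moves none of these numbers; only growing the smaller site does.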
Your RAID controller is most likely an LSI 9x00-based card. You can import a RAID volume from virtually any LSI RAID controller to any other LSI RAID controller, and they are cheap and plentiful. This isn't much of a concern; the controller can be treated like any consumable.
If...
This isn't actually true. ZFS is quite good at managing its metaslabs, and it allows you to set record size per zvol/child filesystem. Where write amplification is a problem is with parity RAID (RAIDZ), because trying to align the written blocksize to data...
Manually unlocking is what keeps this a non-production-ready setup. If you're serious, consider tang/clevis or another auto-unlocking mechanism; manual intervention should only be necessary for disaster recovery.
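As a sketch of what the tang/clevis setup looks like (the device path and tang URL are placeholders, not from your setup):

```shell
# Bind an existing LUKS volume to a tang server so it unlocks
# automatically at boot when the tang server is reachable.
# You'll be prompted once for an existing LUKS passphrase.
clevis luks bind -d /dev/sdb1 tang '{"url": "http://tang.example.lan"}'

# Confirm the binding took:
clevis luks list -d /dev/sdb1

# For volumes unlocked after boot (non-root), enable the askpass path unit:
systemctl enable clevis-luks-askpass.path
```

The original passphrase remains valid as a separate keyslot, which is what covers the disaster-recovery case.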
You can absolutely do this with ZFS; I wouldn't use RAIDZ to host VMs (OS + applications) though:
https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
This is especially true if they happen to be hdds since rotating...
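As a sketch of the split I mean (device names are placeholders): mirror vdevs for the pool that holds VM disks, RAIDZ only for bulk/sequential data:

```shell
# VM pool: striped mirrors give the IOPS that zvol-backed VM disks need.
zpool create vmpool \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde

# Bulk pool: raidz2 trades IOPS for capacity; fine for backups/media/archives.
zpool create tank raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk
```

A RAIDZ vdev delivers roughly the random-I/O performance of a single member disk, which is why it hurts so much under VM workloads, especially on HDDs.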
Veeam doesn't work the same way on PVE as it does on VMware, so you DON'T actually need snapshot support for functionality; nor does Veeam even use hardware snapshots when they are available.
I was really excited when Veeam started quietly testing PVE...
Please share benchmarks. Until then, I'm guessing that you eventually got the VM to address the socket housing the PCIe link to the NIC. Generally speaking, adding sockets does no harm AT BEST, and usually slows down the machine by introducing...
and my reply was to @AceBandge who asked
OP's question has already been answered, AFAICT. If it's not clear: use Clonezilla. In the list of priorities for the devs to follow, I'd much rather they add useful features or squash bugs than add a tool...
Since you're able to access the web UI, the issue is almost certainly an incorrect thumbprint. Delete and recreate the PBS datastore entry.
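Roughly, the recreate looks like this (storage name, hostname, datastore, and user are placeholders; substitute your own, and the fingerprint you copy from PBS):

```shell
# On the PBS host: print the certificate fingerprint to paste into PVE.
proxmox-backup-manager cert info | grep -i fingerprint

# On the PVE host: remove the broken storage entry and re-add it
# with the correct fingerprint.
pvesm remove pbs-store
pvesm add pbs pbs-store \
    --server pbs.example.lan \
    --datastore backups \
    --username backup@pbs \
    --fingerprint 'aa:bb:cc:...'
```

If the PBS certificate was ever regenerated, the old fingerprint stored on the PVE side is exactly what goes stale.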
One other thing: 192.29.72.4 is a real, publicly routable IP address. If you are using this subnet privately, don't. The allowed...
How many mgr daemons do you have? Are they running different versions of Ceph?
You only need one mgr daemon, and make sure it's on the same version as your monitors.
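A quick way to check both things (standard Ceph CLI; run on any node with admin credentials):

```shell
# Show the active mgr and any standbys:
ceph mgr stat

# Show which release each daemon class (mon/mgr/osd/...) reports;
# everything should converge on one version:
ceph versions

# Per-mgr detail if versions disagree:
ceph mgr metadata | grep ceph_version
```

If a stale standby on an old release keeps showing up, stop and remove that daemon from its node rather than leaving a version mismatch in place.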
I suppose the real question is: why are you migrating a VM from your "production" group members to the "backup" group?
If it's for backup, you can and should use PBS for that purpose.
If you were interested in actually running that workload on a...
Yeah, I get you, but he has "MD3200" LUNs presented to all 5 nodes. If he's not mapping some to all WWNs, that's by choice, not by limitation. I suppose those could be different physical devices.