Hey people,
Could you kindly clarify a few things regarding enterprise-grade virtualization and Proxmox?
I'm originally from the VMware camp, but I'm now experimenting with a migration to Proxmox, as we are a non-profit organization and just don't have the budget.
Traditionally, for redundant HA virtualization, I'd deploy 3 hosts in a cluster plus 2 fully shared physical storage appliances (DAS/SAS, FC-SAN, or 10G iSCSI - it doesn't really matter): one for VMs and data, and a second one, possibly with deduplication, solely for backups (I always used Veeam with the direct SAN transport backup path).
I've read many threads here and watched lots of YouTube videos on the topic.
Correct me if I'm wrong on the following:
1. As there is no readily available cluster-aware filesystem here (similar to VMFS), and as ZFS doesn't really work well as cluster-shared storage with snapshots and all the whistles, I assume block storage is not really an option for us, and the only feasible solution right here, right now is the Proxmox Cluster File System (pmxcfs) for VM configs plus NFS shared storage with qcow2 file-based data disks (most probably just a TrueNAS NFS share). Am I correct? (See the storage definition sketch after this list.)
Maybe there are better solutions out there now, in mid-2022. An NFS backend on the primary storage would give me a datastore with immediately available snapshots, VM migration, eligibility for HA-managed VMs, and easy, trouble-free backups (backups embedded in Proxmox).
2. In that case, what about FC-SAN or SAS DAS? As far as I can tell, they don't make much sense here, since neither LVM-thin nor ZFS works on shared block storage. Am I correct? (There's a shared-LVM sketch after the list as well.)
3. As for NFS redundancy, the way I see it, I assume it is absolutely required to implement link aggregation from the storage to the hosts, obviously on an isolated storage LAN (bond sketch below).
4. PBS (Proxmox Backup Server) - any particular reason I'd need it, other than tape library support? For our small infrastructure, the cluster-level backup schedules that come out of the box are totally enough... as for pruning, well, I'll figure something out, since those are plain files (crude sketch below).
5. I enjoy reading @bbgeek17's posts and wonder whether I want/need the Blockbridge storage driver for Proxmox with my 3rd-party storage appliances. I'm not sure whether they offer some kind of community version; pricing for the commercial version is a question as well.
6. Any gotchas or tips for running an NFS data backend in production? Am I missing something with such a setup? Honestly, it feels too good - are there any drawbacks? We have about 40 mixed VMs (Windows, AD, file servers, Linux servers, MS SQL, MS Exchange) on 9TB of storage. The plan is to migrate completely to open source, so we'll drop everything Microsoft-based except the client machines.
7. In some threads here, people have reported poor 10G performance with Proxmox + TrueNAS, like close to 5 Gbit/s - does anyone have fresh experience with that? For example: https://forum.proxmox.com/threads/prox-truenas-10g-connection-sharing-results.110202/ (I've put the iperf3 commands I'd use to verify this below.)
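To make these questions more concrete, here are a few sketches of what I have in mind. For question 1, this is roughly the NFS storage definition I'd expect to end up with in /etc/pve/storage.cfg (the storage name, server IP, and export path are made up for illustration):

```
nfs: truenas-vmstore
        server 10.10.10.10
        export /mnt/tank/vmstore
        path /mnt/pve/truenas-vmstore
        content images
        options vers=4.2
```

qcow2 disks on a store like this should give me snapshots and live migration, if I understand the docs correctly.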
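For question 2, my understanding is that a shared FC/SAS LUN can still be used with plain (thick) LVM marked as shared - just without snapshots or thin provisioning. A minimal sketch, assuming a multipath device (all names and paths are placeholders):

```
# On one node: put a volume group on the shared LUN
pvcreate /dev/mapper/fc-lun0
vgcreate vg_shared /dev/mapper/fc-lun0

# Register it as shared LVM storage for the whole cluster
pvesm add lvm san-lvm --vgname vg_shared --shared 1 --content images
```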
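For question 3, the storage-facing bond on each host would look something like this in /etc/network/interfaces (interface names, bond mode, and addressing are my assumptions; LACP needs a matching configuration on the switch):

```
auto bond0
iface bond0 inet static
        address 10.10.10.21/24
        bond-slaves enp4s0f0 enp4s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        # dedicated, isolated storage LAN - no gateway on purpose
```

One caveat I'm aware of: LACP hashes per flow, so a single NFS TCP connection still tops out at one link's speed; the bond mainly buys redundancy and headroom across multiple hosts.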
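For question 4, by "figure something out" for pruning I mean something as crude as the following (path, pattern, and retention are made up; the built-in retention settings of the backup jobs would be the cleaner way):

```
# Delete vzdump archives older than 30 days (crude sketch - adjust path/pattern)
find /mnt/pve/truenas-backups/dump -name 'vzdump-*' -mtime +30 -delete
```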
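And for question 7, this is how I'd benchmark the raw 10G path before blaming NFS itself (IPs are placeholders):

```
# on the TrueNAS box
iperf3 -s

# on a Proxmox host: single stream, then 4 parallel streams
iperf3 -c 10.10.10.10
iperf3 -c 10.10.10.10 -P 4
```

If a single stream only reaches ~5 Gbit/s but parallel streams saturate the link, that points at tuning (MTU, CPU/interrupt handling) rather than Proxmox or TrueNAS as such.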
Sorry for the long list; I hope it will be useful to have it all in one place for people with similar cases.
Regards,
MarvinFS