NetApp & Proxmox VE

Well, you can't share the same NFS share between two non-clustered nodes!

Imagine you have two VMs with the same ID 100: you would mix up disks with the same name, like vm-100-disk-0.raw for example.
The same goes for vzdump backups.


Only for ISOs and LXC templates it shouldn't be a problem.
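To make the collision concrete, here is roughly what two standalone nodes would end up writing onto the same export (the storage name is made up; the paths follow the usual Proxmox layout, but treat this purely as an illustration):

Code:
# node A creates VM 100 on the shared NFS storage "nfs-shared":
#   /mnt/pve/nfs-shared/images/100/vm-100-disk-0.raw
# node B, unaware of node A, also creates a VM 100:
#   /mnt/pve/nfs-shared/images/100/vm-100-disk-0.raw   <- same file!
# vzdump has the same issue, since backups are named by VMID:
#   /mnt/pve/nfs-shared/dump/vzdump-qemu-100-<timestamp>.vma.zst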
You should 100% be able to share between multiple nodes. We share NFS shares between two different clusters. The benefit of NFS is that file locks and write conflicts are solved by the sharing VM as opposed to treating it as two writes to the same block device. The abstraction layer gives you this capability just like any other network share protocol.
 
I fear this won't work if the two nodes are in different clusters and two different VMs that share the same ID use the same disk images.
Yes, NFS file locks should take care that only one VM writes to a disk, but then the other VM can't save its data, which in the end isn't much better.

For my homelab (so please take it with a grain of salt) I configured a single CIFS share, but with two subfolders called "pve1" and "pve2". The single-node pve1 only sees the subfolder pve1, and pve2 only sees the subfolder pve2. Since each VM needs its own disk image anyway, this setup doesn't waste anything, but it makes sure I don't mess something up by accident.
For ISO images and templates I could use hard links to avoid wasting space (I don't do this at the moment, but I could if the need arises). The point being that I have to perform a manual step (creating the link), which forces me to think before creating a potential mess.
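For anyone wanting to replicate that, a minimal sketch of the corresponding /etc/pve/storage.cfg entry (server address, share, user and storage names are made up, and this assumes a PVE version whose CIFS backend supports the subdir option):

Code:
# on the node "pve1"
cifs: nas-vmstore
        server 192.168.1.50
        share proxmox
        subdir /pve1
        username pveuser
        content images,rootdir,backup

# on "pve2" the entry is identical except for "subdir /pve2"

That way both nodes use the same share but can never see each other's guest volumes.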
 
Hi, as I see that this is a hot topic, I would like to add some questions.
A bit of context:
We are a provider located in two distinct datacenters, with all the network topology needed for a Proxmox cluster to span them (redundant inter-DC L2 links, redundant WAN, Cisco vPC on all our network interfaces including the Proxmox hosts (via LACP-bonded bridges))...
We are planning to buy two NetApp ASA C250s, put one in each DC, and combine them with Proxmox HA to create an infrastructure where we can lose an entire DC without interruption, ensuring our users very high availability of their services.

My questions are:
- The two NetApp ASA C250s replicate the VM storage between themselves, but do they present a single mount point to Proxmox across the two replicated arrays? Because, in my understanding, for HA to work the mount point for the storage must be the same on both nodes. We have been trying for weeks to get an answer from NetApp on this, without success.
- I saw that this thread focuses a lot on the ZFS side of the NetApp storage, but the ASA C250 we are planning to buy can only do iSCSI. Do you know if it's possible to connect these arrays via the "ZFS over iSCSI" storage type in Proxmox? And if yes, which provider should we use? Or are we stuck with plain iSCSI and then LVM on top of the LUNs (see the sketch below)? tl;dr: what storage setup should we use for the ASA C250, which only does block mode, while still having snapshots?

Thanks in advance for your help.
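Regarding the plain iSCSI + LVM option mentioned above, a rough sketch of what that typically looks like in /etc/pve/storage.cfg (the portal address, target IQN and names are placeholders, and the volume group is assumed to have been created on the LUN beforehand). Note that shared (thick) LVM works fine across a cluster, but it does not give you Proxmox-level snapshots, so snapshots would have to come from the array side:

Code:
iscsi: netapp-lun
        portal 10.0.0.10
        target iqn.1992-08.com.netapp:sn.0123456789:vs.3
        content none

# VG "vg_netapp" created manually on the LUN exposed by the entry above
lvm: netapp-lvm
        vgname vg_netapp
        shared 1
        content images,rootdir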
 
Have run VMware with an NFS backend on NetApp and it worked nicely. Proxmox seems to do the same - there's good utility around dedupe and hot cache with very similar disk images and instances.

In general, the thing I think would be most useful is some sort of API integration for copying/creating templates, and maybe hooking SnapMirror into the backup/snapshot operations.

Similarly, we've been looking at doing 'cold site' and 'DR replica' setups using SnapMirror replicas, which of course are necessarily read-only, but I feel it should be possible to use SnapMirror-integrated migration to handle the disk/memory sync to 'move' an instance to a different filer entirely, and I'm looking at scripting this using the API.

I'm less than keen on running synchronous replication ubiquitously, but I think as a temporary 'sync the disks, copy the RAM, quiesce and cut over' type operation it could work quite nicely and smoothly.
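For what it's worth, here is a very rough sketch of a cold variant of that cut-over (disks only, no live RAM copy), using pvesh for the Proxmox side and the ONTAP CLI over ssh for the SnapMirror side. Node names, the VMID, SVM and volume names are all invented, it assumes the VM config already exists on the destination cluster, and a real script would obviously wait on each step and handle errors:

Code:
#!/bin/bash
VMID=9000
# 1. quiesce: cleanly shut the guest down on the source node
pvesh create /nodes/pve-a1/qemu/$VMID/status/shutdown
# 2. push a final incremental SnapMirror transfer to the DR volume
ssh admin@filer-b "snapmirror update -destination-path svm_b:vol_pve_dr"
# 3. break the mirror so the destination volume becomes writable
ssh admin@filer-b "snapmirror break -destination-path svm_b:vol_pve_dr"
# 4. start the VM on the destination node against the now-writable copy
pvesh create /nodes/pve-b1/qemu/$VMID/status/start

A live 'copy the RAM' version would need something on top of this (e.g. hooking into the migration machinery), so treat this purely as the disk-side skeleton.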

Sort of 'active-active' volumes, but just relying on the fact that VM disks are arbitrated by Proxmox and won't _actually_ be experiencing concurrent I/O outside of a VM move.

I've not seen whether one exists yet, but a whitepaper on best practices around co-hosting images/templates/deduplication etc. might be sensible too. I mean, snapshots on a 'churny' volume - one with active VMs - will see a very different amount of data per day and overall volatility vs. the 'backups' and 'images' volumes, so some recommendations around 'is it a good idea to separate the volumes or not?' would be nice.
 
This thread already has some insights. Plus, I believe these two links will help you:

https://docs.netapp.com/us-en/netapp-solutions/proxmox/proxmox-ontap.html

And the plugin thread.