NetApp & Proxmox VE

Well, you can't share the same NFS share between two non-clustered nodes!

Imagine that each node has a VM with the same ID 100: you would mix up disks with the same name, vm-100-disk-0.raw for example. The same goes for vzdump backups.

Only for ISOs and LXC templates it shouldn't be a problem.
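To make the clash concrete: on a directory-style NFS storage (the storage ID "shared-nfs" here is made up), both nodes would end up writing to exactly the same files on the export:

Code:
# Proxmox mounts an NFS storage on every node under /mnt/pve/<storage-id>,
# so a "VM 100" on node A and an unrelated "VM 100" on node B collide here:
/mnt/pve/shared-nfs/images/100/vm-100-disk-0.raw
# and their vzdump backups land in the same dump directory as well:
/mnt/pve/shared-nfs/dump/vzdump-qemu-100-<timestamp>.vma.zst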
You should 100% be able to share between multiple nodes. We share NFS shares between two different clusters. The benefit of NFS is that file locks and write conflicts are solved by the sharing VM as opposed to treating it as two writes to the same block device. The abstraction layer gives you this capability just like any other network share protocol.
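If it helps, attaching the same export to a second (non-clustered) setup is just the normal NFS storage definition again; the storage ID, server address and export path below are only examples:

Code:
# run on one node of each cluster; ID, server and export are placeholders
pvesm add nfs shared-nfs --path /mnt/pve/shared-nfs --server 192.0.2.10 --export /vol/pve --content images,backup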
 
I fear this won't work if the two nodes are in different clusters and two different VMs that share the same ID end up using the same disk images.
Yes, NFS file locks should take care that only one VM writes to a disk, but then the other VM can't save its data, which in the end isn't much better.

For my homelab (so please take it with a grain of salt) I configured a CIFS share with two subfolders called "pve1" and "pve2". The single node pve1 only sees the subfolder pve1, and pve2 only sees the subfolder pve2. Since each VM needs its own disk image anyway, this setup doesn't waste anything, but it makes sure I don't mess something up by accident.
For ISO images and templates I could use hard links to avoid storing them twice (I don't do this at the moment, but I could if the need arises). The point being that I need to do a manual step (creating the link), which ensures I have to think about it before creating a potential mess.
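In /etc/pve/storage.cfg the split looks roughly like this (NAS address, share and storage names are made up, and this assumes a PVE version whose cifs storage supports the subdir option; otherwise two separate shares achieve the same thing):

Code:
# on pve1 -- pve2 gets an identical entry but with "subdir /pve2"
cifs: homelab-nas
        server 192.168.1.20
        share proxmox
        subdir /pve1
        content images,rootdir,backup,iso,vztmpl

The ISO/template hard links would then be created on the NAS itself, e.g. ln pve1/template/iso/debian-12.iso pve2/template/iso/debian-12.iso inside the share.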
 
Hi, as I see this is a hot topic, I would like to add some questions.
A bit of context:
We are a provider located in two distinct datacenters, with all the network topology needed for a Proxmox cluster to span them (dual inter-DC L2 links, redundant WAN, Cisco vPC on all our network interfaces, including the Proxmox nodes via LACP-bonded bridges)...
We are planning to buy two NetApp ASA C250, put one in each DC, and combine them with Proxmox HA to build an infrastructure that can lose one entire DC without interruption, giving our users very high availability for their services.

My questions are:
- The NetApp ASA C250 units replicate the VM storage between themselves, but do two replicated ASAs present one single mount point to Proxmox? Because for HA to work (in my understanding), the storage mount point must be the same on both nodes. We have been trying for weeks to get an answer on this from NetApp, without success.
- I saw that this thread focused a lot on the ZFS side of NetApp storage, but the ASA C250 we are planning to buy can only do iSCSI. Do you know if it's possible to connect these arrays via ZFS over iSCSI in Proxmox? And if yes, which provider should we use? Or are we stuck with using iSCSI and then LVM on the LUNs (a rough sketch of that option is below)? tl;dr: what storage technique should we use for an ASA C250 that only does block mode, while still having snapshots?
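For reference, the "iSCSI and then LVM on the LUNs" variant would look roughly like this in /etc/pve/storage.cfg (portal, target, LUN and volume group names are placeholders, and plain shared LVM on a LUN does not give Proxmox-side snapshots):

Code:
# LUN exposed over iSCSI, used only as the base device
iscsi: netapp-asa
        portal 10.0.0.50
        target iqn.1992-08.com.netapp:example-target
        content none

# shared LVM volume group created on top of that LUN
lvm: netapp-lvm
        vgname vg_netapp
        base netapp-asa:0.0.0.scsi-<lun-id>
        shared 1
        content images,rootdir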

Thanks in advance for your help.
 
