I would also recommend ZFS since it's more flexible than LVM/LVM-thin, see also:
Should I use ZFS at all?

For once this requires a disclaimer first: I am using ZFS nearly everywhere where it is easily possible, though exceptions do exist. In any case I am definitely biased pro ZFS.

That said..., the correct answer is obviously: "Yes, of course" ;-)

Integrity

ZFS assures integrity. It will deliver exactly the very same data when you read it as was written at some point of time in the past. "But all filesystems do this, right?" Well, there is more to it. ZFS works hard to actively assure the correctness. An...
You need to understand what LVM is and how it differs from a plain directory storage.
First, LVM is a volume manager: it allows splitting up one physical storage device into different virtual volumes, which can then be used however you need them. LVM is a core feature of the Linux kernel, and the usual usecase is to create a filesystem on such a volume to use it directly with Linux. But you don't have to: if you have software that can "speak" directly with the data on the volume, you don't need a filesystem. Most software, however, expects to access storage via a filesystem. It's a layer so developers don't need to do manually what is already provided by the OS. This comes with a cost though: the layer needs some resources to do its work. In virtualization this adds up: you have the filesystem of the vm's operating system plus the filesystem of the storage where you save the virtual discs of the vm.
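To make the two options concrete, here is a rough sketch of the LVM workflow on a Linux host. The device name /dev/sdb and the volume names are placeholders, not anything Proxmox creates for you:

```shell
# Register a disk as an LVM physical volume and group it into a volume group
pvcreate /dev/sdb
vgcreate data /dev/sdb

# Carve out a 50 GB logical volume from the group
lvcreate -L 50G -n vm-store data

# Option A: put a filesystem on it and mount it, like any normal partition
mkfs.ext4 /dev/data/vm-store
mount /dev/data/vm-store /mnt/vm-store

# Option B: skip the filesystem entirely and hand the raw block device
# (/dev/data/vm-store) to software that can use it directly, e.g. qemu.
```

Option B is what the next paragraph is about: qemu/lxc talking to the volume without a filesystem in between.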
For that (and other reasons) qemu and lxc (the actual programs ProxmoxVE uses for vms and containers) allow using LVM's virtual volumes as block storage: they are not formatted with a filesystem; instead virtual volumes are created and assigned to the vms or lxcs to be used as virtual discs. This only works for virtual discs though; for everything else (container templates, ISO images, backups etc.) you still need a regular directory on a filesystem. You can also use such a directory for virtual disc images, although that content type is off by default for the directory storage. For many people (especially if they are new to virtualization or used other virtualization software before) this is more intuitive, but I wouldn't recommend it due to the performance penalty of the additional filesystem. To give an idea: in this forum people reported 15-20% better performance for block storage.
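As a sketch of what that looks like in practice (VMID 100 and the volume name are examples; "local-lvm" is the LVM-thin storage a default install creates):

```shell
# Allocate a 32 GB volume on the block storage for VM 100
pvesm alloc local-lvm 100 vm-100-disk-1 32G

# Attach it to the VM as a raw virtual disc -- no filesystem on the host side,
# the guest OS will partition/format it itself
qm set 100 --scsi1 local-lvm:vm-100-disk-1
```

Normally the GUI does both steps for you when you add a disc; the point is that no mkfs/mount happens on the host.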
Since most people will primarily use their system to run vms or lxcs, the ProxmoxVE installer creates a relatively small directory storage and a larger block storage. This fits most usecases, but if at some point you decide you need another setup, it's a bit involved to resize them.
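For illustration, this is roughly what "involved" means on the default layout (volume group "pve", ext4 root). This sketch is destructive and only valid while no guest discs exist on local-lvm yet; don't run it blindly:

```shell
# DESTRUCTIVE: drop the LVM-thin pool backing local-lvm...
lvremove pve/data

# ...then give the freed space to the root volume and grow its filesystem
lvresize -l +100%FREE pve/root
resize2fs /dev/pve/root
```

Shrinking in the other direction (taking space away from root to enlarge the thin pool) is even more painful, since ext4 can't be shrunk online, which is why ZFS's dynamic allocation below is attractive.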
ZFS is a different story: it's a combination of a volume manager and a filesystem. This allows ProxmoxVE (or, to be more precise, lxc/qemu) to use it as block storage for vm/lxc virtual discs and as directory storage for everything else. It also allows dynamic allocation, so you don't need to decide in advance how much space you need for virtual discs and how much for everything else.
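You can see the dual role in the storage configuration of a default ZFS install (excerpt; the names local-zfs/rpool match what the installer sets up):

```
# /etc/pve/storage.cfg (excerpt)
zfspool: local-zfs
        pool rpool/data
        content images,rootdir      # block storage for vm/lxc virtual discs

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup   # directory storage for everything else
```

Both entries draw from the same pool, so free space moves between them automatically.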
ZFS also has software RAID built in, so you don't need a hardware RAID controller to get the reliability/redundancy of hardware RAID. In fact, ZFS and hardware RAID don't play nice together, so if you want to use ZFS you need to change your RAID controller's operation mode to "IT mode"/"HBA mode".
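Once the controller passes the discs through untouched, a mirrored (RAID1-like) pool is a one-liner. The pool name and device ids here are placeholders:

```shell
# Create a two-disc mirror; by-id paths are preferred over /dev/sdX
# because they don't change between reboots
zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# Verify the layout and health of the pool
zpool status tank
```

The ProxmoxVE installer offers the same mirror/raidz choices in its GUI when you pick ZFS as the root filesystem.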
How you share storage between lxcs or vms is basically a question of preference.
I usually have one virtual disc for the vm/lxc's operating system; for the application data I do this:
- If the vm/lxc doesn't need to share the data with other guests, I just create one or more additional virtual discs (depending on the application) to separate the actual data from the OS.
- If the data is expected to get rather large or needs to be accessed via network (e.g. from my notebook or my other ProxmoxVE nodes), I create a dedicated network share on my NAS and mount it in the vm or lxc (for unprivileged lxcs the shares are mounted on the host first and then bind mounted into the lxc).
- If I'm really sure I don't need to access it via network, I could use virtiofs (vms) or a bind mount (lxcs) for sharing between guests on the same ProxmoxVE node. Up to now I never needed this. I also remember reports that virtiofs is slower than NFS, so at least for vms I would probably use network shares between them as well.
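The host-mount-then-bind-mount dance for unprivileged lxcs from the second bullet looks roughly like this (NAS address, paths and CT id 101 are placeholders):

```shell
# On the ProxmoxVE host: mount the NAS share first
mount -t nfs 192.168.1.10:/export/media /mnt/nas-media

# Then bind mount that host path into the unprivileged container
pct set 101 -mp0 /mnt/nas-media,mp=/mnt/media
```

The indirection is needed because an unprivileged lxc usually can't mount network filesystems itself; the host does it and hands the directory in.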
This approach also keeps my backup costs affordable: since my data is separated from the OS of the vms/lxcs, I can exclude it from a backup in the settings. One of my guests has a rather large cache directory for temporary files (100 GB), where a backup would be pointless. And the network shares from my NAS are not backed up by the regular backup function anyhow. This is ok: I have a vserver with ProxmoxBackupServer for the vms and lxcs, but it doesn't have enough storage space to contain the data from the NAS. Instead the NAS is backed up to a more affordable cloud storage (a Hetzner storagebox, to be precise).
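The "exclude it in the settings" part maps to the backup flag on the disc or mount point itself (ids and volume names below are examples):

```shell
# VM: attach the data disc with backup=0 so vzdump/ProxmoxBackupServer skips it
qm set 100 --scsi1 local-zfs:vm-100-disk-1,backup=0

# LXC: the same flag works on a mount point entry
pct set 101 -mp0 /mnt/cache,mp=/var/cache/app,backup=0
```

The same checkbox exists in the GUI on each disc/mount point, so the OS disc stays in the backup while the bulky data disc is left out.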