LVM + mounted LV into dir or Directory ext4?

kriznik

Member
Sep 29, 2023
Hello, I'm wondering what the better solution is (or what the main differences are) between:

1. creating an LVM-Thin pool + an LV mounted into a directory
2. creating a Directory storage with ext4 (via the UI)

My use case:
Proxmox installed on a HW RAID1 of SSDs (1 TB)
Data storage on a HW RAID10 of HDDs (40 TB)

So in Proxmox I have sda and sdb; sda contains the host + local-lvm as per the default install.
I want to use sdb mainly as data storage, which I will share via SMB from an LXC. I also want to export it as NFS from the host, i.e. I want it mounted in a folder somewhere so I can export that directory via NFS.

Maybe I don't fully understand the difference between Directory and LVM-Thin. From the UI an LVM-Thin volume cannot be mounted as a directory, but I can indeed do it from the shell.
Are those similar/the same?
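For reference, the shell route mentioned above looks roughly like this. This is only a sketch: the device name (/dev/sdb), VG/LV names, sizes and mount point are all assumptions; adjust them to your layout.

```shell
# Create a PV, VG and thin pool on the data disk (/dev/sdb is an assumption)
pvcreate /dev/sdb
vgcreate data /dev/sdb
# Leave a little free space in the VG for the pool's metadata
lvcreate --type thin-pool -l 95%FREE -n pool data

# Carve a thin LV out of the pool, format it and mount it as a directory
lvcreate --type thin -V 500G --thinpool pool -n share data
mkfs.ext4 /dev/data/share
mkdir -p /mnt/share
echo '/dev/data/share /mnt/share ext4 defaults 0 2' >> /etc/fstab
mount /mnt/share
```

After that, /mnt/share is an ordinary directory on the host that can be exported via NFS or bind-mounted into an LXC.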
 
Bumping this question. My use case is a bit different (thin client):

  • 1x small 16 GB SSD with only the local storage for my PVE host
  • 1x 4 TB SSD for (shared) data of/for LXCs and (only/mainly) LXC rootfs disks
Now I basically have the same two options as the OP:


1. Make 2 partitions: one LVM-Thin pool for the disk images of the CTs + 1 partition as a directory (xfs/ext4), and directly bind-mount that directory into any LXC

2. Make one VG with a thin pool and (at least) 2 LVs: x LVs, one for each CT rootfs (as in the first method), plus z LVs for data storage ("/media", "/photos", etc.), with xfs/ext4 as the filesystem on those manually created LVs, then mount those LVs on the PVE host so the host directories can be bind-mounted into each LXC.
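The second option could be sketched like this. The VG/LV names, sizes, mount points and the container ID are made up for illustration, not a recommendation:

```shell
# One VG with a thin pool on the 4 TB SSD (/dev/sdb is an assumption)
vgcreate vmdata /dev/sdb
lvcreate --type thin-pool -l 95%FREE -n thinpool vmdata

# A data LV with its own filesystem, mounted on the host ...
lvcreate --type thin -V 1T --thinpool thinpool -n media vmdata
mkfs.xfs /dev/vmdata/media
mkdir -p /srv/media
mount /dev/vmdata/media /srv/media

# ... then bind-mounted into a container (CT 101 is an assumption)
pct set 101 -mp0 /srv/media,mp=/media
```

The CT rootfs disks would simply live as further thin LVs in the same pool, managed by Proxmox VE itself once the pool is added as an LVM-Thin storage.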

What is "better"/most commonly used, and why?

Would be great to find a definitive answer, as Google is not helping much after searching for a long time.
 
Why not ZFS? It can do all of that, and the storage is shared among everything. I also find management a lot simpler, and you save space thanks to compression, etc.
 
I would also recommend ZFS, since it's more flexible than LVM/LVM-Thin, see also:

You need to understand what LVM is and the differences between it and a directory.

First, LVM is a volume manager: it allows splitting one physical storage device into different virtual volumes, which can then be used however you need them. LVM is a core feature of the Linux kernel, and the usual use case is to create a filesystem on a volume to use it directly with Linux. But you don't have to: if you have software that can "speak" directly to the data on the volume, you don't need a filesystem. Most software, however, expects to access storage via a filesystem. It's a layer so developers don't need to do manually everything that is already provided by the OS. This comes with a cost, though: the layer needs some resources to do its work. In virtualization this adds up: you have the filesystem of the VM's operating system plus the filesystem of the storage where you save the VM's virtual disks.

For that (and other reasons) qemu and lxc (the actual programs used by Proxmox VE for VMs and containers) allow using LVM's virtual volumes as block storage: they are not formatted with a filesystem; instead, virtual volumes are created and assigned to the VMs or LXCs to be used as virtual disks. This only works for virtual disks, though; for everything else (container templates, ISO images, backups, etc.) you still need a regular directory on a filesystem. You can also use such a directory for virtual disk images, although that content type is off by default for directory storage. For many people (especially if they are new to virtualization or used other virtualization software before) this is more intuitive, but I wouldn't recommend it due to the performance penalty of the additional filesystem. To give an idea: in this forum, people have reported 15-20% better performance for block storage.
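In Proxmox VE terms, the two kinds of storage described above are registered like this (the storage names, VG/pool names and path are arbitrary examples):

```shell
# Block storage: an LVM thin pool used only for VM/LXC virtual disks
pvesm add lvmthin fast-disks --vgname vmdata --thinpool thinpool --content images,rootdir

# Directory storage: a filesystem path for ISOs, container templates and backups
# (disk images could be enabled here too, with the performance caveat above)
pvesm add dir bulk --path /srv/bulk --content iso,vztmpl,backup
```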

Since most people will primarily use their system to run VMs or LXCs, the Proxmox VE installer creates a relatively small directory storage and a larger block storage. This fits most use cases, but if at some point you decide you need another setup, it's a bit involved to resize them.

ZFS is a different story: it's a combination of a volume manager and a filesystem. This allows Proxmox VE (or, to be more precise, lxc/qemu) to use it as block storage for VM/LXC virtual disks and as directory storage for everything else. It also allows dynamic allocation, so you don't need to decide in advance how much space you need for virtual disks and how much for everything else.
ZFS also has software RAID, so you don't need a HW RAID controller to get the reliability/redundancy of HW RAID. In fact, ZFS and HW RAID don't play nicely together, so if you want to use ZFS you need to change your RAID controller's operation mode to "IT mode"/"HBA mode".
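With the controller in HBA mode, a mirrored pool (software RAID1) could be set up roughly like this. The pool name, disk IDs and dataset names are assumptions for illustration:

```shell
# Mirrored pool over two disks, addressed by stable by-id paths
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
zfs set compression=lz4 tank

# Block storage for guest disks (zvols / container datasets)
pvesm add zfspool tank-vm --pool tank --content images,rootdir

# A dataset mounted as a filesystem doubles as directory storage
zfs create tank/dump
pvesm add dir tank-dump --path /tank/dump --content iso,vztmpl,backup
```

Both storages draw from the same pool, which is the dynamic allocation mentioned above: no fixed split between disk space and directory space.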

How you share storage between LXCs or VMs is basically a question of preference.
I usually have one virtual disk for the VM/LXC's operating system; for the application data I do this:
  • If the VM/LXC doesn't need to share the data with other guests, I just create one or more additional virtual disks (depending on the application) to separate the actual data from the OS.
  • If the data is expected to get rather large or needs to be accessed over the network (e.g. from my notebook or my other Proxmox VE nodes), I create a dedicated network share on my NAS and mount it in the VM or LXC (for unprivileged LXCs it is mounted on the host first and then bind-mounted into the LXC).
  • If I'm really sure I don't need to access it via the network, I could use virtiofs (VMs) or a bind mount for sharing between guests on the same Proxmox VE node. So far I have never needed this. I also remember reports that virtiofs is slower than NFS, so at least for VMs I would probably use network shares between them as well.
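The host-mount-then-bind-mount step for an unprivileged LXC can look like this. The NAS address, export path, mount point and container ID are all assumptions:

```shell
# Mount the NAS share on the PVE host first
mkdir -p /mnt/nas-media
mount -t nfs 192.168.1.10:/export/media /mnt/nas-media

# Then bind-mount it into the unprivileged container (CT 101)
pct set 101 -mp0 /mnt/nas-media,mp=/media
```

The indirection is needed because an unprivileged container usually cannot mount NFS itself; the host does the mounting, and the container only sees a plain directory.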

This approach also keeps my backup costs affordable: since my data is separated from the OS of the VMs/LXCs, I can exclude it from a backup in the settings. One of my guests has a rather large cache directory for temporary files (100 GB), where a backup would be pointless. And the network shares from my NAS are not backed up by the regular backup function anyway. This is fine: I have a vServer running Proxmox Backup Server for the VMs and LXCs, but it doesn't have enough storage space to hold the data from the NAS. Instead, the NAS is backed up to a more affordable cloud storage (a Hetzner Storage Box, to be precise).
 