LVM as Shared Directory

XMlabs (Member), Feb 1, 2018
Hi,

I have some bandwidth problems with my backups. Currently we do all backups with the Proxmox tool over NFS, but this takes a lot of time (about 14 h for 20 TB of VMs).
All of our storage is managed via 8 Gb/s FC and LVM, and we have two redundant gigabit networks: one dedicated to Proxmox cluster management and the other to VM traffic.
We expect to double our VMs in the next 4 months, but there are some problems:
- FC does not support thin LVM
- snapshots are not possible on LVM storage
- a 28 h backup job is not acceptable
- we can't upgrade the gigabit management network to 10 Gb for now (budget reasons)
- we are already using gzip on the backup jobs (current settings sketched below)
- backups cannot be stored on an LVM partition
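
For reference, our current backup defaults look roughly like this (a sketch; the bandwidth limit is illustrative and commented out):

# /etc/vzdump.conf - node-wide defaults for backup jobs
compress: gzip
mode: snapshot
# optional bandwidth limit in KB/s
#bwlimit: 102400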

And now the question:
My idea: install an HBA card in my backup NAS, publish its storage via FC, attach that storage to my Proxmox nodes, create a shared directory on top of LVM, and then run the backup jobs over the 8 Gb/s channel.
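
If I get that far, a minimal sanity check on each node could look like this (a sketch; it assumes multipath is configured, and host/device names vary per node):

# rescan the FC HBAs so the new LUN shows up
for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
# verify the LUN and its paths
multipath -ll
lsblk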

We are testing differential backups in a staging environment, but I need to find a solution for the production environment ASAP.
I'm sure my NAS supports the FC protocol, but I'm not sure whether it's possible to create a shared directory over LVM in my case.

I have also read these articles:
- https://forum.proxmox.com/threads/use-local-lvm-as-directory.32924/
- https://pve.proxmox.com/wiki/LVM2#Create_a_extra_LV_for_.2Fvar.2Flib.2Fvz
- https://forum.proxmox.com/threads/incremental-backup-solution.39329/
- https://forum.proxmox.com/threads/backup-solutions.21443/

I'm using Proxmox VE 5.1-41
 
I feel your pain; we have a similar setup (1 GbE for backup via NFS and 8 Gbit FC), yet not as many TB. We also need roughly 5 h.

Do you have an active/passive setup for your bond, or an LACP one? Maybe LACP could yield better throughput (only double, but better than nothing).
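
For reference, an LACP bond on Debian/PVE looks roughly like this in /etc/network/interfaces (a sketch; interface names are examples, and your switch must support 802.3ad):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    # hash on L3+L4 so parallel streams can spread across links
    bond-xmit-hash-policy layer3+4

Note that a single TCP stream (e.g. one NFS connection) still uses only one link; LACP only helps with multiple parallel streams.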

Using a shared filesystem on top of a shared LUN is a viable option if you can decide which shared filesystem to use. I've never seen IP or any other network protocol run over FC (the other way around works with FCoE), so a simple IP network and syncing over it most probably does not work.

Depending on the number of nodes you back up from, compression is not the limiting factor with recent CPUs, so disabling it or switching to a faster method does not yield better throughput. Increasing it might, if you also use a parallelized compression method, provided that is currently supported (I haven't looked into this for a long time, yet I remember having read about it).
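
If I remember correctly, vzdump can hand gzip compression off to pigz, a parallel gzip implementation; check the vzdump.conf man page for your version. A sketch:

# apt-get install pigz
# then in /etc/vzdump.conf:
compress: gzip
# pigz: 1 uses half the cores, N > 1 uses N threads
pigz: 4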
 
On my NAS I already have an active/passive setup over two LACP bonds (4 NICs total).
My NAS supports the FC protocol, and I can attach it to my fabric.
In my environment I have 6 production nodes;
during backups, my NAS's network is at about 90% load...
I'm trying to realize this project and I'll let you know.
 
This is honestly Proxmox's biggest flaw: there is no viable backup solution. It is full backup or nothing. The Proxmox team's answer is "throw more hardware at it", and as much as I appreciate their work and love the Proxmox VE software, that is a poor answer.

I'm also in the same boat: gigabit network for backups to an NFS server, 10 TB on an iSCSI SAN (LVM, so no snapshots) :-/

Realistically, I can only back up each VM once per week to keep within a workable window. That wouldn't be a problem if we had differential backup support; I would subscribe for this. Right now I flip-flop between staying put and moving to VMware Essentials so I can use Veeam or IASO. I don't know which is worse: the lack of features in VMware at that level, or the lack of a good backup solution (with everything else being awesome) in Proxmox. :(
 
This is honestly Proxmox's biggest flaw: there is no viable backup solution.

Why not use a third-party backup solution? E.g., a lot of people say the VMware backup story is so great, but in fact they all use a third-party tool.

Just do the same on Proxmox VE, if our integrated backup is not suitable.

BTW, almost all the others, like VMware, do not even have a simple backup integrated, AFAIK.
 
Aside from in-guest backup agents (which make managing a large number of backups difficult compared to hypervisor-integrated solutions like Veeam), there are no third-party backup tools for Proxmox, at least none that I'm aware of (but I would like to know if something Veeam-like exists).

From what I can tell, the best strategy for large-volume, offsite, DR-type backups on Proxmox (one that is storage-model agnostic) is a ZFS-based NAS as the backup target. On the NAS, enable dedup and compression in ZFS, then have Proxmox back up guests uncompressed to the NAS. The NAS can then use ZFS replication to go offsite. The only bottleneck, as far as I can tell, is the default inability to do anything except full backups. However, I will try Ayufan's differential backup patches, which should make weekly fulls + daily diffs a reality, I hope.
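
The NAS side of that would be roughly the following (a sketch; pool, dataset, and host names are made up, and ZFS dedup needs a lot of RAM for its dedup table):

# on the backup NAS: enable compression and dedup on the backup dataset
zfs set compression=lz4 tank/pve-backups
zfs set dedup=on tank/pve-backups

# offsite replication: one full send, then incrementals between snapshots
zfs snapshot tank/pve-backups@monday
zfs send tank/pve-backups@monday | ssh offsite-nas zfs recv -F backup/pve-backups
zfs snapshot tank/pve-backups@tuesday
zfs send -i @monday tank/pve-backups@tuesday | ssh offsite-nas zfs recv backup/pve-backups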

It would be great if differential support could eventually return to Proxmox proper some day, rather than remaining a workaround; this would make backing up larger Proxmox clusters much more efficient.

Aside from the backup issue, though, I love Proxmox; it is an absolutely amazing platform.

Edit: Full backups it is; it seems the Ayufan patches won't help much in my situation, since they would have to read the full backup file from the NAS to compute the diff, creating a similar bottleneck, I think. Changed-block tracking would be the only way to deal with this, I think, or using snapshots as diffs, which isn't supported in the storage model I'm using. I see why the decision was made to stick with full backups...

A question though: if snapshots aren't supported on shared LVM because they aren't cluster-aware, is there a risk of corruption when doing the "snapshot" method for a backup, or is this unrelated?
 
Aside from in-guest backup agents (which make managing a large number of backups difficult compared to hypervisor-integrated solutions like Veeam), there are no third-party backup tools for Proxmox, at least none that I'm aware of (but I would like to know if something Veeam-like exists).

Did you ever try Veeam agent-based backup with Proxmox VE (backing up a VM the way you back up a physical host with Veeam)? Backup is a huge topic, as you also need application-aware backup agents in a lot of cases. This means there is never ONE backup tool for all scenarios; it's always a mix of tools.

A question though: if snapshots aren't supported on shared LVM because they aren't cluster-aware, is there a risk of corruption when doing the "snapshot" method for a backup, or is this unrelated?

QEMU snapshot backup: in Proxmox VE, backups are done inside QEMU. This is a really cool thing, as it works with ALL storage technologies, and of course with LVM.
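
In practice that means something like this works regardless of the underlying storage (VM ID and storage name are examples):

# live backup of VM 100 using qemu's built-in backup mechanism
vzdump 100 --mode snapshot --compress gzip --storage backup-nfs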
 
Thanks Tom,

I haven't tried the Veeam bare-metal backup in a guest, no. I'm planning to use CloudBerry on Windows guests to do VSS file-level backups and SQL Server, so it is as you say: application-aware. I'll try setting up FreeNAS with dedup and compression, do daily uncompressed image backups, and zfs send them to an offsite FreeNAS; then back up inside the guests as well. The image backups are for faster recovery; the in-guest agent is for better granular restore and data consistency. No magic bullet, I guess.
 
Hi,

I already have a FreeNAS with dedup over ZFS, published via NFS; my problem is not space but network traffic. I can't do uncompressed backups because they produce more network traffic/load and take more time (in my case). I also need image-level backups, and for this reason I can't use a third-party backup solution (I'm evaluating Ayufan's differential patches, but in the staging environment, not in production).

Is it possible to create a distributed filesystem (e.g. GFS or OCFS) over a single shared LVM volume and mount it as a local storage?
Something like that:

# Create the PV and the related VG
pvcreate --metadatasize 250k -y -ff /dev/mapper/<storage path>
vgcreate <vgname> /dev/mapper/<storage path>

# Create the LVM storage via the web GUI and mark it as shared on all nodes

# Create a GFS2 filesystem on <storage path> and mount it
# (-j 8 = number of journals, one per node; -t <clustername>:<fsname>)
mkfs.gfs2 -p lock_dlm -j 8 -t sasbk:repobk /dev/mapper/<storage path>

mount /dev/mapper/<storage path> /var/lib/vz
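
And then, I suppose, the mount could be added to PVE as a shared directory storage, something like this in /etc/pve/storage.cfg (a sketch; the storage name is made up and assumes the GFS2 filesystem is mounted at /mnt/repobk on every node):

dir: sasbk
    path /mnt/repobk
    content backup
    shared 1
    maxfiles 3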

Or is it possible to remove the Proxmox limitation on backup storage?

However, VMware does have a backup solution: you can use VDP over SAN storage. It is no longer supported, but you can still use it.
 
It seems that for large volumes of backups there are two options: an in-guest agent that does continuous synthetic-full incrementals (CloudBerry does this with an AWS S3 backend), or doing the backups at the SAN level and triggering quiescing in the VM for consistency. It looks like in-guest backup is probably the best option; it just makes a mass restore in a disaster scenario very laborious.
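
For the SAN-level variant, the quiescing part could presumably be driven through the QEMU guest agent around the SAN snapshot (a sketch; it assumes the guest agent is installed and enabled in the VM, and the VM ID is an example):

# freeze the guest's filesystems, take the SAN-side snapshot, then thaw
qm agent 101 fsfreeze-freeze
# ... trigger the SAN snapshot here ...
qm agent 101 fsfreeze-thaw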

I'm OK for now doing the fulls and using ZFS to dedupe and compress them. Perhaps by the time I fill my 10 TB SAN up I'll have 10 GbE anyway and it will be less of an issue. I think Veeam only works the way it does because VMFS is a custom filesystem; Proxmox has to work within the constraints of LVM, which does not support cluster-aware snapshots. If it did, incremental backups might be easier to do.
 
Two things:

Backing up to ZFS in an intelligent way is possible without dedup, as I presented at ProxTalks 2017. It works in a similar fashion to what you described, yet faster (because it transfers the compressed files over the network and inflates them to raw disks on the backup side) and without the slowness (dedup is utterly slow if done right for VMs, i.e. at the 4K block level, and only then do you really save space), by using CoW techniques and only syncing what needs to be synced (i.e. what differs from before). I feel with you about big VMs, yet there is also a solution for this:

Using ZFS inside of your VM. We have multiple file servers with approximately 1 TB each, which have a ZFS pool on one disk (in addition to the OS disk). That pool is normally not backed up (the OS, however, is); it has snapshots every 15 minutes during office hours and replicates itself directly to our ZFS backup server. This dramatically decreased our backup times because of pure send/receive between pools, and it integrates perfectly into second-level, off-site backup solutions for ZFS.
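
The mechanics inside such a VM boil down to a cron job doing incremental send/receive, roughly like this (a simplified sketch with no error handling; dataset and host names are made up):

# run every 15 minutes during office hours from cron inside the file server VM
now=$(date +%Y%m%d-%H%M)
prev=$(zfs list -t snapshot -o name -s creation -H data/share | tail -1 | cut -d@ -f2)
zfs snapshot data/share@$now
zfs send -i @$prev data/share@$now | ssh backup-server zfs recv backup/share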

This works very well for us because the "most frequently used" servers are backed up via ZFS, the "frequently used, yet not so big" ones via ordinary compressed vzdump, and the "not changing much" servers are only backed up once a week.
 
Oh ... one thing I missed ...

If it did, incremental backups might be easier to do.

Easier, yes, but not necessarily faster. For incremental backups, you need to compare the already-backed-up disk with the current one, so a "dumb" solution has to read both and compare. You run into the same bandwidth problem you already have. You can only avoid it with an intelligent program that transfers only block checksums over the wire, so you cannot use plain Linux file-based tools (e.g. over NFS); you need something special like rsync over ssh, or ... commercial programs like Veeam.
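
For example, for a single big dump file, rsync's rolling-checksum delta transfer only ships changed blocks over the network, but both ends must still read and checksum the whole file (file and host names are examples):

# delta-transfer a big backup image; network traffic is low,
# but both sides still read the entire file from disk
rsync -v --inplace /backup/dump/vzdump-qemu-100.vma backup-server:/backup/dump/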
 
All of this is true, but in my case, out of 20 TB of VMs, just 2-3 have a large vdisk, and for those I have another backup plan that works fine. All the other VMs have 50 to 100 GB vdisks, and I need to back them up as images.
So, is it possible in some way to do what I described previously?

# Create the PV and the related VG
pvcreate --metadatasize 250k -y -ff /dev/mapper/<storage path>
vgcreate <vgname> /dev/mapper/<storage path>

# Create the LVM storage via the web GUI and mark it as shared on all nodes

# Create a GFS2 filesystem on <storage path> and mount it
# (-j 8 = number of journals, one per node; -t <clustername>:<fsname>)
mkfs.gfs2 -p lock_dlm -j 8 -t sasbk:repobk /dev/mapper/<storage path>

mount /dev/mapper/<storage path> /var/lib/vz

Or is it possible to remove the Proxmox limitation on backup storage?
 
The last time I tried GFS it crashed my kernel, but that was back in 2015. No idea if it works better now.

In principle it should work, yet I'm unsure about your parameters; I've never used them (neither the LVM nor the GFS ones).

Or is it possible to remove the Proxmox limitation on backup storage?

The word 'limitation' suggests that it's there on purpose. Even the differential patch runs into the problem I described in #12.
 
I don't think it's a Proxmox limitation; it's a limitation of the various underlying storage models.
 
OK, but is it possible to do backups on a PVE directory that is managed as LVM (I think with a filesystem on top of it)?
Also, do you know of any guide for setting up a shared filesystem on a single external volume attached to each host?
(Gluster, OCFS, Ceph, Lustre, or anything else.)
[Attached diagram: SANbk.png]
 
OK, but is it possible to do backups on a PVE directory that is managed as LVM (I think with a filesystem on top of it)?

Yes, but only with a clustered filesystem like OCFS2 or GFS2; a "simple" directory like on an "ordinary" filesystem is not possible. If it seems to work anyway, you'll end up with data loss, because the filesystem is not a clustered filesystem.

However do you know any guide to setup a shared file system on a single external volume attached to each hosts?
( gluster, ocfs, ceph, lustre or anything else )

It's unsupported in PVE, so you won't find tutorials here; just google for Debian Stretch tutorials on the matter. Gluster, Ceph, and Lustre are distributed shared-storage filesystems and do not work with shared storage as in a SAN.

We also discussed a similar topic recently concerning a clustered filesystem on LVM, yet without a satisfying conclusion.
https://forum.proxmox.com/threads/fibre-channel-san-with-live-snapshot.41205/#post-198788
 
