Backup from LVM to anywhere is very slow

Jiyon

New Member
Jul 16, 2019
Hello everybody,

we have set up a cluster with 2 nodes, both connected to our SAN via iSCSI. On the SAN a LUN was created, which we attached to the servers via multipath. The storage was then integrated as LVM and we installed some VMs on it.
Now I wanted to test the backup, so I first tried backing up to a NAS.
There we got a read/write speed of 6/3 MB/s.
When backing up to the local HDD we got almost the same result.

If we clone the VM, a 32 GB disk is done in about 2 minutes.

Now we moved the virtual HDD from the LVM to the NFS-mounted NAS and then made a backup to the local HDD. There we get a read/write speed of 35/30 MB/s.

Is that normal?
Is this due to the conversion from LVM (block) to file?

Do you have a recommendation for us on how best to integrate the SAN? Was that the best way?
 
In general, all numbers look very, very slow to me; I'm used to figures ranging from 400-1500 MB/s for a multipathed fiber-channel based storage in sequential read (which a backup certainly is). What is the "normal" speed of your SAN? If you don't know, for a very quick number on sequential read (not the best measurement in the world, but it'll do), just run something like this:

Code:
$ dd if=/dev/mapper/multipathed_device of=/dev/null bs=128k

and report back the numbers (whole output in CODE-tags please).
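
If you want a second, somewhat more controlled number, a fio sequential-read job against the same device would also do (only a sketch; the device path and runtime are placeholders):

Code:
# sequential read, 128k blocks, direct I/O, 60 s against the multipathed device
fio --name=seqread --filename=/dev/mapper/multipathed_device \
    --ioengine=libaio --rw=read --bs=128k --direct=1 --iodepth=16 \
    --runtime=60 --time_based --group_reporting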

Do you have a recommendation for us on how best to integrate the SAN? Was that the best way?

Looks correct. LVM on top of multipathed LUNs.
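
For reference, the usual sequence looks roughly like the sketch below (the multipath alias, volume group, and storage names are only examples):

Code:
# check that all iSCSI paths to the LUN are active
multipath -ll

# put LVM directly on the multipathed device (names are examples)
pvcreate /dev/mapper/mpatha
vgcreate san_vg /dev/mapper/mpatha

# add the volume group to Proxmox as shared LVM storage
pvesm add lvm san-lvm --vgname san_vg --shared 1 --content images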
 
report back the numbers (whole output in CODE-tags please).

VM with Storage on local-LVM
Code:
dd if=/dev/mapper/pve-vm--104--disk--0 of=/dev/null bs=128k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 16.4588 s, 2.1 GB/s

VM with Storage on SAN
Code:
dd if=/dev/mapper/PR--STORAGE-vm--102--disk--0 of=/dev/null bs=128k
245760+0 records in
245760+0 records out
32212254720 bytes (32 GB, 30 GiB) copied, 230.607 s, 140 MB/s

Our SAN is a NetApp E2824 with 12 Seagate SAS HDDs.
 
VM with Storage on SAN
Code:
dd if=/dev/mapper/PR--STORAGE-vm--102--disk--0 of=/dev/null bs=128k
245760+0 records in
245760+0 records out
32212254720 bytes (32 GB, 30 GiB) copied, 230.607 s, 140 MB/s

Those numbers look familiar to me. Every iSCSI SAN I have benchmarked was very, very slow (compared to the fiber-channel ones), and this is no exception (all of them were NetApp iSCSI).

Question is, can you benchmark on another system that does not "feel" slow?

With respect to the backup, your numbers should also be in that range of ~140 MB/s. Are you compressing your backup, and if so, with what compression algorithm?
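
To rule the compressor out, it may help to run the same backup once without compression and once with LZO from the command line; a sketch only, the storage name backup-nfs is made up, VM 102 is the one from your dd test:

Code:
# backup VM 102 without compression
vzdump 102 --storage backup-nfs --mode snapshot --compress 0

# same backup with the fast LZO compressor
vzdump 102 --storage backup-nfs --mode snapshot --compress lzo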
 
That bothers me too.
But what I don't understand is that if I clone or move the VM from the SAN (LVM) to the local HDD (on one of the nodes - local-lvm) or to an NFS share, it takes about 2 or 3 minutes for 30 GB.

Why is the backup so slow?

Now we have added a second LUN that we use via NFS, and this is a lot faster.
So there seems to be a problem with the LVM!?

But here we have the problem that a node has to act as an NFS server.

How did you integrate your SAN? We followed this Guide https://icicimov.github.io/blog/vir...-volume-to-Proxmox-to-support-Live-Migration/
Where do you put your backup? (NFS,CEPH,ZFS)
 
Now we have added a second LUN that we use via NFS, and this is a lot faster.

Did I understand you correctly that your VM lives on NFS and gets backed up to the local disk, and that this is fast?


The guide is totally ok, no errors in it. I did not use any guide, I write guides; I do things like this for a living :-D (at least with fiber channel, not iSCSI. I recommend fiber channel everywhere I go).

Where do you put your backup? (NFS,CEPH,ZFS)

Via NFS to our backup server. We saturate the network link with our backups from multiple servers that are backing up all at once.
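
In case it helps, such an NFS target is just defined as a normal backup storage; a minimal sketch, the storage name, server address, and export path are placeholders:

Code:
# register the NFS backup target on the cluster (address/path are placeholders)
pvesm add nfs backup-nfs --server 192.0.2.10 --export /srv/backup --content backup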
 
Did I understand you correctly that your VM lives on NFS and gets backed up to the local disk, and that this is fast?
This was just a test. :)

We changed the MTU size on both the PVE servers and the SAN, but unfortunately this had no positive effect on the performance.
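
(As a side note: if that MTU change was meant to enable jumbo frames, one way to check that they actually pass end-to-end is a do-not-fragment ping; the interface name and SAN portal IP below are placeholders.)

Code:
# 9000 byte MTU minus 28 bytes of IP/ICMP header; -M do forbids fragmentation
ping -M do -s 8972 -c 4 192.0.2.20

# confirm the MTU currently set on the iSCSI interface (interface name is a guess)
ip link show dev eth2 | grep mtu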

Now I stumbled across this entry.
I activated the cache on the VM, and now I get 258 MB/s for the backup.
I don't understand that.
Why does a setting on the virtual HDD affect the performance of the backup?
And then only if this HDD is on the LVM of the SAN.
Very strange.

[Screenshot attachment: the VM hard disk cache setting]
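
(The same cache setting from the screenshot can also be changed on the CLI; a sketch only, the disk slot scsi0 is an assumption, the volume name matches the dd output above.)

Code:
# set write-back cache on the VM disk that lives on the SAN LVM (scsi0 is an assumption)
qm set 102 --scsi0 PR-STORAGE:vm-102-disk-0,cache=writeback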
 
Now I stumbled across this entry.
I activated the cache on the VM, and now I get 258 MB/s for the backup.
I don't understand that.

Hi,

I think you are using write-back as the cache mode for the VM (sorry, I do not understand German).
If that is true, then this does not seem to be ok. With write-back the cache will report that the data is ok, but in reality it is not on disk yet. You can test this with a network tool like iftop and see whether the network traffic continues even after the backup has finished.
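
A quick sketch of that check (the interface name and SAN address are placeholders):

Code:
# watch traffic towards the SAN; if it keeps flowing after the backup reports
# it is finished, the data was still sitting in the write-back cache
iftop -i eth2 -f "host 192.0.2.20"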
 
Hi, sorry for posting on an old thread.

Hi,

I think you are using write-back as the cache mode for the VM (sorry, I do not understand German).
If that is true, then this does not seem to be ok. With write-back the cache will report that the data is ok, but in reality it is not on disk yet. You can test this with a network tool like iftop and see whether the network traffic continues even after the backup has finished.

It seems the cache strategy has a significant impact on performance. Just wondering: if write-back is not ok, what cache strategy do you recommend while still maintaining reasonably good performance? Thanks.
 
Hi,

Performance and safety of data are like water and oil. Good performance => unsafe data. It is up to you what you choose! You cannot have both at the same time. In my own case I always choose SAFETY. You can explain to your manager that you have a performance problem, but NOBODY can understand why you lose some data :)

Good luck !/Bafta
 
Hi,

Performance and safety of data are like water and oil. Good performance => unsafe data. It is up to you what you choose! You cannot have both at the same time. In my own case I always choose SAFETY. You can explain to your manager that you have a performance problem, but NOBODY can understand why you lose some data :)

Good luck !/Bafta
Got it. Thank you!
 
