Backups from iSCSI-backed LVM very slow

neolog

New Member
Mar 16, 2022
Hello,

I've found a few threads about a similar problem, but none of them has a solution.

We have a 3-node cluster with multipathed iSCSI storage on an IBM Storwize SAN. On Proxmox 6.4, which we use, there is a shared LVM that stores the VMs for all nodes. We haven't noticed any performance issues inside the VMs, but now there is a request to do a one-time backup of all the VMs, and that is giving me a headache.

Whatever backup target I use, the bottleneck is always the LVM read speed, which tops out at 30-35 MiB/s. The whole network is 10 Gbit, and the connections between the SAN and the nodes have been tested with iperf, reaching 9.x Gbit/s, so the network is not the issue.
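For reference, the throughput check was just a plain iperf run between a node and the storage network, roughly along these lines (iperf3 shown here, the address is only a placeholder):

Code:
# on the test target in the storage network
iperf3 -s
# on the Proxmox node (placeholder address)
iperf3 -c 10.0.0.10 -t 30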

For a test I reinstalled one node from Proxmox to Windows Server 2019, and using the same iSCSI LUN formatted as NTFS I got read and write speeds of around 1.3 GB/s, so the drives or RAID shouldn't be the problem either.

Could someone please point me in the right direction for solving this? The hardware is fairly modern, so I don't think the performance should be this poor.

Thank you!

P.S.: I have also tested this scenario in the lab with a single node running a clean Proxmox install, connected with one 10 Gig cable to a spare SAN, and I am reaching similar performance, so there shouldn't be an issue with multipath either.
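For anyone checking the same thing: the active paths on a node can be listed with the usual multipath tooling (shown only as the standard check, output obviously differs per setup):

Code:
multipath -ll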
 
Thanks, I am looking at it, but I don't really see what I should change.

Meanwhile I tried the fio benchmark tool, but it just confirms the results I got with the backup speeds:

Locally stored VM:
Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/mapper/pve-vm--100--disk--0
seq_read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=272MiB/s][r=69.7k IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=1252: Wed Mar 16 14:22:46 2022
  read: IOPS=52.0k, BW=207MiB/s (217MB/s)(12.1GiB/60001msec)
    slat (usec): min=3, max=128, avg= 5.42, stdev= 1.65
    clat (nsec): min=475, max=103917k, avg=12804.39, stdev=131254.82
     lat (usec): min=8, max=103925, avg=18.31, stdev=131.38
    clat percentiles (usec):
     |  1.00th=[    7],  5.00th=[    7], 10.00th=[    8], 20.00th=[    8],
     | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[   10], 80.00th=[   11], 90.00th=[   22], 95.00th=[   22],
     | 99.00th=[   24], 99.50th=[   31], 99.90th=[   57], 99.95th=[  133],
     | 99.99th=[ 6521]
   bw (  KiB/s): min=31896, max=328496, per=99.72%, avg=211353.03, stdev=90739.42, samples=119
   iops        : min= 7974, max=82124, avg=52838.24, stdev=22684.87, samples=119
  lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.01%, 4=0.01%, 10=79.89%, 20=1.83%, 50=18.16%
  lat (usec)   : 100=0.06%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.01%, 50=0.01%
  lat (msec)   : 250=0.01%
  cpu          : usr=7.27%, sys=40.60%, ctx=3179368, majf=0, minf=22
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3179365,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=12.1GiB (13.0GB), run=60001-60001msec


iSCSI LVM stored VM:
Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/storage-old-01-lvm/vm-101-disk-0
seq_read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [R(1)][88.3%][r=4888KiB/s][r=1222 IOPS][eta 00m:07s]
Jobs: 1 (f=1): [R(1)][90.0%][r=7300KiB/s][r=1825 IOPS][eta 00m:06s]
Jobs: 1 (f=1): [R(1)][91.7%][r=8224KiB/s][r=2056 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [R(1)][100.0%][r=5864KiB/s][r=1466 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=32665: Wed Mar 16 14:17:52 2022
  read: IOPS=1769, BW=7077KiB/s (7246kB/s)(415MiB/60001msec)
    slat (usec): min=4, max=172, avg=18.15, stdev= 8.03
    clat (usec): min=303, max=19480, avg=544.15, stdev=573.02
     lat (usec): min=311, max=19510, avg=562.63, stdev=573.48
    clat percentiles (usec):
     |  1.00th=[  363],  5.00th=[  396], 10.00th=[  412], 20.00th=[  437],
     | 30.00th=[  449], 40.00th=[  465], 50.00th=[  478], 60.00th=[  490],
     | 70.00th=[  502], 80.00th=[  523], 90.00th=[  553], 95.00th=[  611],
     | 99.00th=[ 2835], 99.50th=[ 5276], 99.90th=[ 7701], 99.95th=[ 9765],
     | 99.99th=[15795]
   bw (  KiB/s): min= 3248, max= 9165, per=99.98%, avg=7074.64, stdev=1429.72, samples=120
   iops        : min=  812, max= 2291, avg=1768.62, stdev=357.41, samples=120
  lat (usec)   : 500=68.23%, 750=28.78%, 1000=0.83%
  lat (msec)   : 2=0.84%, 4=0.62%, 10=0.67%, 20=0.05%
  cpu          : usr=2.00%, sys=6.36%, ctx=106151, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=106150,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=7077KiB/s (7246kB/s), 7077KiB/s-7077KiB/s (7246kB/s-7246kB/s), io=415MiB (435MB), run=60001-60001msec
 
I am not sure about all the options, as my fio experience is very limited; I was using IOMeter for my tests.

However, iodepth=1 means that only a single IO is in flight at a time: only once it completes is the next IO issued. At least that is how other test tools work. So imho it is expected that the values you see are not good. You should run a test with 16-32 outstanding commands; this will raise throughput and IOPS as well.

Also, a 4k block size will not generate much bandwidth. You should also try a 16k, 32k or 64k block size; bandwidth should increase as well.
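Something along these lines, just your command with a higher queue depth and a larger block size (adjust the device path to yours), should show whether the array delivers more with parallel requests:

Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=64K --numjobs=1 --iodepth=16 --runtime=60 --time_based --name seq_read --filename=/dev/storage-old-01-lvm/vm-101-disk-0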

Imho your measurement methodology is problematic (at least from my perspective).

I have no clue / valid explanation for why your backup tool does not perform well, though.
HTH
 
Actually it is almost the same when using 16k, 32k or 64k blocks; it is almost as if it were capped at 11 MB/s:

Code:
iodepth=1, 16k: READ: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=616MiB (646MB), run=60003-60003msec
iodepth=1, 32k: READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=608MiB (637MB), run=60004-60004msec
iodepth=1, 64k: READ: bw=10.2MiB/s (10.7MB/s), 10.2MiB/s-10.2MiB/s (10.7MB/s-10.7MB/s), io=612MiB (642MB), run=60005-60005msec

iodepth=16, 16k: READ: bw=6626KiB/s (6785kB/s), 6626KiB/s-6626KiB/s (6785kB/s-6785kB/s), io=388MiB (407MB), run=60033-60033msec
iodepth=16, 64k: READ: bw=9119KiB/s (9338kB/s), 9119KiB/s-9119KiB/s (9338kB/s-9338kB/s), io=535MiB (561MB), run=60074-60074msec

iodepth=16, 4M: READ: bw=9132KiB/s (9351kB/s), 9132KiB/s-9132KiB/s (9351kB/s-9351kB/s), io=576MiB (604MB), run=64590-64590msec
 
Are you sure that your interfaces are actually negotiating the correct link speed?
That smells like a 100 Mbit interface somehow...
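A quick way to verify, assuming ethtool is available (replace eno1 with your actual NIC name):

Code:
ethtool eno1 | grep -i speed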
 
Yes, I was also thinking about that, but there is only one active interface on the node, which is also used for the VMs' internet connection, and I've been downloading ISOs at around 37 MB/s, so it is definitely not 100 Mbit on the server side.

On the storage side, there were 1.2 GB/s reads from the NTFS-backed iSCSI LUN when the node was running Windows Server 2019, so it is not 100 Mbit on the storage side either...
 
Hello, today I did some more tests on the production cluster instead of the lab cluster. When I run fio with a bigger iodepth on the production cluster, I get much better results, so it looks like the 100 Mbit-like problem only exists in the lab.

Code:
PRODUCTION: fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=64K --numjobs=1 --iodepth=16 --runtime=60 --time_based --name seq_read --filename=/dev/mapper/vg_storage--test02-vm--116--disk--0

I am getting:
Code:
READ: bw=1634MiB/s (1713MB/s), 1634MiB/s-1634MiB/s (1713MB/s-1713MB/s), io=95.7GiB (103GB), run=60001-60001msec

However, vzdump backup is still at the same 35 MB/s even on the production cluster. Is there any way to run a vzdump backup with more outstanding IO? With iodepth=1 I get the same 35 MB/s, so if I could make the backup issue more IO in parallel, it should be much faster.
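For reference, a plain sequential read with direct IO behaves roughly like a single outstanding request, so it can be used to cross-check the iodepth=1 numbers outside of fio; just a sketch using the same LV as in the fio test above:

Code:
dd if=/dev/mapper/vg_storage--test02-vm--116--disk--0 of=/dev/null bs=64K iflag=direct status=progress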

Thanks
 
