Hello,
We're installing a new Proxmox server.
We have a RAID 10 volume consisting of NVMe disks only.
Mounted directly on the host system as an ext4 volume, we get about 20 GB/s read speed.
However, when we convert it to LVM-thin and use it on a virtual server, we only get about 8 GB/s.
Why do we lose more than 50% of the read speed when the volume is LVM-thin?
We haven't configured any restrictions such as I/O limits.
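For reference, this is how one can check on the host that no bandwidth or IOPS caps are set on the VM's disk; any mbps_rd/mbps_wr or iops_rd/iops_wr limits would show up on the disk line. The VM ID 100 below is only an example, not our actual ID:

root@proxmox:~# qm config 100 | grep -E 'scsi|virtio|sata|ide'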
Speed test (on the host system, without LVM):
root@proxmox:/nvme# fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.12
Starting 1 process
TEST: (groupid=0, jobs=1): err= 0: pid=2712: Fri Jan 17 14:34:21 2020
read: IOPS=19.3k, BW=18.8GiB/s (20.2GB/s)(10.0GiB/531msec)
slat (usec): min=34, max=346, avg=50.34, stdev=18.45
clat (usec): min=226, max=10089, avg=1544.08, stdev=328.64
lat (usec): min=274, max=10429, avg=1594.46, stdev=340.25
clat percentiles (usec):
| 1.00th=[ 502], 5.00th=[ 1434], 10.00th=[ 1483], 20.00th=[ 1516],
| 30.00th=[ 1532], 40.00th=[ 1549], 50.00th=[ 1565], 60.00th=[ 1565],
| 70.00th=[ 1582], 80.00th=[ 1598], 90.00th=[ 1631], 95.00th=[ 1647],
| 99.00th=[ 1729], 99.50th=[ 1811], 99.90th=[ 7308], 99.95th=[ 8717],
| 99.99th=[ 9765]
bw ( MiB/s): min=19238, max=19238, per=99.76%, avg=19238.00, stdev= 0.00, samples=1
iops : min=19238, max=19238, avg=19238.00, stdev= 0.00, samples=1
lat (usec) : 250=0.01%, 500=0.99%, 750=0.98%, 1000=1.03%
lat (msec) : 2=96.71%, 4=0.08%, 10=0.21%, 20=0.01%
cpu : usr=1.32%, sys=97.55%, ctx=125, majf=0, minf=8201
IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=93.6%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.2%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=18.8GiB/s (20.2GB/s), 18.8GiB/s-18.8GiB/s (20.2GB/s-20.2GB/s), io=10.0GiB (10.7GB), run=531-531msec
Disk stats (read/write):
hptblock0n0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
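The ~8 GB/s figure was measured inside the VM with essentially the same fio invocation; only the target path differs (the /data mount point and hostname below are just placeholders for the guest filesystem that sits on the LVM-thin disk):

root@vm:/data# fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct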
Thanks in advance!