Hi, I’m trying to configure a host with Proxmox 5 and ZFS because of its great features, but I can’t get good IO performance compared to a Proxmox 5 host on LVM with the same specs.
At this moment I’m trying with two hosts with these specs:
· Supermicro Server
· Xeon E3-1270v6
· 2x Intel SSD DC S3520 240 GB
· 16 GB DDR4
Both servers run Proxmox 5. On the first server the disks are formatted with ext4 (no RAID); on the second server the disks are configured as a ZFS RAID 1 mirror.
Both servers are updated and running the latest kernel.
On each server I created a CentOS 7 VM with this configuration:
Server1:
bootdisk: scsi0
cores: 8
ide2: none,media=cdrom
memory: 4096
net0: virtio=7A:70:B7:AC:23:5C,bridge=vmbr0
numa: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-1,size=32G
scsihw: virtio-scsi-pci
sockets: 1
Server2:
bootdisk: scsi0
cores: 8
ide2: none,media=cdrom
memory: 4096
net0: virtio=1A:84:03:89:37:4E,bridge=vmbr0
numa: 1
ostype: l26
scsi0: local-zfs:vm-101-disk-1,size=32G
scsihw: virtio-scsi-pci
sockets: 1
I ran fio inside both VMs:
fio --name=randfile --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=8 --group_reporting
In the CentOS 7 VM on the LVM host, I get 18981 IOPS:
randfile: (groupid=0, jobs=8): err= 0: pid=7071: Fri Sep 1 11:42:11 2017
write: io=8192.0MB, bw=75925KB/s, iops=18981, runt=110485msec
slat (usec): min=1, max=161671, avg=341.20, stdev=3200.12
clat (usec): min=196, max=207266, avg=13059.06, stdev=20009.97
lat (usec): min=199, max=207273, avg=13400.46, stdev=20256.69
clat percentiles (usec):
| 1.00th=[ 1096], 5.00th=[ 2192], 10.00th=[ 3664], 20.00th=[ 4960],
| 30.00th=[ 5728], 40.00th=[ 6496], 50.00th=[ 7200], 60.00th=[ 8256],
| 70.00th=[ 9664], 80.00th=[11840], 90.00th=[22400], 95.00th=[60672],
| 99.00th=[105984], 99.50th=[120320], 99.90th=[152576], 99.95th=[164864],
| 99.99th=[189440]
bw (KB /s): min= 5282, max=117656, per=12.60%, avg=9563.05, stdev=3815.41
lat (usec) : 250=0.01%, 500=0.04%, 750=0.22%, 1000=0.50%
lat (msec) : 2=3.53%, 4=7.59%, 10=60.33%, 20=16.83%, 50=5.15%
lat (msec) : 100=4.44%, 250=1.37%
cpu : usr=0.50%, sys=1.93%, ctx=437658, majf=0, minf=242
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=2097152/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: io=8192.0MB, aggrb=75925KB/s, minb=75925KB/s, maxb=75925KB/s, mint=110485msec, maxt=110485msec
Disk stats (read/write):
dm-0: ios=0/2614403, merge=0/0, ticks=0/11926981, in_queue=11947665, util=94.13%, aggrios=0/2655908, aggrmerge=0/68964, aggrticks=0/11176185, aggrin_queue=11197603, aggrutil=94.05%
sda: ios=0/2655908, merge=0/68964, ticks=0/11176185, in_queue=11197603, util=94.05%
However, in the CentOS 7 VM on the ZFS host, I only get 4989 IOPS:
randfile: (groupid=0, jobs=8): err= 0: pid=7184: Fri Sep 1 11:47:26 2017
write: io=8192.0MB, bw=19959KB/s, iops=4989, runt=420285msec
slat (usec): min=1, max=1389.7K, avg=1092.88, stdev=11087.91
clat (usec): min=237, max=1428.7K, avg=50085.97, stdev=79131.65
lat (usec): min=327, max=1429.8K, avg=51179.15, stdev=79835.78
clat percentiles (msec):
| 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 16],
| 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 26], 60.00th=[ 29],
| 70.00th=[ 34], 80.00th=[ 52], 90.00th=[ 120], 95.00th=[ 192],
| 99.00th=[ 400], 99.50th=[ 486], 99.90th=[ 840], 99.95th=[ 955],
| 99.99th=[ 1401]
bw (KB /s): min= 5, max=24960, per=12.81%, avg=2556.49, stdev=1227.99
lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.08%, 4=0.19%, 10=5.01%, 20=27.76%, 50=46.48%
lat (msec) : 100=7.91%, 250=9.65%, 500=2.45%, 750=0.34%, 1000=0.08%
lat (msec) : 2000=0.04%
cpu : usr=0.20%, sys=1.38%, ctx=563382, majf=0, minf=243
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=2097152/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: io=8192.0MB, aggrb=19959KB/s, minb=19959KB/s, maxb=19959KB/s, mint=420285msec, maxt=420285msec
Disk stats (read/write):
dm-0: ios=59/2623582, merge=0/0, ticks=1306/53167732, in_queue=53194483, util=99.20%, aggrios=59/2646661, aggrmerge=0/84569, aggrticks=844/50802459, aggrin_queue=50842281, aggrutil=99.18%
sda: ios=59/2646661, merge=0/84569, ticks=844/50802459, in_queue=50842281, util=99.18%
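As a quick sanity check (numbers taken from the fio output above), the reported IOPS on both hosts line up with bandwidth divided by the 4 KiB block size, so the gap looks real rather than a reporting artifact:

```python
# fio reports bandwidth in KB/s for 4 KiB blocks, so IOPS ~= bw / 4.
def iops_from_bw(bw_kb_s: float, bs_kb: float = 4) -> float:
    """Derive IOPS from bandwidth (KB/s) and block size (KB)."""
    return bw_kb_s / bs_kb

lvm = iops_from_bw(75925)   # matches the 18981 IOPS reported on the LVM host
zfs = iops_from_bw(19959)   # matches the 4989 IOPS reported on the ZFS host
print(f"LVM: {lvm:.0f} IOPS, ZFS: {zfs:.0f} IOPS, ratio: {lvm / zfs:.1f}x")
```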
Is there anything I can do here? Most settings are still at their defaults.
I would like to use ZFS, but it seems to be very slow.
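In case it helps, this is how I would dump the ZFS settings that usually matter for 4k random writes on the second host. The pool and zvol names below are the Proxmox defaults (rpool, rpool/data/vm-101-disk-1) and are an assumption on my part; adjust them if your layout differs:

```shell
# Pool sector alignment (ashift) -- should match the SSD's sector size
# (NOTE: pool name "rpool" is the Proxmox installer default, assumed here)
zpool get ashift rpool

# Per-zvol settings that commonly affect small random writes:
# volblocksize (fixed at creation time), sync, compression, caching
zfs get volblocksize,sync,compression,primarycache,logbias rpool/data/vm-101-disk-1

# ARC size currently in use vs. its configured maximum
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
```

These commands only read state, so they are safe to run on a live host.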
Kind regards.