Hello.
I have PVE 6.0-7 on a Dell R730.
PVE is installed on an HDD RAID 1,
and there is an SSD RAID 1 for the MSSQL VMs and SQL data.
SSD model: 2 x SSDSC2KB480G8R, Dell-certified Intel S4x00/D3-S4x10 series (Intel D3-S4510, 480 GB).
# megacli -LDinfo -Lall -aALL
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :sysraid1hdd
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 3.637 TB
Sector Size : 512
Is VD emulated : No
Mirror Data : 3.637 TB
State : Optimal
Strip Size : 256 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAhead, Direct, Write Cache OK if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Direct, Write Cache OK if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Enabled
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: No
Virtual Drive: 1 (Target Id: 1)
Name :dbssd
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 401.940 GB
Sector Size : 512
Is VD emulated : Yes
Mirror Data : 401.940 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Enabled
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: No
LD has drives that support T10 power conditions: No
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: No
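Side note: the dbssd VD currently runs WriteThrough/ReadAheadNone, while the HDD VD runs WriteBack. In case it is relevant, the policy can be inspected and changed with MegaCli roughly like this (a sketch; -L1 is assumed to address the dbssd VD on adapter 0, and WriteBack should only be enabled with a healthy BBU/cache):

```shell
# Show the current cache policy of virtual drive 1 on adapter 0
megacli -LDGetProp -Cache -L1 -a0

# Switch virtual drive 1 to WriteBack (only with a working BBU)
megacli -LDSetProp WB -L1 -a0
```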
On the SSD RAID 1 I created a GPT partition formatted as ext4:
# parted /dev/sdb print
Model: DELL PERC H730 Mini (scsi)
Disk /dev/sdb: 432GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 432GB 432GB ext4 primary
# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content backup,vztmpl,iso
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
dir: ssddir
path /mnt/ssd-lin/images
content rootdir,images
shared 0
I created a Windows Server 2016 Standard VM as described in the official Proxmox wiki tutorials (the Windows best-practices/performance guide: raw disk image, paravirtualized VirtIO drivers, and so on):
# cat /etc/pve/qemu-server/101.conf
agent: 1
bootdisk: scsi0
cores: 2
cpu: host
ide0: local:iso/virtio-win-0.1.171.iso,media=cdrom,size=363020K
ide2: local:iso/en_windows_server_2016_updated_feb_2018_x64_dvd_11636692.iso,media=cdrom
memory: 8192
name: win-ssddir
net0: virtio=7A:50:73:82:CE:38,bridge=vmbr1,firewall=1
numa: 1
ostype: win10
scsi0: ssddir:101/vm-101-disk-0.raw,cache=writeback,discard=on,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=cab7bd37-7015-40d6-bb2a-c7dd9c097b66
sockets: 1
vmgenid: cf54452a-7a3d-4b4a-af00-634576b7a7c7
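For reference, the disk settings could also be varied with qm to rule out the caching mode (a sketch using VM ID 101 from above; iothread requires the virtio-scsi-single controller):

```shell
# Use a dedicated I/O thread per virtual disk
qm set 101 --scsihw virtio-scsi-single

# Try cache=none with native AIO instead of writeback
qm set 101 --scsi0 ssddir:101/vm-101-disk-0.raw,cache=none,aio=native,iothread=1,discard=on,size=50G
```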
A CrystalDiskMark test inside the VM shows very slow 4K random read and write.
VMs on the HDD array are actually faster than VMs on the SSD array.
On the host system, performance is normal:
# pveperf /mnt/ssd-lin/
CPU BOGOMIPS: 134418.40
REGEX/SECOND: 2137222
HD SIZE: 394.63 GB (/dev/sdb1)
BUFFERED READS: 353.10 MB/sec
AVERAGE SEEK TIME: 0.13 ms
FSYNCS/SECOND: 3293.86
DNS EXT: 14.72 ms
DNS INT: 7.95 ms (server.com)
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test4kreadwrite --filename=/mnt/ssd-lin/4test.raw --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test4kreadwrite: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.12
Starting 1 process
test4kreadwrite: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=270MiB/s,w=89.1MiB/s][r=69.1k,w=22.8k IOPS][eta 00m:00s]
test4kreadwrite: (groupid=0, jobs=1): err= 0: pid=8932: Mon Sep 30 20:33:16 2019
read: IOPS=71.5k, BW=279MiB/s (293MB/s)(3070MiB/10990msec)
bw ( KiB/s): min=208904, max=331560, per=100.00%, avg=289717.33, stdev=46596.04, samples=21
iops : min=52226, max=82890, avg=72429.33, stdev=11649.01, samples=21
write: IOPS=23.9k, BW=93.4MiB/s (97.9MB/s)(1026MiB/10990msec); 0 zone resets
bw ( KiB/s): min=68944, max=110672, per=100.00%, avg=96882.67, stdev=15946.85, samples=21
iops : min=17236, max=27668, avg=24220.67, stdev=3986.71, samples=21
cpu : usr=13.60%, sys=80.33%, ctx=63044, majf=0, minf=11
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=279MiB/s (293MB/s), 279MiB/s-279MiB/s (293MB/s-293MB/s), io=3070MiB (3219MB), run=10990-10990msec
WRITE: bw=93.4MiB/s (97.9MB/s), 93.4MiB/s-93.4MiB/s (97.9MB/s-97.9MB/s), io=1026MiB (1076MB), run=10990-10990msec
Disk stats (read/write):
sdb: ios=779122/260515, merge=0/27, ticks=381438/59662, in_queue=0, util=99.17%
**********************************
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test4kreadwrite --filename=/mnt/ssd-lin/1g-test.raw --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
test4kreadwrite: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.12
Starting 1 process
test4kreadwrite: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=316MiB/s,w=106MiB/s][r=80.8k,w=27.1k IOPS][eta 00m:00s]
test4kreadwrite: (groupid=0, jobs=1): err= 0: pid=15812: Mon Sep 30 21:08:18 2019
read: IOPS=64.7k, BW=253MiB/s (265MB/s)(768MiB/3037msec)
bw ( KiB/s): min=212008, max=328528, per=99.77%, avg=258214.67, stdev=54747.00, samples=6
iops : min=53002, max=82132, avg=64553.67, stdev=13686.75, samples=6
write: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(256MiB/3037msec); 0 zone resets
bw ( KiB/s): min=70832, max=110728, per=99.79%, avg=86278.67, stdev=18693.19, samples=6
iops : min=17708, max=27682, avg=21569.67, stdev=4673.30, samples=6
cpu : usr=13.41%, sys=81.46%, ctx=14246, majf=0, minf=115
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=196498,65646,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=253MiB/s (265MB/s), 253MiB/s-253MiB/s (265MB/s-265MB/s), io=768MiB (805MB), run=3037-3037msec
WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=256MiB (269MB), run=3037-3037msec
Disk stats (read/write):
sdb: ios=185452/61913, merge=0/1, ticks=76622/11295, in_queue=0, util=96.74%
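For an apples-to-apples comparison, the same fio job can also be run inside the Windows guest (fio ships Windows binaries; note the windowsaio engine and the escaped drive colon in the filename — D: is an assumed data drive):

```shell
fio --randrepeat=1 --ioengine=windowsaio --direct=1 --gtod_reduce=1 --name=test4kreadwrite --filename=D\:\4test.raw --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
```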
I have also tried an LVM volume on top of the SSD RAID, with the same result as the directory storage.
What can I do to bring SSD performance inside the Windows VM back to normal?