Extremely slow write speeds

Saulius

Hi, I have a Proxmox server and its write speeds are very slow; any help would be extremely appreciated.

It has two SSD disks, and the rest of the disks are HDDs.
# pveperf
CPU BOGOMIPS: 120025.56
REGEX/SECOND: 640646
HD SIZE: 171.45 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 3943.74


# sync; dd if=/dev/zero of=/tmp/temp bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.1557 s, 66.5 MB/s
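(A variant that flushes the file to disk before dd reports its rate, rather than relying on the trailing sync -- just a sketch, assuming GNU dd as shipped with PVE:

Code:
# force the data to disk before dd prints the throughput
dd if=/dev/zero of=/tmp/temp bs=1M count=1024 conv=fdatasync
# note: /dev/zero data compresses away if the dataset has compression enabled,
# so this can overstate real write throughput on ZFS
)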


# hdparm -tT /dev/sda (SSD)

/dev/sda:
Timing cached reads: 3848 MB in 2.00 seconds = 1926.72 MB/sec
Timing buffered disk reads: 414 MB in 3.00 seconds = 137.85 MB/sec


# hdparm -tT /dev/sdb

/dev/sdb:
Timing cached reads: 4646 MB in 2.01 seconds = 2316.19 MB/sec
Timing buffered disk reads: 348 MB in 3.00 seconds = 115.84 MB/sec

3.7TB disk:
/dev/sde:
Timing cached reads: 3516 MB in 2.00 seconds = 1761.39 MB/sec
Timing buffered disk reads: 288 MB in 3.02 seconds = 95.49 MB/sec


sync; dd if=/dev/zero of=/zfs2/test2 bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.47 s, 69.4 MB/s


zpool get all | grep ashift
zfs2 ashift 12 local
rpool ashift 12 local
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/tmp/test --bs=4k --iodepth=64 --size=200MB --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
Jobs: 1 (f=1): [m(1)] [97.9% done] [11314KB/3906KB/0KB /s] [2828/976/0 iops] [eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=28784: Wed Jan 27 21:51:12 2021
read : io=153888KB, bw=3403.5KB/s, iops=850, runt= 45216msec
write: io=50912KB, bw=1125.1KB/s, iops=281, runt= 45216msec
cpu : usr=2.78%, sys=83.27%, ctx=2686, majf=0, minf=19
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=38472/w=12728/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=153888KB, aggrb=3403KB/s, minb=3403KB/s, maxb=3403KB/s, mint=45216msec, maxt=45216msec
WRITE: io=50912KB, aggrb=1125KB/s, minb=1125KB/s, maxb=1125KB/s, mint=45216msec, maxt=45216msec
 
And please tell the forum which SSD model you use.
 
Raid Controller: product: MegaRAID SAS 2008 [Falcon]
vendor: LSI Logic / Symbios Logic

SSD drive 200 GB (MZ-5EA2000-0D3 )

and seven 4TB Seagate disks (Seagate 4TB Enterprise Capacity HDD, 7200 RPM, SATA 6Gbps, 128 MB cache, internal bare drive, ST4000NM0033)
 
What I found regarding this SSD:

Code:
− Host transfer rate: 300 MB/s
− Sustained Data Read : 250MB/s
− Sustained Data Write : 220MB/s (110MB/s for 100GB)
− Random Read IOPS : 43K IOPS
− Random Write IOPS : 11K IOPS (5.5K IOPS for 100GB)

Try disabling the write cache on this SSD; some enterprise SSDs perform better without it.
You can do that via "hdparm -W0 /dev/sda" and enable it again with "hdparm -W1 /dev/sda".

Keep in mind this is not a persistent setting; after a reboot it is enabled again.
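If it helps and you want the change to survive reboots, one possible way (an assumption on my part, based on the hdparm.conf mechanism in Debian, which PVE is built on) is an entry like:

Code:
# /etc/hdparm.conf -- the hdparm boot hooks re-apply these options
/dev/sda {
    write_cache = off
}

Using a stable /dev/disk/by-id/... path instead of /dev/sda is safer in case device names change between boots.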
 
I ran hdparm -W0 /dev/sda -- and note that the same slow speeds occur on the non-SSD drives (Seagate 4TB Enterprise Capacity HDD, 7200 RPM, SATA 6Gbps, 128 MB cache, internal bare drive, ST4000NM0033).

fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/tmp/test --bs=4k --iodepth=64 --size=200MB --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 200MB)
Jobs: 1 (f=1): [m(1)] [97.1% done] [14320KB/4800KB/0KB /s] [3580/1200/0 iops] [eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=9957: Thu Jan 28 06:17:25 2021
read : io=153888KB, bw=4826.7KB/s, iops=1206, runt= 31883msec
write: io=50912KB, bw=1596.9KB/s, iops=399, runt= 31883msec
cpu : usr=2.09%, sys=86.36%, ctx=2632, majf=0, minf=15
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=38472/w=12728/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=153888KB, aggrb=4826KB/s, minb=4826KB/s, maxb=4826KB/s, mint=31883msec, maxt=31883msec
WRITE: io=50912KB, aggrb=1596KB/s, minb=1596KB/s, maxb=1596KB/s, mint=31883msec, maxt=31883msec
 
Your SSD is a bit faster without the cache, as you can see from the IOPS.

and note that the same slow speeds occur on the non-SSD drives (Seagate 4TB Enterprise Capacity HDD, 7200 RPM, SATA 6Gbps, 128 MB cache, internal bare drive, ST4000NM0033)
What speed are you expecting from the HDDs? To me your results look fine for HDDs.

fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/tmp/test --bs=4k --iodepth=64 --size=200MB --readwrite=randrw --rwmixread=75
For a better fio result, I recommend using a 4 GB test file.
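For example, the same run with only the file size bumped up:

Code:
fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/tmp/test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75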
 
On a different Proxmox server, my speeds are significantly faster:

SSD: INTEL SSDSC2BB12 120GB

− Sustained Data Read : 445MB/s
− Sustained Data Write : 135MB/s


differentproxmox# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/tmp/test --bs=4k --iodepth=64 --size=200MB --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 200MB)

test: (groupid=0, jobs=1): err= 0: pid=23231: Thu Jan 28 10:08:25 2021
read : io=153888KB, bw=334539KB/s, iops=83634, runt= 460msec
write: io=50912KB, bw=110678KB/s, iops=27669, runt= 460msec
cpu : usr=20.70%, sys=76.03%, ctx=136, majf=7, minf=6
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=38472/w=12728/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=153888KB, aggrb=334539KB/s, minb=334539KB/s, maxb=334539KB/s, mint=460msec, maxt=460msec
WRITE: io=50912KB, aggrb=110678KB/s, minb=110678KB/s, maxb=110678KB/s, mint=460msec, maxt=460msec

// fio on ST2000NM0011 2TB non-SSD (the first server's disks are 4TB; both are 7200 RPM SATA 6.0Gb/s)
fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/zfs1/test2 --bs=4k --iodepth=64 --size=200MB --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 200MB)

test: (groupid=0, jobs=1): err= 0: pid=25009: Thu Jan 28 10:13:55 2021
read : io=153888KB, bw=432270KB/s, iops=108067, runt= 356msec
write: io=50912KB, bw=143011KB/s, iops=35752, runt= 356msec
cpu : usr=14.37%, sys=85.07%, ctx=1, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=38472/w=12728/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: io=153888KB, aggrb=432269KB/s, minb=432269KB/s, maxb=432269KB/s, mint=356msec, maxt=356msec
WRITE: io=50912KB, aggrb=143011KB/s, minb=143011KB/s, maxb=143011KB/s, mint=356msec, maxt=356msec

differentproxmox:
hdparm -tT /dev/sda

/dev/sda:

Timing cached reads: 16130 MB in 1.99 seconds = 8088.97 MB/sec
Timing buffered disk reads: 836 MB in 3.00 seconds = 278.34 MB/sec

// 2TB non-SSD drive
hdparm -tT /dev/sde
/dev/sde:
Timing cached reads: 16084 MB in 1.99 seconds = 8064.63 MB/sec
Timing buffered disk reads: 370 MB in 3.05 seconds = 121.24 MB/sec

-- Note -- my main issue is that backups are taking forever, at 3-5 MB/sec (lzo to /zfs2).
 
To me your benchmarks seem okay; if you think that is not the case, please give us more information. Tell us exactly what system you use, how the disks are connected and configured, what SMART values they have, and so on.
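For example, output from something like the following (assuming smartmontools is installed) would give a picture of the disks and the pool layout:

Code:
lsblk -o NAME,SIZE,MODEL,ROTA
zpool status
smartctl -a /dev/sda   # repeat for each disk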

If your backups are taking too long, it seems to be a problem on the target rather than the source, because your benchmarks do not look bad at all.

Tell us more about your backup setup and your backup job on the PVE side.
 