Hi, I have a Proxmox server and its write speeds are very slow. Any help would be greatly appreciated.
The setup has two SSD disks, plus several larger spinning disks.
# pveperf
CPU BOGOMIPS: 120025.56
REGEX/SECOND: 640646
HD SIZE: 171.45 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 3943.74
# sync; dd if=/dev/zero of=/tmp/temp bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.1557 s, 66.5 MB/s
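One caveat with this dd test: on ZFS with compression enabled, a stream of zeros from /dev/zero compresses to almost nothing, and the trailing `sync` is not included in dd's timed run, so the reported MB/s may not reflect real disk throughput. A sketch of a more representative invocation (file name /tmp/ddtest is just an example) uses incompressible data and `conv=fdatasync` so dd waits for the data to reach disk before reporting:

```shell
# Write incompressible data and include the flush in the timing.
# conv=fdatasync makes dd fdatasync() the file before it reports speed.
dd if=/dev/urandom of=/tmp/ddtest bs=1M count=256 conv=fdatasync
```

/dev/urandom itself can bottleneck on older kernels, so for large runs a pre-generated random file as the input is another option.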
# hdparm -tT /dev/sda (SSD)
/dev/sda:
Timing cached reads: 3848 MB in 2.00 seconds = 1926.72 MB/sec
Timing buffered disk reads: 414 MB in 3.00 seconds = 137.85 MB/sec
# hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 4646 MB in 2.01 seconds = 2316.19 MB/sec
Timing buffered disk reads: 348 MB in 3.00 seconds = 115.84 MB/sec
# hdparm -tT /dev/sde (3.7 TB disk)
/dev/sde:
Timing cached reads: 3516 MB in 2.00 seconds = 1761.39 MB/sec
Timing buffered disk reads: 288 MB in 3.02 seconds = 95.49 MB/sec
# sync; dd if=/dev/zero of=/zfs2/test2 bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.47 s, 69.4 MB/s
# zpool get all | grep ashift
zfs2 ashift 12 local
rpool ashift 12 local
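ashift=12 looks correct for 4K-sector drives, so it may be worth checking the other dataset properties that commonly dominate write performance (compression, sync, logbias, recordsize, atime). A minimal check, assuming the pool names rpool and zfs2 from the output above, and guarded so it is a no-op on a machine without ZFS:

```shell
# Show ZFS properties that strongly affect write benchmarks.
# Pool names rpool/zfs2 are taken from the zpool output above;
# the guard skips the query where the zfs tool is unavailable.
if command -v zfs >/dev/null 2>&1; then
    zfs get compression,sync,logbias,recordsize,atime rpool zfs2
else
    echo "zfs not installed; skipping"
fi
```

In particular, `sync=always` or `atime=on` can noticeably depress small-write numbers.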
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=/tmp/test --bs=4k --iodepth=64 --size=200MB --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
Jobs: 1 (f=1): [m(1)] [97.9% done] [11314KB/3906KB/0KB /s] [2828/976/0 iops] [eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=28784: Wed Jan 27 21:51:12 2021
read : io=153888KB, bw=3403.5KB/s, iops=850, runt= 45216msec
write: io=50912KB, bw=1125.1KB/s, iops=281, runt= 45216msec
cpu : usr=2.78%, sys=83.27%, ctx=2686, majf=0, minf=19
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=38472/w=12728/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=153888KB, aggrb=3403KB/s, minb=3403KB/s, maxb=3403KB/s, mint=45216msec, maxt=45216msec
WRITE: io=50912KB, aggrb=1125KB/s, minb=1125KB/s, maxb=1125KB/s, mint=45216msec, maxt=45216msec