Disk IO Issue

techadmin

New Member
Hello,

We have configured Proxmox as a 3-node cluster. We are getting quite different results when we check disk IO on each node.
First Node:
========
root@pve1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.2207 s, 105 MB/s
root@pve1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.1546 s, 106 MB/s
root@pve1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.1819 s, 105 MB/s

Second Node:
==========
root@pve2:~#
root@pve2:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.75524 s, 286 MB/s
root@pve2:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.77761 s, 284 MB/s
root@pve2:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.69443 s, 291 MB/s


Third Node:
=========

root@pve3:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.63718 s, 141 MB/s
root@pve3:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.67387 s, 140 MB/s
root@pve3:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.03476 s, 266 MB/s

The VPSes on the cluster also show varying results:


# uptime
11:19:51 up 29 days, 1:05, 2 users, load average: 0.00, 0.01, 0.05

# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.04268 s, 353 MB/s

=====================

VM ID: 278

# uptime
11:42:21 up 29 days, 1:27, 1 user, load average: 0.05, 0.03, 0.05

# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.57731 s, 300 MB/s

==================

VM ID: 310

# uptime
12:45:05 up 28 days, 1:54, 2 users, load average: 0.00, 0.01, 0.05

# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.26059 s, 475 MB/s

=====================

VM ID: 100

# uptime
12:47:13 up 9 days, 1:04, 1 user, load average: 0.00, 0.02, 0.05

# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 7.53869 s, 142 MB/s

=====================
VM ID: 102
# uptime
12:49:04 up 9 days, 1:04, 1 user, load average: 0.08, 0.03, 0.05

# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.3638 s, 86.8 MB/s



Why are the disk speeds so different across the cluster VPSes, e.g. 475, 142 and 86 MB/s?


Any Ideas?
 
root@pve1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
That is not a benchmark. Writing zeros is a heavily optimized path and does not tell you anything about real-world performance.

I'd recommend using 'fio' for reliable disk benchmarking, both on the hosts and in the VMs.
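
For example, a small random-write test could look like the following. This is just a sketch: the file path /root/fio-test, the 4k block size, the 1G size and the 60 s runtime are arbitrary placeholders, so adjust them to your environment and make sure the test file sits on the storage you actually want to measure.

# rough example only - path, block size, size and runtime are arbitrary choices
fio --name=randwrite --filename=/root/fio-test --size=1G --bs=4k --rw=randwrite \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting
rm /root/fio-test

Run the same job on each node and inside the VMs and compare the reported IOPS and bandwidth; with --direct=1 the page cache is bypassed, so the numbers are far less skewed than the dd run above.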
 
