FSYNCS/S performance

Ting

Member
Oct 19, 2021
Hi,

I have a 4-node Proxmox 7 cluster. Two of the nodes use 15k SAS disks as their Proxmox system disks, and the other two use SSDs. I am running Ceph across all 4 nodes, and all my VMs are on Ceph storage. I ran a benchmark with the "pveperf" command and am wondering: since all my VMs are on Ceph, how important is FSYNCS/S to my system? The two SAS-disk nodes only give me about 150/s, while the two SSD-system-disk nodes give me about 900/s.
 
One more thing: all 4 nodes run ZFS (RAID 1) on the system disks (2 disks per node).
 
Would you mind elaborating on your disk layout on each server? It's unclear which disks are used for Ceph and which are used as ZFS for the Proxmox OS.

In any case, if you mix disks with different performance in the same pool, expect the pool to perform as fast as the slowest disk type in it.
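
If you want to check what a Ceph pool itself delivers, independent of anything mounted locally, you can benchmark it directly with rados bench. A minimal sketch, assuming the pool is named "vm-pool" (hypothetical name, substitute your own):

Code:
# 10 seconds of 4 MB object writes against the pool; keep the
# objects so they can be reused for the read test
rados bench -p vm-pool 10 write --no-cleanup
# Sequential reads of the objects written above
rados bench -p vm-pool 10 seq
# Remove the benchmark objects when done
rados -p vm-pool cleanup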
 
Thanks for your reply, here is my layout:

1. Node #1: sda & sdb = zfs-1 (both 15k SAS disks, running Proxmox), sdc = 1 TB SSD OSD
2. Node #2: sda & sdb = zfs-1 (both 15k SAS disks, running Proxmox), sdc = 1 TB SSD OSD
3. Node #3: sda & sdb = zfs-1 (both SSDs, running Proxmox), sdc = 1 TB SSD OSD
4. Node #4: sda & sdb = zfs-1 (both SSDs, running Proxmox), sdc = 1 TB SSD OSD
 
VMs use the storage pool from those 4 SSD Ceph OSDs. The zfs-1 disks only run the Proxmox OS and are not involved in any VM pool.
 
From the pveperf man page:

Code:
SYNOPSIS
       pveperf [PATH]

DESCRIPTION
       Tries to gather some CPU/hard disk performance data on the hard disk mounted at PATH (/ is used as default):

So you are getting a benchmark of the path you specify, or "/" if none is set, which belongs to that ZFS RAID 1. Since you have spindles on two servers and SSDs on the other two, you get different numbers on each.
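
For example, on one of the SAS nodes you could compare the default run against an explicit path; a quick sketch, assuming a CephFS storage mounted at /mnt/pve/cephfs (hypothetical, only applies if you have CephFS set up):

Code:
# Benchmarks "/", i.e. the ZFS RAID 1 system disks
pveperf
# Benchmarks the given path instead, e.g. a CephFS mount
pveperf /mnt/pve/cephfs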

As you are using separate SSDs for the OSDs, the performance of the VMs depends on those disks only and won't be influenced by the ZFS disks.
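
If you want to see what sync-write performance looks like from a VM's point of view (i.e., on the Ceph SSDs rather than the ZFS system disks), you could run fio inside a guest. A minimal sketch, assuming a scratch directory at /tmp/fio-test (hypothetical path, needs about 1 GB free):

Code:
# 4k sync writes with an fsync after every write, roughly what
# pveperf's FSYNCS/S measures; run inside a VM so the I/O hits
# the Ceph RBD storage
mkdir -p /tmp/fio-test
fio --name=fsync-test --directory=/tmp/fio-test \
    --rw=write --bs=4k --size=1G \
    --runtime=60 --time_based \
    --ioengine=sync --fsync=1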
 
