After some struggle getting our setup running over two NVMe PCIe x4 units (https://forum.proxmox.com/threads/i-o-errors-with-nvm-drives.64974/#post-294108), and although we are reasonably happy with the system, I have just discovered that the FSYNCS/SECOND figure returned by pveperf is sometimes relatively poor:
Code:
pveperf
CPU BOGOMIPS:      114986.72
REGEX/SECOND:      4106212
HD SIZE:           41.22 GB (/dev/mapper/pve-root)
BUFFERED READS:    2663.07 MB/sec
AVERAGE SEEK TIME: 0.06 ms
FSYNCS/SECOND:     210.66
DNS EXT:           546.76 ms
DNS INT:           614.38 ms (alsur.es)
The value fluctuates between roughly 180 and 500 over time, but tends to stay at the lower end.
root runs on a Linux RAID 0 (mdadm) across the two NVMe drives, which is the PV of the PVE VG that holds the root LV.
I am assuming the old recommendation of "not using ext4" is obsolete, given that, as I understand it, SSD units degrade more quickly under ext3 than ext4, so our mount is pretty straightforward, left as the original installer set it:
/dev/pve/root / ext4 errors=remount-ro 0 1
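For context, the stack described above (two NVMe drives -> mdadm RAID 0 -> LVM PV -> PVE VG -> root LV) can be double-checked with standard tools; this is only a quick sketch, and the md device name is an assumption:
Code:
cat /proc/mdstat                    # confirm the RAID 0 array and its NVMe members
mdadm --detail /dev/md0             # chunk size and state of the array (md0 assumed)
pvs && vgs && lvs -a -o +devices    # LVM view: PV on the md device, pve VG, root LV
findmnt /                           # filesystem and mount options actually in effect on /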
The value for a RAID 0 over the same two NVMe units:
Code:
pveperf /tmp/
CPU BOGOMIPS:      114986.72
REGEX/SECOND:      3972371
HD SIZE:           14.70 GB (/dev/mapper/data-tmp)
BUFFERED READS:    2878.57 MB/sec
AVERAGE SEEK TIME: 0.02 ms
FSYNCS/SECOND:     11633.78
DNS EXT:           618.83 ms
DNS INT:           657.43 ms (alsur.es)
which could also be related to the more aggressively tuned mount options:
/dev/mapper/data-tmp /tmp ext4 nofail,noatime,data=writeback,barrier=0,errors=continue 0 0
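To see how much of the /tmp advantage comes from those mount options rather than from the volume itself, one diagnostic (a sketch only, and risky: running without barriers trades data integrity on power loss for speed, and data=writeback generally cannot be changed on a remount) is to temporarily disable barriers on root, re-run pveperf, and restore the default straight away:
Code:
mount -o remount,barrier=0 /    # measurement only -- do not leave this in place
pveperf /
mount -o remount,barrier=1 /    # restore write barriers immediately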
But the most extraordinary result comes from the same test over a plain standard SSD RAID 1 (built through LVM instead of mdadm + LVM):
Code:
pveperf /home2/serverxxxx/VIDEO/
CPU BOGOMIPS:      114986.72
REGEX/SECOND:      4086728
HD SIZE:           195.86 GB (/dev/mapper/data-video)
BUFFERED READS:    535.05 MB/sec
AVERAGE SEEK TIME: 0.19 ms
FSYNCS/SECOND:     1077.81
DNS EXT:           551.59 ms
DNS INT:           663.95 ms (alsur.es)
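"RAID 1 built through LVM" here means a mirrored LV rather than an md array used as a PV. A minimal sketch of that kind of layout, with device names and size purely as assumptions, would be:
Code:
pvcreate /dev/sda /dev/sdb                          # two SATA SSDs, names assumed
vgcreate data /dev/sda /dev/sdb
lvcreate --type raid1 -m 1 -L 200G -n video data    # mirrored LV, no mdadm involved
mkfs.ext4 /dev/data/video
The practical difference is that the mirroring is handled by the dm-raid target inside LVM, so there is no separate /dev/mdX device in the stack.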
I was hoping to test the behaviour inside one of the containers (LVM-thin), but I am not sure whether that can be done; a rough approximation is sketched below.
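pveperf itself is a host-side tool, but something close to its FSYNCS/SECOND figure can be approximated from inside a container with fio's synchronous write test. This is only a sketch, assuming fio is installed in the container and that /root sits on the LVM-thin volume:
Code:
fio --name=fsynctest --directory=/root --ioengine=sync \
    --rw=write --bs=4k --size=256M --fsync=1 \
    --runtime=30 --time_based
Since every 4k write is followed by an fsync, the IOPS fio reports for this job is roughly comparable to pveperf's FSYNCS/SECOND.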
I am not sure what I am missing or what is misbehaving in my root setup, as I was expecting much better results than that on the PVE root.
Also, for the record, on a remote server with two standard (x3) NVMe units and the same mdadm + LVM PVE VG layout, the results are:
Code:
pveperf
CPU BOGOMIPS:      60672.00
REGEX/SECOND:      3943638
HD SIZE:           33.52 GB (/dev/md2)
BUFFERED READS:    407.30 MB/sec
AVERAGE SEEK TIME: 0.09 ms
FSYNCS/SECOND:     10260.04
DNS EXT:           36.70 ms
	
			
So yes, I am assuming something is not working as expected.