Hi, I'm running Proxmox 6.0-11 in my homelab and I've been running into issues with high iowait and low fsyncs. After some googling I managed to sort it out on / by adding barrier=0 to its mount options. However, I have two more ext4-formatted SSDs connected (one SATA, one USB) which do not appear in /etc/fstab, and one of them still exhibits low fsyncs. The SATA one is used as VM storage, the USB one for daily backups. I understand the implications of the barrier=0 option; the server is backed by a UPS and in the end it's just a "playground", there's no data on it I would miss.
/ is a Supermicro 32 GB SATA DOM connected to the internal SATA3 DOM port
/mnt/pve/vm-backup is a SanDisk X110 SSD in a USB3 external enclosure
/mnt/pve/vm-storage is a Crucial MX300 SSD connected to an internal SATA3 port
/etc/fstab
Code:
root@pve:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 barrier=0,errors=remount-ro 0 1
UUID=E6D0-BE2A /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
relevant lines from mount output
Code:
root@pve:~# mount | grep ext4
/dev/mapper/pve-root on / type ext4 (rw,relatime,nobarrier,errors=remount-ro)
/dev/sdd1 on /mnt/pve/vm-backup type ext4 (rw,relatime)
/dev/sdb1 on /mnt/pve/vm-storage type ext4 (rw,relatime)
The pveperf tests were done on an idle server with all VMs and containers shut down.
pveperf /
Code:
root@pve:~# pveperf /
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3750826
HD SIZE: 7.07 GB (/dev/mapper/pve-root)
BUFFERED READS: 456.85 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND: 2672.89
DNS EXT: 40.82 ms
DNS INT: 2.79 ms (lan)
pveperf /mnt/pve/vm-backup
Code:
root@pve:~# pveperf /mnt/pve/vm-backup
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3742463
HD SIZE: 233.73 GB (/dev/sdd1)
BUFFERED READS: 363.87 MB/sec
AVERAGE SEEK TIME: 0.31 ms
FSYNCS/SECOND: 2402.12
DNS EXT: 33.57 ms
DNS INT: 3.81 ms (lan)
pveperf /mnt/pve/vm-storage
Code:
root@pve:~# pveperf /mnt/pve/vm-storage
CPU BOGOMIPS: 24000.00
REGEX/SECOND: 3744982
HD SIZE: 686.67 GB (/dev/sdb1)
BUFFERED READS: 431.09 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND: 437.49
DNS EXT: 39.03 ms
DNS INT: 3.94 ms (lan)
Before adding barrier=0 to fstab for /, fsyncs on / were around 500, similar to what my vm-storage mount shows now. What confuses me is that the vm-storage and vm-backup mounts are mounted with the same options, yet the USB drive shows roughly 6x more fsyncs/sec than the internal SSD.
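I suppose I could rule the drive itself in or out by temporarily remounting vm-storage without barriers and re-running pveperf, something like the commands below (assuming ext4 accepts changing the barrier option on a remount; I haven't actually tried this yet):
Code:
# temporarily disable write barriers on the existing mount, then re-test
root@pve:~# mount -o remount,barrier=0 /mnt/pve/vm-storage
root@pve:~# pveperf /mnt/pve/vm-storage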
I tried adding the drive to /etc/fstab manually, both by UUID and by /dev/sd*, and rebooted the server, but it was still mounted with the same mount options.
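For reference, the fstab line I tried for the vm-storage drive looked roughly like this (the UUID below is a placeholder, not the real one):
Code:
# placeholder UUID - the real entry used the value reported by blkid /dev/sdb1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/pve/vm-storage ext4 defaults,barrier=0 0 2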
So now my questions are:
If I want to change the mount options for drives that are not in /etc/fstab, where would I do that?
Are the low fsyncs on the internal SSD due to the drive model, or due to the missing barrier=0 option?
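One guess regarding the first question: since both directories were set up through the Proxmox GUI, maybe they get mounted via systemd mount units rather than fstab? If that's the case I assume I'd look for something like the units below, but I haven't confirmed the unit names or whether editing them is the supported way to do this:
Code:
# list active mount units and inspect the one I'd expect for /mnt/pve/vm-storage
# (unit name is just my guess; the dash would be escaped as \x2d)
root@pve:~# systemctl list-units --type=mount | grep pve
root@pve:~# systemctl cat 'mnt-pve-vm\x2dstorage.mount'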
Thanks for any input!