[SOLVED] How to change mount options for drives that are not in fstab?

Nov 17, 2019
Hi, I'm running Proxmox 6.0-11 in my homelab and I've been running into issues with high IO wait and low fsyncs. After some googling I managed to sort it on / by adding barrier=0 to its mount options. However, I have two more ext4-formatted SSD drives connected (one SATA, one USB) which do not appear in /etc/fstab, and one of them still exhibits low fsyncs. The SATA one is used as VM storage, the USB one for daily backups. I understand the implications of the barrier=0 option; the server is backed by a UPS, and in the end it's just a "playground" with no data I would miss.

/ is a Supermicro SATA DOM 32 GB drive connected to the internal SATA3 DOM port
/mnt/pve/vm-backup is a SanDisk X110 SSD in a USB3 external enclosure
/mnt/pve/vm-storage is a Crucial MX300 SSD connected to an internal SATA3 port

/etc/fstab
Code:
root@pve:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 barrier=0,errors=remount-ro 0 1
UUID=E6D0-BE2A /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

Relevant lines from the mount output:
Code:
root@pve:~# mount | grep ext4
/dev/mapper/pve-root on / type ext4 (rw,relatime,nobarrier,errors=remount-ro)
/dev/sdd1 on /mnt/pve/vm-backup type ext4 (rw,relatime)
/dev/sdb1 on /mnt/pve/vm-storage type ext4 (rw,relatime)


The pveperf tests were done on an idle server with all VMs and containers shut down.

pveperf /
Code:
root@pve:~# pveperf /
CPU BOGOMIPS:      24000.00
REGEX/SECOND:      3750826
HD SIZE:           7.07 GB (/dev/mapper/pve-root)
BUFFERED READS:    456.85 MB/sec
AVERAGE SEEK TIME: 0.17 ms
FSYNCS/SECOND:     2672.89
DNS EXT:           40.82 ms
DNS INT:           2.79 ms (lan)

pveperf /mnt/pve/vm-backup
Code:
root@pve:~# pveperf /mnt/pve/vm-backup
CPU BOGOMIPS:      24000.00
REGEX/SECOND:      3742463
HD SIZE:           233.73 GB (/dev/sdd1)
BUFFERED READS:    363.87 MB/sec
AVERAGE SEEK TIME: 0.31 ms
FSYNCS/SECOND:     2402.12
DNS EXT:           33.57 ms
DNS INT:           3.81 ms (lan)

pveperf /mnt/pve/vm-storage
Code:
root@pve:~# pveperf /mnt/pve/vm-storage
CPU BOGOMIPS:      24000.00
REGEX/SECOND:      3744982
HD SIZE:           686.67 GB (/dev/sdb1)
BUFFERED READS:    431.09 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND:     437.49
DNS EXT:           39.03 ms
DNS INT:           3.94 ms (lan)

Before adding barrier=0 to fstab for /, fsyncs on / were around 500, similar to what my vm-storage mount shows now. What confuses me is that the vm-storage and vm-backup mounts are mounted with the same options, yet the USB drive shows about 6x more fsyncs/sec than the internal SSD.
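
I'll also check whether write caching is set differently on the two drives; as far as I know hdparm -W without a value only reports the current write-cache setting (and it may not work through the USB bridge):
Code:
root@pve:~# hdparm -W /dev/sdb   # internal SSD (vm-storage)
root@pve:~# hdparm -W /dev/sdd   # USB SSD (vm-backup)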

I tried adding the drives to /etc/fstab manually, both by UUID and by /dev/sd*, and rebooted the server, but they were still mounted with the same mount options.
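
Something like this should show what is actually behind those mounts (findmnt is part of util-linux; the unit name is my guess based on how systemd escapes the dash):
Code:
root@pve:~# findmnt /mnt/pve/vm-storage
root@pve:~# systemctl status 'mnt-pve-vm\x2dstorage.mount'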

So now my questions are:
If I want to change mount options for drives that are not in /etc/fstab, where would I do so?
Are the low fsyncs on the internal SSD due to the drive model, or due to the barrier=0 option not being set?

Thanks for any input!
 
Hi,
/ by adding barrier=0
This is dangerous and can corrupt your FS in the worst case.
In case you use an SSD, you should use mount options like discard and noatime.
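
For example, for an fstab-managed mount the options column could look like this (device and mount point here are only examples):
Code:
/dev/sdb1 /mnt/pve/vm-storage ext4 defaults,noatime,discard 0 2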

If you provision a storage with the PVE GUI, the mount is done by a systemd mount unit:
/etc/systemd/system/mnt-pve-<storagename>.mount
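
Such a unit could look roughly like this (an illustration only, not the exact file PVE generates; replace the What= device with your own):
Code:
# /etc/systemd/system/mnt-pve-vm\x2dstorage.mount
# (systemd escapes the "-" in "vm-storage" as \x2d in the unit file name)
[Mount]
What=/dev/disk/by-uuid/<your-uuid>
Where=/mnt/pve/vm-storage
Type=ext4
Options=defaults,barrier=0

[Install]
WantedBy=multi-user.target

After editing, run systemctl daemon-reload so systemd picks up the change.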
 
Thanks,
I added barrier=0 to the relevant file in /etc/systemd/system/ and my fsyncs on vm-storage skyrocketed tenfold (even while running all the VMs and containers):
Code:
root@pve:~# pveperf /mnt/pve/vm-storage/
CPU BOGOMIPS:      24000.00
REGEX/SECOND:      3116234
HD SIZE:           686.67 GB (/dev/sdb1)
BUFFERED READS:    405.06 MB/sec
AVERAGE SEEK TIME: 0.11 ms
FSYNCS/SECOND:     5325.62
DNS EXT:           37.06 ms
DNS INT:           2.60 ms (lan)
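
In case it helps someone else, the whole change was roughly this (the unit file name is what I'd expect from systemd's dash escaping; check yours with systemctl list-units --type=mount):
Code:
root@pve:~# nano '/etc/systemd/system/mnt-pve-vm\x2dstorage.mount'   # add barrier=0 to the Options= line
root@pve:~# systemctl daemon-reload
root@pve:~# mount -o remount,barrier=0 /mnt/pve/vm-storage           # apply immediately without unmounting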

I'm a layman when it comes to my homelab / Linux etc. All I know is that back on older Proxmox, on ext3 with the same hardware, I had good fsyncs; then the mount defaults changed in a later kernel, barriers became the default, and my fsyncs tanked. I googled around, but as with most Linux-related information I came up with obscure, overly technical explanations that left me more confused than before. I might misunderstand the problem, but to me it seemed all it can cause is file corruption on power loss, which is a non-issue for me.

Bottom line is that IT and my homelab are my hobby, not my job, and in my playground I prefer performance over enterprise-grade data protection. Worst case scenario, I restore the daily backup and lose one day's worth of Graylog and Pi-hole statistics.
 
