Slow FSYNCS on brand new install

riri1310

Active Member
Jul 14, 2017
I just bought a new server and installed Proxmox 5.2-10, but I have a lot of IO delay and don't know how to improve FSYNCS!

Could you help please?

Here is some information; tell me if you need anything else:

Server info:

Intel Xeon E3-1270v6
4c/8t - 3.8 GHz / 4.2 GHz
64 GB DDR4 ECC 2400 MHz
SoftRAID 2x2 TB SATA
500 Mbps bandwidth
vRack: 100 Mbps

Code:
root@ns3091370:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-5-pve)
pve-manager: 5.2-10 (running version: 5.2-10/6f892b40)
pve-kernel-4.15: 5.2-8
pve-kernel-4.15.18-5-pve: 4.15.18-24
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-41
libpve-guest-common-perl: 2.0-18
libpve-http-server-perl: 2.0-11
libpve-storage-perl: 5.0-30
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-3
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-20
pve-cluster: 5.0-30
pve-container: 2.0-29
pve-docs: 5.2-9
pve-firewall: 3.0-14
pve-firmware: 2.0-6
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
pve-zsync: 1.7-1
qemu-server: 5.0-38
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.11-pve2~bpo1

Code:
root@ns3091370:~# pvesm status
Name                       Type     Status           Total            Used       Available        %
containers              zfspool     active      1885339648       808695884      1076643764   42.89%
ftpbackup_ns3091370         nfs     active       524288000       305582080       218705920   58.29%
local                       dir     active      1077679744         1036032      1076643712    0.10%

Code:
root@ns3091370:~# pveperf
CPU BOGOMIPS:      60672.00
REGEX/SECOND:      3377653
HD SIZE:           1027.76 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     81.51
DNS EXT:           10.61 ms
DNS INT:           0.75 ms (ns3091370)

Code:
root@ns3091370:~# pveperf /var/lib/vz
CPU BOGOMIPS:      60672.00
REGEX/SECOND:      3047013
HD SIZE:           1027.76 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     104.71
DNS EXT:           10.97 ms
DNS INT:           0.70 ms (ns3091370)



Code:
root@ns3091370:~# zpool status -v
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 13h54m with 0 errors on Sun Dec  9 14:18:56 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
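As a side note, the `zpool status` output above suggests enabling the newer pool features. This is unrelated to the fsync issue, but if no older ZFS software ever needs to import this pool, it can be done as follows (pool name `rpool` taken from the output above):

```shell
# Preview which pools have features that could be enabled (makes no changes)
zpool upgrade

# Enable all supported features on rpool.
# Caution: older ZFS implementations may then no longer import the pool.
zpool upgrade rpool
```

See zpool-features(5), as referenced in the status message, before doing this.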

Code:
root@ns3091370:~# cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=32803788k,nr_inodes=8200947,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=6564916k,mode=755 0 0
rpool/ROOT/pve-1 / zfs rw,relatime,xattr,noacl 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
sunrpc /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=40,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=2112 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
rpool /rpool zfs rw,noatime,xattr,noacl 0 0
rpool/ROOT /rpool/ROOT zfs rw,noatime,xattr,noacl 0 0
rpool/data /rpool/data zfs rw,noatime,xattr,noacl 0 0
rpool/subvol-100-disk-0 /rpool/subvol-100-disk-0 zfs rw,noatime,xattr,posixacl 0 0
rpool/subvol-101-disk-0 /rpool/subvol-101-disk-0 zfs rw,noatime,xattr,posixacl 0 0
rpool/subvol-102-disk-0 /rpool/subvol-102-disk-0 zfs rw,noatime,xattr,posixacl 0 0
rpool/subvol-103-disk-0 /rpool/subvol-103-disk-0 zfs rw,noatime,xattr,posixacl 0 0
rpool/subvol-104-disk-0 /rpool/subvol-104-disk-0 zfs rw,noatime,xattr,posixacl 0 0
lxcfs /var/lib/lxcfs fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
ftpback-rbx6-9.ovh.net:/export/ftpbackup/ns3091370.ip-54-36-120.eu /mnt/pve/ftpbackup_ns3091370 nfs4 rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=54.36.120.91,local_lock=none,addr=10.21.131.101 0 0
 
>>I just bought a new server and installed Proxmox 5.2-10, but I have a lot of IO delay and don't know how to improve FSYNCS!

No miracle here: in general, you need faster drives or a hardware RAID controller with a cache.

Since you are using ZFS, you can add a small (enterprise-grade) SSD as a dedicated ZFS log device (SLOG).
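A sketch of what adding a SLOG looks like, assuming the SSD appears as `/dev/sdc` (the device name is hypothetical; replace it with the actual disk):

```shell
# Hypothetical device path: replace /dev/sdc1 with your actual SSD partition.
# A SLOG only needs a few GB; it absorbs synchronous writes (the ZIL),
# which is exactly what pveperf's FSYNCS/SECOND measures.
zpool add rpool log /dev/sdc1

# For redundancy, a mirrored log across two SSDs is safer:
# zpool add rpool log mirror /dev/sdc1 /dev/sdd1

# Verify the new "logs" vdev shows up in the pool layout
zpool status rpool
```

Use an SSD with power-loss protection: consumer SSDs without it can lose exactly the sync writes a SLOG is supposed to protect.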