Hello all,
I have a ZFS problem: my production server has much lower performance than my test server, and I have been trying to find the cause for the last two days. The clearest symptom is in the pveperf output below: FSYNCS/SECOND is 750.84 on the test server but only 197.00 on production, even though the production hardware is better on paper. I'll provide anything you need in the form of logs. Can you help me find the root cause?
Both servers:
- smartctl -t long /dev/sdX => reports all OK
- All partitions are aligned
- ZFS compression=on, sync=standard (data integrity is important)
- Connected via 2 NICs with bonding (LACP, layer 3+4 hashing)
- Spinning disks are all WD Red NAS (4x 4 TB), plus 1x Samsung 850 EVO Pro SSD
- ZFS is used as the root filesystem (mounted on /)
- All SATA links report 6 Gb/s, and everything is in AHCI mode
- Motherboard: the production server has an enterprise-grade Supermicro board with IPMI; the test server is just some random ASUS board
- Processor: the production server has 2 physical CPUs
- RAM: the production server has DDR4 with ECC; the test server does not
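To make comparing easier, here is a minimal sketch of the checks I can run on both boxes to verify the points above; device and interface names (sda, bond0) are assumptions, adjust them to the actual layout:
Code:
# ZFS settings actually in effect on the root dataset
zfs get compression,sync,atime rpool/ROOT/pve-1

# Negotiated SATA speed per link (should read "SATA link up 6.0 Gbps")
dmesg | grep -i 'SATA link up'

# Bond mode and per-slave state for the LACP bond (bond0 is an assumption)
cat /proc/net/bonding/bond0

# Partition table and alignment (start sectors should be multiples of 2048)
fdisk -l /dev/sda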
Server 1: (Test server)
Code:
[root@px0003:~]# pveversion -v
proxmox-ve: 5.0-15 (running kernel: 4.10.15-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.10.15-1-pve: 4.10.15-15
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-10
qemu-server: 5.0-12
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-8
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-1
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
Code:
[root@px0003:~]# pveperf
CPU BOGOMIPS: 67200.80
REGEX/SECOND: 2748754
HD SIZE: 7099.24 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 750.84
DNS EXT: 35.12 ms
DNS INT: 38.78 ms (xxxxxxxx)
Code:
[root@px0003:~]# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 8 00:24:52 2017
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
        logs
          sdc1      ONLINE       0     0     0
        cache
          sdc2      ONLINE       0     0     0
Server 2: (Production server)
Code:
[root@px0001:~]# pveversion -v
proxmox-ve: 4.4-96 (running kernel: 4.4.83-1-pve)
pve-manager: 4.4-18 (running version: 4.4-18/ef2610e8)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-53
qemu-server: 4.0-113
pve-firmware: 1.1-11
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.0-5~pve4
pve-container: 1.0-101
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
Code:
[root@px0001:~]# pveperf
CPU BOGOMIPS: 153619.20
REGEX/SECOND: 2396879
HD SIZE: 7099.24 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 197.00
DNS EXT: 7.99 ms
DNS INT: 9.22 ms (xxxxxxxx)
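The biggest difference between the two machines is FSYNCS/SECOND: 750.84 on the test server vs 197.00 here, while REGEX/SECOND and the rest are in the same ballpark. To measure sync writes directly and independently of pveperf, here is a minimal sketch with fio (assuming fio is installed, e.g. via apt-get install fio, and that /root lives on rpool/ROOT/pve-1 so every fsync goes through the SLOG partition):
Code:
fio --name=synctest --directory=/root --size=256M \
    --ioengine=sync --rw=write --bs=4k --fsync=1 \
    --runtime=30 --time_based
Running this on both servers should show whether the low fsync rate comes from the SLOG SSD itself or from something above it in the stack.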
Code:
[root@px0001:~]# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 1h29m with 0 errors on Tue Oct 24 12:50:40 2017
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          sde1      ONLINE       0     0     0
        cache
          sde2      ONLINE       0     0     0

errors: No known data errors
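Since both pools have the same layout, one more thing worth posting is a diff of all pool and dataset properties between the two servers, in case something drifted between the installs. A minimal sketch (file names are placeholders):
Code:
# Run on each server, then copy the files into one directory on one machine:
zpool get all rpool > /tmp/zpool-props-$(hostname -s).txt
zfs get -rHp -o name,property,value all rpool > /tmp/zfs-props-$(hostname -s).txt

# Then compare:
diff zpool-props-px0003.txt zpool-props-px0001.txt
diff zfs-props-px0003.txt zfs-props-px0001.txt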