Hi,
I have two machines (in different locations) with exactly the same hardware and the same Proxmox version (4). The first machine was deployed two years before the second one, and the first server handles more network traffic than the second.
It is strange that smartctl reports the SSD used for the ZIL and L2ARC in the second, later-deployed server as completely worn out, while the one in the first server is intact. However, 'zpool status' shows no errors, with both log and cache online, which makes me suspect that smartctl is reporting a false positive. Also, 'systemctl status' shows the datapool as 'degraded', but 'zpool status' reports no issues (ONLINE)!
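For reference, the wearout figure I'm referring to comes from the SMART attributes; a check along these lines (the device name /dev/sda is just an example) is what shows the wear indicator on each server:
Code:
# dump all SMART data from the SSD (device name is an example)
smartctl -a /dev/sda
# filter for wear-related attributes; attribute names vary by SSD vendor
smartctl -A /dev/sda | grep -iE 'wear|percent|media'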
Details below:
FIRSTOLDSERVER = 2 HDDs in a single rpool + SSD for ZIL and L2ARC
SSDRAPIDWEAROUT = 4 HDDs in a striped rpool and a striped datapool + SSD for ZIL and L2ARC
The outputs are below:
Code:
2018-06-19 09:11:46 root@FIRSTOLDSERVER:[~]:$ lsblk -dt /dev/sd?
NAME ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
sda 0 512 0 512 512 0 deadline 128 128 0B
sdb 0 512 0 512 512 1 deadline 128 128 0B
sdc 0 512 0 512 512 1 deadline 128 128 0B
2018-06-19 09:11:55 root@SSDRAPIDWEAROUT:[~]:$ lsblk -dt /dev/sd?
NAME ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
sda 0 512 0 512 512 0 deadline 128 128 0B
sdb 0 4096 0 4096 512 1 noop 128 128 0B
sdc 0 512 0 512 512 1 deadline 128 128 0B
sdd 0 4096 0 4096 512 1 noop 128 128 0B
sde 0 512 0 512 512 1 deadline 128 128 0B
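Note that on the second server the HDDs sdb and sdd report 4096-byte physical sectors while the SSD reports 512; if anyone wants to cross-check, the same values can be read straight from sysfs (device name is an example):
Code:
# physical vs. logical sector size as reported by the kernel
cat /sys/block/sda/queue/physical_block_size
cat /sys/block/sda/queue/logical_block_size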
Code:
2018-06-19 09:13:23 root@FIRSTOLDSERVER:[~]:$ zdb | grep ashift
ashift: 12
ashift: 9
2018-06-19 09:13:37 root@SSDRAPIDWEAROUT:[~]:$ zdb | grep ashift
ashift: 12
ashift: 12
ashift: 12
ashift: 9
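The ashift: 9 entries presumably belong to the SSD vdevs, since the SSD reports 512-byte sectors. zdb can confirm which vdev each ashift value belongs to, and ashift can be forced when adding log/cache devices (the pool name and device path below are just examples):
Code:
# show the cached pool configuration with per-vdev details, ashift included
zdb -C rpool
# force a 4K ashift when adding a log device, e.g.:
zpool add -o ashift=12 rpool log /dev/disk/by-id/EXAMPLE-SSD-part1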
Code:
2018-06-19 09:23:04 root@FIRSTOLDSERVER:[~]:$ zpool iostat
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
rpool 231G 1.59T 21 22 574K 140K
2018-06-19 10:25:04 root@SSDRAPIDWEAROUT:[~]:$ zpool iostat
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
datapool 116G 3,51T 1 7 37,9K 25,2K
rpool 77,5G 1,74T 2 44 25,3K 120K
---------- ----- ----- ----- ----- ----- -----
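The pool-wide numbers above don't break out the SSD; per-vdev statistics would show how much traffic actually hits the log and cache devices, e.g.:
Code:
# per-vdev I/O statistics, refreshed every 5 seconds (Ctrl-C to stop)
zpool iostat -v 5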