Hello, I switched from a full Supermicro server board to three Odroid H3+ units (Pentium Silver N6005).
I was also experiencing a crazy number of crashes. I updated the kernel through
5.15.74-1-pve
5.19.7-2-pve
6.0.12-edge
and updated the microcode, but was still experiencing crashes.
What solved my problem was disabling the...
The SSD makes all the difference, but better underlying drives make a difference too, man.
Buy a 240GB DATACENTER SSD; the datacenter part is important.
Then (let's assume the SSD is /dev/sdc) discard the whole drive first:
blkdiscard /dev/sdc
then create the partitions:
parted /dev/sdc mklabel gpt
parted /dev/sdc...
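Put together, the whole sequence might look like this; still assuming the SSD is /dev/sdc, and the partition name and 16 GiB size are arbitrary examples for a SLOG, not values from the post:

```shell
# Assumption: the datacenter SSD is /dev/sdc and holds no data you need.
blkdiscard /dev/sdc                # TRIM the entire drive
parted -s /dev/sdc mklabel gpt     # write a fresh GPT label
# A SLOG rarely needs more than a few GiB; leaving the rest of the drive
# unprovisioned gives the controller spare area for wear leveling.
parted -s -a optimal /dev/sdc mkpart slog 1MiB 16GiB
```

parted also accepts a regular file in place of a block device, which is a safe way to rehearse the commands before pointing them at real hardware.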
Hello,
I was moving Proxmox to a new array from a ZFS-backed setup. I am now running LVM-Thin on top of an Adaptec 7805 card with ZMM/BBU.
While restoring our old Zimbra installation I was hit by this bug. The previous installation was Proxmox VE 5.0, though I am not sure of the exact version...
Hello @chalan
I have indeed solved the problem.
1. Update ZFS to 0.7.3; this alone helped a lot.
2. Don't use WD Red drives with ZFS and Supermicro; they don't work very well together. Use WD Gold drives if you want performance.
3. If you really want performance, buy a HW RAID card with BBU and...
Hello, the server has 64GB of RAM, but I've dedicated 16GB to the ARC only.
Also, the SSD is this one:
https://www.alza.cz/samsung-ssd850-pro-512gb-d2143376.htm?o=2
Can you recommend an SSD suited for a SLOG?
I'm using the SSD only as a log device, no cache. I've dedicated 16GB of RAM to the ARC on each server by editing /etc/modprobe.d/zfs.conf. Also, the server was not under any load; I had just installed it and found out that work that takes my notebook about 15 seconds takes about 30 minutes on the Supermicro server...
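For reference, a 16 GiB ARC cap set that way would look something like the sketch below; the path is the standard OpenZFS modprobe location, and zfs_arc_max takes a value in bytes:

```shell
# Sketch: cap the ZFS ARC at 16 GiB. CONF defaults to the standard
# OpenZFS location; point it at a scratch file to preview the line first.
CONF=${CONF:-/etc/modprobe.d/zfs.conf}
echo "options zfs zfs_arc_max=$((16 * 1024 * 1024 * 1024))" > "$CONF"
cat "$CONF"
# With ZFS on root (the Proxmox default), refresh the initramfs afterwards
# so the cap is applied at boot:
#   update-initramfs -u
```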
Hello, the thing is I'm trying to determine why the clearly better server shows such a gulf in performance. I'm trying to figure out whether I forgot something, some setting which should be enabled/disabled on the Supermicro compared to the test server. Disks 5x slower for no apparent reason is weird at best...
Everything looks the same to me.
Prod:
[root@px0001:~]# dd if=/dev/zero of=/dev/null bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 5.71882 s, 18.3 GB/s
[root@px0001:~]# zpool get all
NAME PROPERTY VALUE...
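Worth noting: `dd if=/dev/zero of=/dev/null` never touches the disks, so identical numbers there only show that memory bandwidth matches. A sketch of a write test that actually exercises the pool; the target path is an assumption, so point it at a directory on the dataset under test:

```shell
# TARGET is an assumption; use a directory on the ZFS dataset being tested.
TARGET=${TARGET:-/tmp}
# conv=fdatasync makes dd flush to stable storage before reporting a rate,
# so the result reflects the disks rather than the ARC / page cache.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fdatasync
rm -f "$TARGET/ddtest.bin"
```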
Hello all,
I have a ZFS problem: my production server has far lower performance than my testing server, and I have been trying to find the cause for the last 2 days. I'll provide anything you need in the form of logs. Can you help me find the root cause?
Both servers:
smartctl -t long /dev/sdX => reports all...
Hello, I have
pve-manager/4.0-50/d3a6b7e5 (running kernel: 4.2.2-1-pve)
running a MySQL instance, and I've begun tuning the ZFS datasets underneath it to get this result:
NAME RECSIZE LOGBIAS PRIMARYCACHE
rpool...
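The properties in those columns (recordsize, logbias, primarycache) are set per dataset; a minimal sketch with an assumed dataset name, using the values commonly suggested for InnoDB on ZFS rather than anything taken from the truncated output above:

```shell
# Dataset name is an assumption; substitute the dataset holding the MySQL data.
zfs set recordsize=16k rpool/data/mysql        # match InnoDB's 16 KiB page size
zfs set logbias=throughput rpool/data/mysql    # bypass the SLOG for bulk writes
zfs set primarycache=metadata rpool/data/mysql # leave data caching to InnoDB
zfs get recordsize,logbias,primarycache rpool/data/mysql
```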