I took vm.nr_hugepages=72 from https://github.com/extremeshok/xshok-proxmox; I don't know why, to be honest.
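If you're unsure whether hugepages are even being consumed, a quick check (plain Linux, nothing Proxmox-specific) is:

grep Huge /proc/meminfo
sysctl vm.nr_hugepages

If HugePages_Total is non-zero but HugePages_Free equals it, nothing is using them and the setting can most likely be dropped.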
I don't know where the error is; that's what I am asking. It posts all those messages instantly, nothing before for several minutes, and then it just shuts down/reboots.
Specs:
X570D4U
Ryzen...
I seem to have an issue with one of my machines (https://pastebin.com/zK4KZL9r): it keeps crashing. The memory is near full, and it's only swapping a few GB.
The following is being used
vm.swappiness=20
# Set to use 10GB Min
options zfs zfs_arc_min=10737418240
# Set to use 20GB Max
options zfs zfs_arc_max=21474836480
##...
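For reference, a minimal sketch of where these settings usually live on a Proxmox (Debian) install and how to apply them without a reboot; the sysctl filename is a hypothetical choice, the rest is the stock layout:

# /etc/sysctl.d/99-swappiness.conf (hypothetical filename)
vm.swappiness=20

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=10737418240
options zfs zfs_arc_max=21474836480

# apply the sysctl immediately:
sysctl -p /etc/sysctl.d/99-swappiness.conf

# the ARC limits can also be changed at runtime:
echo 10737418240 > /sys/module/zfs/parameters/zfs_arc_min
echo 21474836480 > /sys/module/zfs/parameters/zfs_arc_max

# if root is on ZFS, refresh the initramfs so the module options persist:
update-initramfs -u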
I tried to find a way to do this in the Proxmox GUI too; I don't know why they have not added it.
I use a script to do this: https://github.com/Jehops/zap
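For anyone who would rather not pull in a script, the core of what a snapshot-rotation tool like that automates is just plain zfs snapshot/destroy calls; a minimal sketch (the pool/dataset and snapshot names are hypothetical):

# create a recursive, timestamped snapshot of a dataset
zfs snapshot -r rpool/data@auto-$(date +%Y%m%d-%H%M)

# list the snapshots, oldest first
zfs list -t snapshot -o name -s creation rpool/data

# destroy one that has expired
zfs destroy rpool/data@auto-20240101-0000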
I figured out what was wrong, but I am not sure exactly what fixed it. I presume it's one of these changes, IOMMU:
https://gyazo.com/d0e8c1b60a82550fa237ad6464f15ee6
https://gyazo.com/ba5a87521990efa6aaeda96645e567a2
https://gyazo.com/ab52675e3f8f59bc99dd5a1a831cff9c
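If it was IOMMU, the equivalent on the kernel side is usually set via the boot command line; a minimal sketch for an AMD board booting via GRUB (the exact BIOS options in the screenshots may differ):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# regenerate the boot config:
update-grub

# verify after reboot:
dmesg | grep -i -e iommu -e amd-vi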
Tried a different disk, CPU, and all RAM sticks; same issue. I have three servers, all the exact same setup/hardware. I don't think increasing the RAM will do anything (I tested that too, same issue). I am only transferring some templates after a reboot; currently no production.
arc_summary...
It will boot up as either enp133s0f or enp129s0f; what would cause it to keep changing?
4: enp133s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:10:18:c3:a1:80 brd ff:ff:ff:ff:ff:ff
5: enp133s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop...
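Predictable names like enp133s0f0 are derived from the PCI bus address, so if the firmware renumbers the PCI bus between boots, the interface name follows it. One way to pin a stable name is a systemd .link file matched on the MAC address; a minimal sketch (the filename and the name net0 are hypothetical choices, the MAC is taken from the output above):

# /etc/systemd/network/10-net0.link
[Match]
MACAddress=00:10:18:c3:a1:80

[Link]
Name=net0

Remember to update any /etc/network/interfaces stanzas to the new name, and run update-initramfs -u so the rename applies early in boot.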
One of my servers is taking a very long time to migrate data. As you can see from this screenshot, it does eventually complete.
syslog shows the following; is it a sign of a bad disk? nvme2n1p1 and nvme3n1p1 are in the ZFS pool.
May 13 16:45:16 HOME1 zed: eid=90 class=delay pool='zfs' vdev=nvme3n1p1...
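zed delay events mean individual I/Os took unusually long, which can indicate a failing disk but also a saturated or misbehaving controller; a few standard checks (device and pool names taken from the log above, everything else stock tooling):

# pool-level health and error counters
zpool status -v zfs

# the full delay events zed has recorded
zpool events -v

# per-vdev latency while the migration runs
zpool iostat -v zfs 5

# NVMe SMART data for the two suspects (needs smartmontools / nvme-cli)
smartctl -a /dev/nvme3n1
nvme smart-log /dev/nvme3n1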