Just updated and it now seems to work fine for me.
proxmox-ve: 6.1-2 (running kernel: 5.3.13-2-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-2
pve-kernel-helper: 6.1-2
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1...
You're right; looking at it, it's kernels 4.14 and 4.19.
I can't confirm this NFS bit is the fix, as we haven't moved the containers back, and there are also the above-mentioned microcode updates in the BIOS.
So at the end of the rabbit hole I think I found this.
https://about.gitlab.com/blog/2018/11/14/how-we-spent-two-weeks-hunting-an-nfs-bug/
And I checked the output of mount -v | grep, and sure enough Node1 had a V4 NFS mount while Node2 had it mounted as V3.
I've changed both to use V3 but have not...
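For anyone hitting the same thing, this is roughly what I used to check and pin the version (the storage ID "backup" is just a placeholder, not necessarily what yours is called):

# See which NFS version each node actually negotiated
mount -v | grep nfs

# Pin the PVE NFS storage to v3 so both nodes mount it the same way
pvesm set backup --options vers=3

# or equivalently add "options vers=3" to the storage's entry in /etc/pve/storage.cfg,
# then unmount/remount (or reboot) so the new option takes effect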
So a little bit of hardware info.
DL380 Gen9
The crashed system was:
System ROM: P89 v2.60 (05/21/2018)
System ROM Date: 05/21/2018
CPU: E5-2620 v3, with microcode loaded 2019-03-01
Tho we have been running fine for 2 days on an older firmware node:
System ROM: P89 v2.52 (10/25/2017)
System...
We mount our backup location via NFS, so we get quite a spike in traffic.
I didn't boot on an older kernel as the jump between the two kernel versions was huge.
Got ours running on a node with older HP firmware and so far it seems ok.
Strange, we had a similar issue with our node, gonna reboot tonight. The console and web management are unresponsive, but SSH works, just very slowly.
Ours are HP Gen9 with quite old HP firmware.
Containers seem to work tho...
A nice simple option is to just publish RDP on two different ports and have each port forward to a different machine, or change the port RDP is listening on on one machine (in case NAT loopback isn't working on your router, tho it sounds like it is from the above).
###WARNING###
RDP is not really considered...
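If you go the change-the-listening-port route on one of the machines, it's just a registry value plus a firewall rule on that box; 3390 below is an arbitrary example port:

rem Run as admin, then reboot (or restart the Remote Desktop service) for it to take effect
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3390 /f

rem Allow the new port through the Windows firewall
netsh advfirewall firewall add rule name="RDP 3390" dir=in action=allow protocol=TCP localport=3390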
As a side note, I see your machine name is TVHeadEnd. I had huge stutter issues with my PCIe FreeSat card when passing it through to a VM (it may have been congested PCIe lanes), but it seems rock solid in a container with access to /dev/dvb.
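If it helps, the container side is only a couple of lines in the container's config (/etc/pve/lxc/<vmid>.conf); this is the cgroup v1 style syntax and a rough sketch rather than a copy of my exact config:

# allow the DVB character devices (major 212) and bind mount /dev/dvb into the container
lxc.cgroup.devices.allow: c 212:* rwm
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir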
Would node 1 (still on 5.4 but corosync 3) still be part of the HA cluster, so I can quickly migrate to node2 and node3 (and also test that the newer kernel works with my containers)?
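For the migration part I just mean something along the lines of this per container (101 is an example ID):

# restart-mode migration of a container off node1
pct migrate 101 node2 --restart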
I'm following the PVE 5 to 6 upgrade guide, but I wanted to check: after I have stopped all the HA services, performed the corosync upgrade, and then upgraded all the other packages on the nodes, will my containers all still be running fine on the node I left them on?
Or will I need to reboot each of the...
I take it you've done a clean boot of the host since the last try? The card might need to be reset gracefully.
Further than that I'm out of options, having not done this on my box for years now.