Hi,
Just fixed this with commit 0b760eefc3ae8787e4d83235ca20e2ae886cf166.
In TrueNAS-Scale the slashes (/) needed to be converted to dashes (-), which it was doing, but only for the first slash (/). Adding the 'g' parameter to the regex substitution fixed this.
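For illustration only (a shell sketch of the behaviour, not the actual plugin code; the zvol path is a made-up example):
echo 'tank/vm/disk-0' | sed 's/\//-/'     # no 'g': first slash only -> tank-vm/disk-0
echo 'tank/vm/disk-0' | sed 's/\//-/g'    # with 'g': every slash    -> tank-vm-disk-0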
You may need to 'edit' your extents...
I found that the pvestatd service was not running after a 6.4-13 to 7.0-11 upgrade. I just restarted pvestatd and the grey question marks turned back into green icons.
Not sure why this happened but it seems fine now.
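In case anyone else hits this, roughly what I ran (PVE 7 manages pvestatd with systemd):
systemctl status pvestatd     # showed the service was not running
systemctl restart pvestatd    # grey question marks went green again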
Ok. Thanks for that. I will give Version 3 a try. There is a lot of hard work and info in that article, but I was not sure if it was fixed in any of the kernels Proxmox is using.
Thanks for posting it.
This machine does have the CVE microcode fixes...
[ 22.919602] ------------[ cut here ]------------
[ 22.919604] General protection fault in user access. Non-canonical address?
[ 22.919611] WARNING: CPU: 13 PID: 2990 at arch/x86/mm/extable.c:126 ex_handler_uaccess+0x52/0x60
[ 22.919612]...
This is the dmesg output after the server rebooted with another tainted kworker issue.
[ 17.464801] ------------[ cut here ]------------
[ 17.464802] General protection fault in user access. Non-canonical address?
[ 17.464812] WARNING: CPU: 16 PID: 1582 at arch/x86/mm/extable.c:126...
I just realized that the CPUs in these machines do NOT have the new microcode applied to them (for the CVEs). I will attempt to do that update in the near future, in case this could be causing the issue.
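A sketch of the update I have in mind (assuming Intel CPUs and the Debian non-free repository enabled; AMD boxes would use 'amd64-microcode' instead):
apt update
apt install intel-microcode
reboot
dmesg | grep microcode        # confirm the new revision loaded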
Since upgrading from 6.0 to 6.1 I have had two different host machines generate the following similar error. It always had kworker as the offending PID.
The first system became unresponsive and unrecoverable over time. I tried to reboot the host safely but had to power-reset the physical host.
This one is...
Just posting another 'Oops' dump. I am hoping it is the same issue.
[1406350.728555] BUG: kernel NULL pointer dereference, address: 0000000000000014
[1406350.728598] #PF: supervisor read access in kernel mode
[1406350.728624] #PF: error_code(0x0000) - not-present page
[1406350.728638] PGD 0...
Thomas,
In your post you used...
apt update
apt full-upgrade
I always thought we had to do...
apt update
apt dist-upgrade
When should we use the 'full-upgrade'?
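For context, the two sequences side by side (my understanding from the apt(8) man page, so take it with a grain of salt, is that these do the same thing and apt accepts both spellings):
apt update && apt full-upgrade    # apt's documented verb
apt update && apt dist-upgrade    # apt-get's older verb, still accepted by apt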
Thx,
-Waz
I have a few systems where noVNC seems to behave differently between them.
If I open a noVNC console to a non-running VM, I of course get a "Failed to connect to server", but if I use the noVNC commands from the GUI to start it, it will sit and wait for a connection and present me with the logo and...
Thomas,
Thank you for the info. Thankfully it was my lab machine (home) and not production/customer machines... always do it in the lab ;). I will mark this post because I have gone through many upgrades from previous versions that might have caused the issue. I think the only thing I install...
Reinstalled from the 5.3-5 ISO and recovered my ZFS zpools. Restored the configs and all was fine in the world again. Then, just because, I did the upgrade again to 5.3-11 with that kernel 4.15.18-12 that had just jacked me, and it rebooted with no issues.
Go figure. Hopefully it was just a fluke...
Just did an upgrade to 4.15.18-12-pve, which looked like ZFS changes only, and now my system boots into a read-only state just after 'cgmanager' starts, and that's it. Just a '(none):~#' prompt.
I am running ZFS on the server. Scientific Progress Goes Boink.
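For anyone in the same spot, a generic recovery sketch from that prompt (assumes the default ZFS root layout; not a guaranteed fix):
mount -o remount,rw /    # make the root dataset writable again
zfs mount -a             # try to mount the remaining datasets
# If that gets you nowhere, reboot and pick the previous kernel from the GRUB menu.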
-Waz
Hi,
They are correct when they say it is a multicast issue. Please reference the link below and follow the instructions for your network switch equipment.
https://pve.proxmox.com/wiki/Multicast_notes
My switch equipment is a Netgear M4300 stack and even though the defaults say it is enabled...
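Before touching the switch config, the wiki's omping test is a quick way to confirm whether multicast is actually flowing between the nodes (the node names below are placeholders; run it on all nodes at once):
omping -c 600 -i 1 -q node1 node2 node3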
Good day to all.
I found a repo on GitHub by Andrew Beam (github.com/beam/freenas-proxmox) that seems to be a nice fit for the FreeNAS solution. It does NOT use the standard SSH/SCP architecture that the other interfaces use. Instead, it uses the FreeNAS APIs to do all the work; it just...
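Purely illustrative of the kind of call the plugin makes (the host and credentials are placeholders, and I am assuming the FreeNAS v1.0 REST endpoint for iSCSI extents here):
curl -s -k -u root:password \
  -H 'Content-Type: application/json' \
  https://freenas.local/api/v1.0/services/iscsi/extent/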