Proxmox VE 8.3 released!

You mean with the kernel module? FWIW, we have the X540-AT2 in some of our test hardware, and it works fine there. E.g., I have a long-term three-node test cluster where each node has such a NIC, configured as the Ceph cluster full-mesh network. Two nodes use the 6.8 kernel and one node uses the 6.11 kernel; the NIC works fine on both kernels.

Maybe open a new thread and describe the rest of the hardware and the problems you're seeing.
Honestly, I have no idea how you managed to get that card working with Proxmox, because I've been at it for a week without any success. I was hoping the new version of Proxmox would finally fix the issue, but still no luck. I already opened a thread about this not long ago (https://forum.proxmox.com/threads/u...rk-card-on-proxmox-ve-8-2.157176/#post-720139), and the only suggestion I received was to run the version 7 kernel on Proxmox 8. It works, sure, but... it's far from ideal.

If needed, I can provide a dedicated server ready to go with IPMI/KVM access, to mount the latest Proxmox ISO for debugging purposes, so the issue can be seen firsthand.
 
Honestly, I have no idea how you managed to get that card working with Proxmox, because I've been at it for a week without any success.
Intel, like other vendors, sells many hardware revisions under the same model name, so this might be a revision-specific issue, or related to the rest of the hardware. I'll answer in your thread.
 
This sounds like it could be an unintended effect of [1], which adjusts node.session.initial_login_retry_max to decrease an overly high default login timeout. Can you please open a new thread and provide:
  • The content of /etc/pve/storage.cfg
  • The output of the following commands:
    Code:
    iscsiadm -m node
    iscsiadm -m session
    ls -al /dev/disk/by-path/*iscsi*
  • The output of:
    Code:
    journalctl -b -u open-iscsi -u iscsid -u pvestatd
  • How do you adjust node.session.initial_login_retry_max? By changing /etc/iscsi/iscsi.conf, followed by a new discovery, re-logins, rescans, and reboots (sketched right after this list).
  • Before upgrading to PVE 8.3, did you have any custom config set in /etc/iscsi/iscsid.conf? NOPE
  • Can you clarify what you mean by "automatic login"? Do you mean the login to iSCSI storages performed by PVE on boot? EXACTLY
    Similarly, what do you mean by "manual login"? Do you mean manually running iscsiadm -m node --login? EXACTLY
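For reference, that cycle sketched as commands; <portal-ip> and <target-iqn> are placeholders, not values from this thread:
Code:
# change the default in /etc/iscsi/iscsi.conf (iscsid.conf on some installs), e.g.:
#   node.session.initial_login_retry_max = 8
# re-run discovery so known node records pick up the new default:
iscsiadm -m discovery -t sendtargets -p <portal-ip>
# or update a single existing node record in place:
iscsiadm -m node -T <target-iqn> -p <portal-ip> -o update -n node.session.initial_login_retry_max -v 8
# then re-login and rescan:
iscsiadm -m node -T <target-iqn> -p <portal-ip> --logout
iscsiadm -m node -T <target-iqn> -p <portal-ip> --login
iscsiadm -m session --rescan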
Please mention me (@fweber) in the new thread.

[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=e16c816f97a865a709bbd2d8c01e7b1551b0cc97
Hi @fweber,

You are likely correct in your assumption. When pvestatd activates the storage, it gets blocked during that time, which is indeed undesirable. In our case, the issue was caused by an active portal that didn’t have any allowed targets for the clients.

We have now removed that portal on all hosts, reset the node.session.initial_login_retry_max value in /etc/iscsi/iscsi.conf back to 8, and rebooted the system.
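For anyone following along, the restored default amounts to a single line (on some installs the file is /etc/iscsi/iscsid.conf rather than iscsi.conf), plus a quick post-reboot check:
Code:
# in /etc/iscsi/iscsi.conf (or /etc/iscsi/iscsid.conf):
node.session.initial_login_retry_max = 8

# after the reboot, confirm all targets logged in automatically:
iscsiadm -m session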

On our test system, all targets were automatically connected during boot, and everything seems to be working as expected now.

Thanks for pointing us in the right direction!
 
It seems like the upgrade broke OIDC login. The GUI shows `OpenID redirect failed. Connection error - server offline?`, and the browser console returns a 596 for `https://<instance>/api2/extjs/access/openid/auth-url`. No logs on the OIDC provider side, and provider reachability looks fine from the terminal. There are no (journal) logs on the Proxmox side about the login attempt whatsoever.
 
I don't know if it was mentioned somewhere, but OVA import is not working from CephFS.
Tried on different clusters.
CephFS does not support the image content type, so you need to select a different working storage in the import UI, where the OVA can be temporarily extracted.
 
CephFS does not support the image content type, so you need to select a different working storage in the import UI, where the OVA can be temporarily extracted.
Yes, this seems obvious, but the problem is that it is possible to add CephFS as an import storage ...
 
@t.lamprecht - great release guys!!! Looking forward to playing with the SDN enhancements.

Was a little dismayed, though: still no MACVTAP support :(

Hope we can get this in soon; it has been requested on and off for a few years now, and I even filed a friendly feature request: https://bugzilla.proxmox.com/show_bug.cgi?id=5795

Anyhow, all the best!

PS: will there ever be any iSCSI boot support built into the GUI for guests? At the moment I do a lot of scripting to achieve something that would be so streamlined if built in. PPS: I don't mean the storage pool side, I mean the NIC FlexBoot iSCSI side; I use SR-IOV to allocate virtual NICs, and it would be nice to have iSCSI boot support integrated for that.
 
Yes, this seems obvious, but the problem is that it is possible to add CephFS as an import storage ...
Yes, you can indeed import from CephFS, but not to CephFS: for the intermediate extraction we already create temporary disks, and thus require the images content type for the working-directory storage, which is not available for CephFS. So far we have avoided exposing that content type for CephFS so that users don't mistakenly place their disk images there when Ceph RBD would be the much better option.
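If anyone hits this, a plain directory storage with the images content type works as the working storage for the extraction. A minimal sketch of what such an entry in /etc/pve/storage.cfg could look like; the storage name and path below are made up for illustration:
Code:
dir: ova-scratch
        path /var/lib/vz/ova-scratch
        content images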
 
It seems like the upgrade broke OIDC login. The GUI shows `OpenID redirect failed. Connection error - server offline?`, and the browser console returns a 596 for `https://<instance>/api2/extjs/access/openid/auth-url`. No logs on the OIDC provider side, and provider reachability looks fine from the terminal. There are no (journal) logs on the Proxmox side about the login attempt whatsoever.
Could you open a new thread and provide the following information in there?
* which OIDC provider do you use?
* did you customize any settings for the OIDC realm?
* can you access the .well-known/openid-configuration file using the issuer URL? (a quick check is sketched below)
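One way to test that last point from the PVE node's shell; the issuer URL here is only an example, substitute your own:
Code:
# should return the provider's JSON metadata, including authorization_endpoint and token_endpoint
curl -sS https://idp.example.com/realms/pve/.well-known/openid-configuration
# -v shows DNS/TLS details if the GUI only reports a generic connection error
curl -v -o /dev/null https://idp.example.com/realms/pve/.well-known/openid-configuration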

Please also add @mira in the new post so that I get the notification.
 
I've had several hard reboots or lockups a day on my main server since updating from 8.2 to 8.3. Sorry, not a lot to go on; it's headless.

Are there any logs I can look into after the fact? Should I try pinning a previous kernel version to see if that helps?
 
I've had several hard reboots or lockups a day on my main server since updating from 8.2 to 8.3. Sorry, not a lot to go on; it's headless.

Are there any logs I can look into after the fact? Should I try pinning a previous kernel version to see if that helps?
Do you see any kernel panics/errors in the journal? journalctl --since '2024-11-2x'
Change the `x` to the day you first started seeing those issues.
You could also check /var/lib/systemd/pstore/ for any files that could have information about the crashes. Sometimes these contain the last bits of the dmesg from a previous crash.
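Roughly, assuming the problems started around November 25th (adjust the date; the priority filter is just one way to narrow things down):
Code:
# kernel messages with error priority or worse since the issues began
journalctl -k -p err --since '2024-11-25'
# crash remnants preserved across reboots, if the platform supports pstore
ls -al /var/lib/systemd/pstore/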

After gathering this information, please open a new thread and provide it there. Feel free to @ me so that I get notified.
 
I've had several hard reboots or lockups a day on my main server since updating from 8.2 to 8.3. Sorry, not a lot to go on; it's headless.

Are there any logs I can look into after the fact? Should I try pinning a previous kernel version to see if that helps?
Try upgrading the kernel to 6.11; for me that was the solution.
 
I ran the update and looks like it took me from 8.2 to 8.3.

Everything seems fine. Nothing skipped a beat. Do I have to restart? It didn't say so.
I would have expected you to see a new kernel with that upgrade. Are you sure it didn't say what my 8.2.4 to 8.3.0 upgrade said? I received the following output at the end of the upgrade (as I have in the past any time a kernel was upgraded):

Seems you installed a kernel update - Please consider rebooting
this node to activate the new kernel.
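One way to double-check whether a newer kernel is installed but not yet active (standard tooling, nothing specific to this upgrade):
Code:
# the kernel you are currently running
uname -r
# kernel packages known to PVE, plus the "running kernel" line
pveversion -v | grep -i kernel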

In any case, I'm glad to hear you are having a good time. I am now also a (mostly) former vSphere admin, and I am really enjoying the Proxmox Virtual Environment and Backup Server products and community.
 
Proxmox VE 8.3 working fine.

Got this today:
Code:
[423050.856015] pvedaemon worke[93327]: segfault at 6ffc6b4e6000 ip 00006ffc7889c680 sp 00007ffcb9727cc8 error 4 in libc.so.6[6ffc7875f000+155000] likely on CPU 11 (core 11, socket 0)
[423050.856484] Code: 75 98 48 81 ea 80 00 00 00 0f 86 eb 00 00 00 48 81 c7 a0 00 00 00 48 01 fa 48 83 e7 80 48 29 fa 62 b1 fd 28 6f c0 0f 1f 40 00 <62> f3 7d 20 3f 0f 00 c5 fd 74 57 20 c5 fd 74 5f 40 c5 fd 74 67 60

Seems to continue working ok even after that.

Code:
Nov 27 15:23:52 pve pvedaemon[93327]: <root@pam> starting task UPID:pve:0001731F:0285835D:67471D68:vncproxy:204:root@pam:
Nov 27 15:23:55 pve pvedaemon[93327]: <root@pam> end task UPID:pve:0001731F:0285835D:67471D68:vncproxy:204:root@pam: OK
Nov 27 15:23:55 pve kernel: pvedaemon worke[93327]: segfault at 6ffc6b4e6000 ip 00006ffc7889c680 sp 00007ffcb9727cc8 error 4 in libc.so.6[6ffc7875f000+155000] likely on CPU 11 (core 11, socket 0)
Nov 27 15:23:55 pve kernel: Code: 75 98 48 81 ea 80 00 00 00 0f 86 eb 00 00 00 48 81 c7 a0 00 00 00 48 01 fa 48 83 e7 80 48 29 fa 62 b1 fd 28 6f c0 0f 1f 40 00 <62> f3 7d 20 3f 0f 00 c5 fd 74 57 20 c5 fd 74 5f 40 c5 fd 74 67 60
Nov 27 15:23:55 pve pvedaemon[4943]: worker 93327 finished
 
Proxmox VE 8.3 working fine.

Got this today:
Code:
[423050.856015] pvedaemon worke[93327]: segfault at 6ffc6b4e6000 ip 00006ffc7889c680 sp 00007ffcb9727cc8 error 4 in libc.so.6[6ffc7875f000+155000] likely on CPU 11 (core 11, socket 0)
[423050.856484] Code: 75 98 48 81 ea 80 00 00 00 0f 86 eb 00 00 00 48 81 c7 a0 00 00 00 48 01 fa 48 83 e7 80 48 29 fa 62 b1 fd 28 6f c0 0f 1f 40 00 <62> f3 7d 20 3f 0f 00 c5 fd 74 57 20 c5 fd 74 5f 40 c5 fd 74 67 60

Seems to continue working ok even after that.

Code:
Nov 27 15:23:52 pve pvedaemon[93327]: <root@pam> starting task UPID:pve:0001731F:0285835D:67471D68:vncproxy:204:root@pam:
Nov 27 15:23:55 pve pvedaemon[93327]: <root@pam> end task UPID:pve:0001731F:0285835D:67471D68:vncproxy:204:root@pam: OK
Nov 27 15:23:55 pve kernel: pvedaemon worke[93327]: segfault at 6ffc6b4e6000 ip 00006ffc7889c680 sp 00007ffcb9727cc8 error 4 in libc.so.6[6ffc7875f000+155000] likely on CPU 11 (core 11, socket 0)
Nov 27 15:23:55 pve kernel: Code: 75 98 48 81 ea 80 00 00 00 0f 86 eb 00 00 00 48 81 c7 a0 00 00 00 48 01 fa 48 83 e7 80 48 29 fa 62 b1 fd 28 6f c0 0f 1f 40 00 <62> f3 7d 20 3f 0f 00 c5 fd 74 57 20 c5 fd 74 5f 40 c5 fd 74 67 60
Nov 27 15:23:55 pve pvedaemon[4943]: worker 93327 finished
Please run debsums -s.
If it doesn't report any changed files, then you may want to run a memtest on your host.

Did you upgrade the kernel recently? Does it also happen with the previous kernel?
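A rough sketch of those two checks (debsums may need to be installed first; memtest86+ is just one option for the memory test, and how it appears in the boot menu depends on your bootloader):
Code:
apt install debsums
# silent mode: only lists files whose checksums differ from what the packages shipped
debsums -s
# for the memory test, install memtest86+ and select it from the boot menu on the next reboot
apt install memtest86+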
 
