Proxmox VE 8.3 released!

You mean with the kernel module? FWIW, we have the X540-AT2 in some of our test hardware, and it works fine there. E.g., I have a long-term three-node test cluster where each node has such a NIC, configured as the Ceph cluster full-mesh network. Two nodes use the 6.8 kernel and one node uses the 6.11 kernel; the NIC works fine with both kernels.

Maybe open a new thread and describe the rest of the hardware and the problems you're seeing.
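When you do, it may also help to include which driver and firmware the NIC is currently running; a quick way to gather that (the interface name enp1s0f0 is only an example, substitute your own):
Code:
ethtool -i enp1s0f0
lspci -nnk | grep -iA3 x540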
Honestly, I have no idea how you managed to get that card working with Proxmox, because I've been at it for a week without any success. I was hoping the new version of Proxmox would finally fix the issue, but still no luck. I already opened a thread about this not long ago (https://forum.proxmox.com/threads/u...rk-card-on-proxmox-ve-8-2.157176/#post-720139), and the only suggestion I received was to run the version 7 kernel on Proxmox 8. It works, sure, but... it's far from ideal.

If needed, I can provide a dedicated server ready to go with IPMI/KVM access, to mount the latest Proxmox ISO for debugging purposes, so the issue can be seen firsthand.
 
Honestly, I have no idea how you managed to get that card working with Proxmox, because I've been at it for a week without any success.
Intel, like other vendors, sells many hardware revisions under the same model name, so this might be a revision-specific issue, or related to the rest of the hardware. I'll answer in your thread.
 
This sounds like it could be an unintended effect of [1], which adjusts node.session.initial_login_retry_max to decrease an overly high default login timeout. Can you please open a new thread and provide:
  • The content of /etc/pve/storage.cfg
  • The output of the following commands:
    Code:
    iscsiadm -m node
    iscsiadm -m session
    ls -al /dev/disk/by-path/*iscsi*
  • The output of:
    Code:
    journalctl -b -u open-iscsi -u iscsid -u pvestatd
  • How do you adjust node.session.initial_login_retry_max? By changing /etc/iscsi/iscsi.conf, followed by a new discovery, re-logins, rescans, and reboots (see also the note after this list).
  • Before upgrading to PVE 8.3, did you have any custom config set in /etc/iscsi/iscsid.conf? NOPE
  • Can you clarify what you mean by "automatic login"? Do you mean the login to iSCSI storages performed by PVE on boot? EXACTLY
    Similarly, what do you mean by "manual login"? Do you mean manually running iscsiadm -m node --login? EXACTLY
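As a side note, the value actually in effect for already-discovered node records (stored under /etc/iscsi/nodes/ on a default install) can be checked with something along these lines:
Code:
iscsiadm -m node -o show | grep initial_login_retry_max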
Please mention me (@fweber) in the new thread.

[1] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=e16c816f97a865a709bbd2d8c01e7b1551b0cc97
Hi @fweber,

You are likely correct in your assumption. When pvestatd activates the storage, it gets blocked during that time, which is indeed undesirable. In our case, the issue was caused by an active portal that didn’t have any allowed targets for the clients.

We have now removed that portal on all hosts, reset the node.session.initial_login_retry_max value in /etc/iscsi/iscsi.conf back to 8, and rebooted the system.
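For anyone hitting the same issue, roughly what that reset looks like (a sketch only; <target-iqn> and <portal> are placeholders, and already-discovered node records keep their old value until they are updated or re-created):
Code:
# default for newly discovered node records (/etc/iscsi/iscsi.conf in our setup)
node.session.initial_login_retry_max = 8

# update an existing node record in place, without a new discovery
iscsiadm -m node -T <target-iqn> -p <portal> -o update -n node.session.initial_login_retry_max -v 8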

On our test system, all targets were automatically connected during boot, and everything seems to be working as expected now.

Thanks for pointing us in the right direction!
 
It seems like the upgrade broke OIDC login. GUI: `OpenID redirect failed. Connection error - server offline?`; the browser console shows a 596 for `https://<instance>/api2/extjs/access/openid/auth-url`. No logs on the OIDC provider side, and the provider is reachable from the terminal. No (journal) logs on the Proxmox side about the login attempt whatsoever.
 
Don't know if it was mentioned somewhere, but OVA import is not working from CephFS.
Tried it on different clusters.
CephFS does not support the 'images' content type, so you need to select a different working storage in the import UI, one to which the OVA can be temporarily extracted.
 
@t.lamprecht - great release guys!!! Looking forward to playing with the SDN enhancements.

Was a little dismayed though, still no MACVTAP support :(

Hope we can get this in soon; it's been requested on and off for a few years now, and I even filed a friendly request: https://bugzilla.proxmox.com/show_bug.cgi?id=5795

Anyhow, all the best!

PS: will there ever be iSCSI boot support built into the GUI for guests? At the moment I do a lot of scripting to achieve something that would be so streamlined if it were built in. PPS: I don't mean the storage pool side, I mean the NIC FlexBoot iSCSI side; I use SR-IOV to allocate virtual NICs, and it would be nice to have iSCSI support integrated for that.
 
Yes, this seems obvious, but the problem is that it is possible to add CephFS as import storage ...
Yes, you can indeed import from CephFS, but not import to CephFS. For the intermediate extraction we already create temporary disks and thus require the 'images' content type for the working-directory storage, which is not available for CephFS. So far we have avoided exposing that content type for CephFS, to keep users from mistakenly placing their disk images there when Ceph RBD would be the much better option.
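If it helps to pick a suitable working storage: listing the storages that provide the 'images' content type should be possible with something like the following (a sketch, see pvesm(1) for details):
Code:
pvesm status --content images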
 
It seems like the upgrade broke OIDC login. GUI: `OpenID redirect failed. Connection error - server offline?`; the browser console shows a 596 for `https://<instance>/api2/extjs/access/openid/auth-url`. No logs on the OIDC provider side, and the provider is reachable from the terminal. No (journal) logs on the Proxmox side about the login attempt whatsoever.
Could you open a new thread and provide the following information in there?
* which OIDC provider do you use?
* did you customize any settings for the OIDC realm?
* can you access the .well-known/openid-configuration file using the issuer URL?
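For the last point, a quick check from the PVE node could look like this (a sketch; <issuer-url> stands for the issuer URL configured for the realm):
Code:
curl -sS https://<issuer-url>/.well-known/openid-configuration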

Please also add @mira in the new post so that I get the notification.
 
