Opt-in Linux 6.1 Kernel for Proxmox VE 7.x available

Hi, I'm completely new to Proxmox. I wrote the current Proxmox VE 7.3 ISO to a USB stick; so far, so good. Unfortunately, after selecting "Install Proxmox VE" I also get the error "UBSAN: array-index-out-of-bounds in drivers/ata/libahci.c:968:41: index 15 is out of range for type 'ahci_em_priv [8]'".

So I can't get it installed at all. Now I have read that a new 6.x kernel is described here. Can this also fix the problem during the initial installation? Unfortunately, I can't find a complete ISO file with it to flash onto a USB stick.

Where can I download Proxmox VE 7.3 with a 6.x kernel (i.e., with a working kernel)?
 
"INstall Proxmox VE"
As in after selecting the grub boot menu entry from the installer, i.e., before the installation setup wizard?

error "UBSAN: array-index-out-of-bounds in drivers/ata/libahci.c:968:41index 15 is out of range for type ahci_em_priv [8].
Seems like this got indeed fixed in 6.1 with the following commit:
https://git.kernel.org/pub/scm/linu.../?id=1e41e693f458eef2d5728207dbd327cd3b16580a
FWIW, that commit also got backported to a newer 5.15 kernel, but only in January, so after our 7.3 release.

Where can I download Proxmox VE 7.3 with a 6.x kernel (i.e., with a working kernel)?
There isn't any available at the moment; you could try to set up Proxmox VE on top of Debian and then update to our newest 5.15 kernel, or possibly even 6.1.
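Roughly, that route would look like the sketch below, assuming a plain Debian Bullseye installation and the no-subscription repository (double-check the install-on-Debian wiki for the exact repository line and extra packages for your setup):

Code:
# add the Proxmox VE no-subscription repository and its key (Debian Bullseye)
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
apt update && apt full-upgrade

# install Proxmox VE on top of Debian, then opt in to the 6.1 kernel
apt install proxmox-ve postfix open-iscsi
apt install pve-kernel-6.1
reboot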
 
I upgraded my 2 PVE nodes to 6.1.10 and noticed that my Docker v23 LXC with ZFS finally works with the overlay2 storage driver. No more fuse-overlayfs. Really nice.

I wonder if someone could point me to what changed at the kernel level to finally allow support for this. Thanks in advance.
 
I upgraded my 2 PVE nodes to 6.1.10 and noticed that my Docker v23 LXC with ZFS finally works with the overlay2 storage driver. No more fuse-overlayfs. Really nice.

I wonder if someone could point me to what changed at the kernel level to finally allow support for this. Thanks in advance.
Hmm, that's odd. The kernel didn't have to change; ZFS lacked support for the renameat2 syscall/flags, and while that got merged a few months ago, it wasn't backported to the 2.1 series, so it isn't yet included in any release.

Skimming the OverlayFS git log since 5.19, no change that would drop or replace the requirement for that syscall stood out to me - are you sure you're using that driver?
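A quick way to double-check from inside the container, as a minimal sketch using plain docker info formatting:

Code:
# prints only the active storage driver, e.g. overlay2, vfs or fuse-overlayfs
docker info --format '{{.Driver}}'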
 
are you sure you're using that driver?

Before 6.1, with the ZFS-backed LXC, the default storage driver would be the terrible vfs, so the only way out was changing Docker's config to use fuse-overlayfs. I installed 6.1, uninstalled fuse-overlayfs in the LXC, rebooted the PVE node, and then, to my surprise, inside the LXC I see this from Docker:

Code:
❯ uname -a
Linux docker2 6.1.10-1-pve #1 SMP PREEMPT_DYNAMIC PVE 6.1.10-1 (2023-02-07T00:00Z) x86_64 GNU/Linux
❯ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.6.1
    Path:     /root/.docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 29
  Running: 29
  Paused: 0
  Stopped: 0
 Images: 29
 Server Version: 23.0.1
 Storage Driver: overlay2
  Backing Filesystem: zfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2456e983eb9e37e47538f59ea18f2043c9a73640
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.10-1-pve
 Operating System: Debian GNU/Linux 11 (bullseye)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 8GiB
 Name: docker2
 ID: 1528994b-2a9b-49f8-8321-e1109edf5160
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: alexdelprete
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
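For what it's worth, the storage driver can also be pinned explicitly instead of relying on auto-detection; a minimal sketch of the container's /etc/docker/daemon.json (restart Docker afterwards):

Code:
# /etc/docker/daemon.json (inside the LXC), followed by: systemctl restart docker
{
  "storage-driver": "overlay2"
}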
 
Not a bug, but kernel 6.1 has removed the netfilter conntrack auto-helper sysctl:
https://lore.kernel.org/netdev/20220921073825.4658-2-fw@strlen.de/T/#u

It's needed for some protocols like FTP.

Explicit rules should now be added, like:

Code:
 modprobe nf_nat_ftp
 iptables -A FORWARD -m conntrack --ctstate RELATED -m helper --helper ftp -p tcp --dport 1024: -j ACCEPT
 iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp

(I haven't tested them yet.)

I think that pve-firewall should add support for them.

more info:
https://home.regit.org/netfilter-en/secure-use-of-helpers/
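For context, the toggle that was dropped is the old automatic-helper sysctl; on earlier kernels it looked like the sketch below (per the patch linked above it no longer exists on 6.1, so the explicit CT rules above are the way to go):

Code:
# pre-6.1 kernels only: automatic conntrack helper assignment (removed in 6.1)
sysctl net.netfilter.nf_conntrack_helper=1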
 
Hi devs,

I hope Proxmox can backport the Intel Arc driver to the next edge kernel, if not go straight to a full-fledged Linux 6.2. That piece of hardware will be a godsend for many of us running AMD platforms without integrated graphics who need the encoder for transcoding in LXC containers.
 
After running `apt install pve-kernel-6.1` I receive the message:

Code:
dkms: running auto installation service for kernel 6.1.10-1-pve:
Error! Your kernel headers for kernel 6.1.10-1-pve cannot be found.
Please install the linux-headers-6.1.10-1-pve package.

I tried to install it with apt install but it says:

Code:
E: Unable to locate package linux-headers-6.1.10-1-pve
E: Couldn't find any package by glob 'linux-headers-6.1.10-1-pve'

Please advise
 
Code:
E: Unable to locate package linux-headers-6.1.10-1-pve
E: Couldn't find any package by glob 'linux-headers-6.1.10-1-pve'
Install pve-headers-6.1 to get the (latest) pve-kernel 6.1.x headers (now and in the future).
Install pve-headers-6.1.10-1-pve for that specific version of pve-kernel-6.1.10-1-pve.
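In other words, roughly (a sketch):

Code:
apt update
# meta package that follows the 6.1 series
apt install pve-headers-6.1
# or, pinned to the exact running kernel:
apt install pve-headers-$(uname -r)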
 
Install pve-headers-6.1 to get the (latest) pve-kernel 6.1.x headers (now and in the future).
Install pve-headers-6.1.10-1-pve for that specific version of pve-kernel-6.1.10-1-pve.
Thanks, that was it. The only reason I need that is for NVIDIA and DKMS. Do I actually need the NVIDIA drivers installed on the host, or can I just install them in the guest and use GPU passthrough? I'm thinking I can remove the NVIDIA driver from the Proxmox host and still use the NVIDIA driver from within the Ubuntu container.
 
Thanks, that was it. The only reason I need that is for NVIDIA and DKMS. Do I actually need the NVIDIA drivers installed on the host, or can I just install them in the guest and use GPU passthrough? I'm thinking I can remove the NVIDIA driver from the Proxmox host and still use the NVIDIA driver from within the Ubuntu container.

You don't pass a GPU through to containers; you pass it through to a VM. For LXC containers, you need the drivers on the host and in the guest LXC container. The container is then given access to the device via cgroups v2.
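For reference, a minimal sketch of what that could look like in the container config (/etc/pve/lxc/<vmid>.conf), assuming an NVIDIA card; the device list and major numbers here are examples and can differ per system, and the container needs the same driver version as the host:

Code:
# allow the NVIDIA character devices (195 is the usual major for nvidia0/nvidiactl;
# nvidia-uvm gets a dynamically assigned major - check with: ls -l /dev/nvidia*)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
# bind-mount the device nodes into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file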
 
Before 6.1, with the ZFS-backed LXC, the default storage driver would be the terrible vfs [...] to my surprise, docker info inside the LXC now reports Storage Driver: overlay2 with Backing Filesystem: zfs (full output above).
How did you get this working? A privileged LXC?
 
I use a test server which I suspend regularly (systemctl suspend), but with this kernel the server immediately resets after waking up. Back on 5.15 there are no issues.
 
After upgrading to 6.1, PM does not work.
View attachment 47450
Is there any bug open for this?
What is PM, and from which kernel (or other package versions) did you upgrade, and to which one?

From the limited information in the screenshot, it seems the base system itself boots fine.
I'd recommend opening a new thread and providing more information - also, can you log in on the console and check the syslog (journalctl -b) for telling errors, and see which services failed (systemctl list-units --failed)?
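For example (just a sketch):

Code:
# errors from the current boot, and the list of failed units
journalctl -b -p err
systemctl list-units --failed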
 
