We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.
We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in...
Hi, it seems like your host's vmbr0 only has a link-local address (fe80::…) and no global IPv6. That suggests the host bridge isn't getting or relaying a global IPv6 address, so containers won't have an upstream path.
Can you check whether vmbr0 has a proper global IPv6...
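For reference, here is a minimal Python sketch of the check I mean. It assumes a Linux host and uses the bridge name vmbr0 from your post (adjust if yours differs); it parses /proc/net/if_inet6 and flags whether the interface has any global-scope address in addition to the link-local one:

```python
#!/usr/bin/env python3
"""Report IPv6 addresses on an interface and flag whether any is global-scope.

Sketch only: assumes a Linux host with /proc/net/if_inet6 available.
"""
import ipaddress

IFACE = "vmbr0"  # bridge name from the thread; adjust as needed


def ipv6_addrs(iface):
    """Yield (address, scope) tuples for one interface from /proc/net/if_inet6."""
    with open("/proc/net/if_inet6") as f:
        for line in f:
            addr_hex, _ifindex, _plen, scope, _flags, name = line.split()
            if name != iface:
                continue
            # Re-insert colons so ipaddress can parse the packed 32-hex-digit form.
            addr = ipaddress.IPv6Address(
                ":".join(addr_hex[i:i + 4] for i in range(0, 32, 4)))
            yield addr, int(scope, 16)


addrs = list(ipv6_addrs(IFACE))
for addr, scope in addrs:
    # Scope 0x00 is global, 0x20 is link-local (kernel's if_inet6 encoding).
    kind = "global" if scope == 0x00 else "link-local" if scope == 0x20 else f"scope {scope:#x}"
    print(f"{IFACE}: {addr} ({kind})")

if not any(scope == 0x00 for _, scope in addrs):
    print(f"No global IPv6 on {IFACE}: containers behind it have no v6 upstream.")
```

`ip -6 addr show dev vmbr0` shows the same information; the point is simply that the bridge needs a global address (or a working RA/route upstream) before the containers behind it can get one.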
If you are running a production system, I suggest waiting for the official packages. Otherwise, if you have a test system, sure, testing is always appreciated!
We are pleased to announce the first stable release of Proxmox Mail Gateway 9.0 - immediately available for download!
Twenty years after its first release, the new version of our email security solution is based on Debian 13.1 "Trixie", but...
We are pleased to announce the first beta release of Proxmox Mail Gateway 9.0! The new version of our email security solution is based on Debian 13.1 "Trixie", but defaulting to the newer Linux kernel 6.14.11-2. The latest versions of ZFS 2.3.4...
Hi @bravo0916
Sorry, I gave you an incomplete answer. PVE is based on Debian with an Ubuntu kernel. There isn't a Debian HCL, but there is some compatibility information available from Ubuntu.
Others have asked similar questions...
Proxmox does not publish hardware compatibility lists (HCLs).
Under the hood, Proxmox products are Debian. If the NIC will work with Debian, it will work with Proxmox products.
We have found Intel network interface cards to be very reliable...
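If you want a quick way to see that on an existing Debian or Proxmox box, here's a small Python sketch (assuming the usual Linux sysfs layout; the interface names and drivers it prints are whatever your hardware reports) that lists each physical NIC together with the kernel driver bound to it:

```python
#!/usr/bin/env python3
"""List network interfaces and the kernel driver each is bound to (Linux sysfs)."""
import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    driver_link = os.path.join(SYS_NET, iface, "device", "driver")
    if not os.path.exists(driver_link):
        # Virtual interfaces (bridges, bonds, lo, ...) have no backing device.
        continue
    driver = os.path.basename(os.readlink(driver_link))
    print(f"{iface}: driver={driver}")
```

If the driver shown there is an in-tree module under Debian (e.g. the Intel e1000e/igb/ixgbe family), the same module should also be present in the Proxmox kernel, so the card should behave the same.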
After months of hard work and collaboration with our community, we are thrilled to release the beta version of Proxmox Datacenter Manager. This version is based on the great Debian 13 "Trixie" and comes with kernel 6.14.11 as the stable default and...
Hi y’all,
I’ve released a Proxmox hardening guide (PVE 8 / PBS 3) that extends the CIS Debian 12 benchmark with Proxmox-specific tasks.
Repo: https://github.com/HomeSecExplorer/Proxmox-Hardening-Guide
A few controls are not yet validated and are...
Mostly, IIRC, but I haven't revisited the patch series in a while, so I'd have to re-check to be sure. I think it would be good to fix the issues before we apply the patch series.
We're currently working on overhauling the whole stack with the help...
Hi @whiney, welcome to the forum.
At this point, you are using the primary supported option.
Today, your only option to have all of the above is to use a third-party cluster-aware filesystem such as OCFS2. I'd recommend researching its...
Yes, we aren't talking about the same thing. It's not ONLY the filesystem; it's also the backend storage (where there is no filesystem), and that's what I'm talking about. Unless you have a SHARED write cache, as good enterprise SAN storage solutions...
That's an interesting take. For someone who derides others for being fanboys, that statement shows an astounding lack of self-awareness.
Ceph is a scale-out filesystem with multiple API ingress points. ZFS is a traditional filesystem and not...
The entire chassis is not redundant, but the backplane is likely electrically redundant; if you use SAS drives, you can reach the disk through 'two channels'. Backplanes are not passive components; they have controllers, depending on the model...
I can tell you about my two-node Solaris ZFS setup that died 15 years ago, with 3 days of downtime because the ZIL crashed on 2 disks at the same time, if you want ;)
Never had any downtime with Ceph since 2015.
What about rollback to the 2 previous snapshots...