Proxmox's Linux kernel (6.14) is based on Ubuntu's rather than Debian's, and since drivers ship with the kernel, you could try booting an Ubuntu live installer (without actually installing it) that carries the same kernel version (Ubuntu 25.04).
EDIT: The user-space is indeed based on Debian...
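If you try that, here is a rough sketch of what I'd check from the live session (the grep patterns are just placeholders, adjust to your NIC):

    # confirm the live kernel really matches the Proxmox one
    uname -r

    # see which driver binds to the NIC and whether the link comes up
    lspci -nnk | grep -iA3 ethernet
    ip -br link

    # look for driver/firmware complaints
    dmesg | grep -iE 'eth|firmware|link'

If the NIC misbehaves the same way there, it is a kernel/driver issue rather than anything Proxmox-specific.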
You already found the answer; moving the goalposts isn't helping you. I'd advise letting go of your "wants" - the newer kernel is probably providing you with no utility at all. Given that the issues with your NIC are known and...
U = FOS
Lots of us here are in tech-support-related positions, so forum support starts to feel like "more work" after a while.
A) Watch Proxmox-related YouTube videos
B) Read the last 30 days of forum posts, here and on Reddit (free education)...
The forum is community-driven, so it is a highlight that staff members are even present and answer questions patiently.
What do you expect (seriously, literally)?
To be (much) clearer: I was referring to 3 hosts with multiple OSDs on each and at least one OSD left running, not 3 hosts with only 1 OSD each.
For the former, Ceph will use any other OSD on the same host (technically any unused host, but there...
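You can check where the replicas of a given PG actually land; the pool name and PG id below are only examples, substitute your own:

    # list pools and the replica count of one of them
    ceph osd lspools
    ceph osd pool get vmpool size

    # pick a PG of that pool and see which OSDs (and therefore hosts) hold it
    ceph pg map 2.1f

    # compare against the OSD-to-host layout
    ceph osd tree

With failure domain "host", the OSDs reported by pg map should always sit on different hosts.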
Does it?
With the failure domain being "host" this does not make sense...? I am definitely NOT a Ceph expert, but now I am interested in the actual behavior:
I have a small, virtual test cluster with Ceph. For the following tests, three nodes...
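Roughly what I did to check, as a sketch (rule name and OSD id are from my test cluster):

    # the CRUSH rule shows the failure domain ("type": "host" in the chooseleaf step)
    ceph osd crush rule dump replicated_rule

    # take one OSD out and watch where the data gets re-replicated to
    ceph osd out 3
    ceph -s
    ceph pg dump pgs_brief | head

    # bring it back in afterwards
    ceph osd in 3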
No. If you lose three disks on three separate nodes AT THE SAME TIME, the pool will become read-only and you'll lose all payload that had placement groups with shards on ALL THREE of those OSDs.
BUT here's the thing: the odds of that happening...
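If you ever do end up there, the damage is visible per PG rather than per pool; roughly (OSD id and PG id are examples):

    # overall state, unfound objects and incomplete PGs
    ceph health detail
    ceph pg dump_stuck inactive

    # PGs that have a shard on a particular OSD
    ceph pg ls-by-osd osd.3

    # details for one suspect PG
    ceph pg 2.1f query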
You will only lose the affected PGs and their objects. This will lead to corrupted files (when the data pool is affected) or a corrupted filesystem (if the metadata pool is affected). Depending on which directory is corrupted you may not be able...
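For CephFS there is at least a way to find out which files had data in a lost data-pool PG; a sketch, assuming 2.1f is one of the lost PGs (the path is a path inside the CephFS namespace, not a local mount point, if I read the docs right):

    # list files that had objects in the given PG(s)
    cephfs-data-scan pg_files / 2.1f

Metadata-pool damage is harder and usually means going through the CephFS disaster-recovery procedure in the docs rather than just removing files.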
You may be able to extract the cluster map from the OSDs following this procedure: https://docs.ceph.com/en/squid/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
But as you also changed the IP addresses, you will have to change...
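Roughly, following the linked doc (paths, monitor name and IP below are assumptions, adjust to your cluster):

    # rebuild the mon store from the OSDs (run per OSD, as in the linked procedure)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --no-mon-config --op update-mon-db --mon-store-path /tmp/mon-store

    # then fix the monitor address in the monmap
    ceph-mon -i pve1 --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    monmaptool --rm pve1 /tmp/monmap
    monmaptool --add pve1 192.168.10.11:6789 /tmp/monmap
    ceph-mon -i pve1 --inject-monmap /tmp/monmap

I have never had to do this on a production cluster, so treat it as a pointer to the docs rather than a recipe.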
IMHO you do not need pool separation between VMs for security reasons. You may want to configure multiple pools for quotas, for multiple Proxmox clusters, or if you want to set different permissions for users in Proxmox.
AFAIK Proxmox does not show...
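If you do go the multiple-pool route, a rough sketch of what that looks like (pool, storage and user names are made up, and I'm assuming the storage ID created by --add_storages matches the pool name):

    # create a pool and have Proxmox add it as an RBD storage
    pveceph pool create tenant-a --size 3 --min_size 2 --add_storages 1

    # cap the pool at 1 TiB
    ceph osd pool set-quota tenant-a max_bytes 1099511627776

    # only let one set of users allocate disks on that storage
    pveum acl modify /storage/tenant-a --users alice@pve --roles PVEDatastoreUser

The permission part is plain Proxmox ACLs on the storage entry, not something enforced by Ceph itself.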