Wow! Thank you for all those answers!
I did not expect to trigger such a thread haha :D
Running `zpool events -v` right after a failure shows multiple `ereport.fs.zfs.dio_verify_wr` events. I believe I have the same issue as...
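For anyone hitting the same events: a sketch of how to inspect and work around them, assuming OpenZFS 2.3+ (where the `direct` dataset property and the Direct IO verify counters exist). The pool/dataset names `tank` and `tank/vmdata` are placeholders, not from the thread.

```shell
# Show Direct IO write checksum verify error counters per vdev
# (the -d column was added alongside Direct IO support in OpenZFS 2.3).
zpool status -d tank

# Inspect the individual dio_verify events in detail.
zpool events -v | grep -B2 -A10 dio_verify

# Workaround sketch: route Direct IO requests back through the ARC
# for the affected dataset, so O_DIRECT writes are buffered again.
zfs set direct=disabled tank/vmdata
```

These events indicate a buffer was modified while an O_DIRECT write was in flight, so disabling `direct` on the dataset trades some performance for safety until the writer is fixed.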
Thank you!! Yes, indeed it was the setting in the system profile! I changed it to Performance Per Watt (OS), and now it is much smoother.
By the way, I did run the benchmark (geekbench) tool that you recommended. The new system scores 2X points...
There appears to be work going on to address this:
https://bugzilla.proxmox.com/show_bug.cgi?id=7289
https://lore.kernel.org/qemu-devel/20260105143416.737482-1-f.ebner@proxmox.com/T/
The file/location is not checked when the option is set; the config may not even exist yet at VM-creation time, which lets you generate it dynamically on VM start. You will be notified at the time of VM start or...
There are of course plenty of valid approaches. In my homelab I use "Zamba": https://github.com/bashclub/zamba-lxc-toolbox. That gives you a mature, AD-compatible file server which, for the Windows users, for example...
I’ve got a site where I dive into this stuff—might be worth checking out if you're interested.
https://www.romcinrad.com.ar/guia-definitiva-passthrough-de-igpu-amd-780m-phoenix-en-proxmox/
This is what threw me off.
I had the feeling the `qm set --cicustom` was silently failing because I saw no feedback from the UI.
Also, that same command doesn't show an error message if the config path is wrong.
Yes, that's the culprit.
I've been there once. The lesson I learned was to only modify the structure of a cluster when all nodes are online :-)
The workaround is to make corosync.conf editable again. Since that node has no quorum, you need to mount the...
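For reference, a common sketch of that workaround on a node that has lost quorum. Use with care: lowering the expected votes is only safe while the other nodes really are offline, otherwise you risk a split-brain.

```shell
# Tell corosync to expect only this one vote, so the node becomes
# quorate again and /etc/pve (pmxcfs) turns read-write.
pvecm expected 1

# Now corosync.conf can be edited; bump config_version when you change it.
nano /etc/pve/corosync.conf
```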
I recently attempted to upgrade my PBS to kernel 6.17.9-1-pve and started having all kinds of issues. I assumed it was my controller card going bad, but I'm now thinking it's an incompatibility issue with the kernel. I have a Supermicro Broadcom...
Update: For the recent installs I used Ventoy. Somehow that seems to have messed with the procedure, even though there were no errors. I'm still not sure what exactly went wrong, but with 'dedicated' USB keys the installation worked again.
I didn’t think that opting for the hyper-converged implementation of Ceph in PVE would require giving up basic functionality of Ceph (the dashboard and the SMB mgr module).
And it’s not clear if this is intentional or not, which is why I asked...
So I create a new ZFS dataset, bind-mount it into an LXC running Debian 13, and run the Samba server inside that container.
The permissions then still have to be aligned accordingly between the Proxmox VE host and the LXC.
On the network side...
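The dataset + bind-mount part of that setup can be sketched roughly as follows; the dataset name, container ID, and mount paths are placeholders, and the ownership step assumes an unprivileged container with the default ID mapping:

```shell
# Create the dataset on the PVE host.
zfs create rpool/data/share

# Bind-mount it into container 101 (the Debian 13 LXC running Samba).
pct set 101 -mp0 /rpool/data/share,mp=/srv/share

# Align permissions: with the default unprivileged mapping, container
# UID/GID 1000 appears on the host as 101000 (shifted by +100000).
chown -R 101000:101000 /rpool/data/share
```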
Hello all,
We had a 3-node Proxmox and Ceph HCI cluster that worked fine. After a hardware error on node 2 that we could not really pinpoint, we shut that node down, took it home, and repaired it. It was powered on a few times without network to...