Wow! Thank you for all those answers!
I did not expect to trigger such a thread haha :D
Running `zpool events -v` right after a failure shows multiple `ereport.fs.zfs.dio_verify_wr` events. I believe I have the same issue as...
I have a small homelab composed of three (old-ish) ThinkCentres.
One of them runs home services as well as the home NAS, so I plugged my UPS USB connection in there, and NUT is working fine.
My Proxmox setup consists of two Proxmox 9.1.5 nodes in a cluster...
I didn’t think that opting for the hyper-converged implementation of Ceph in PVE would require giving up basic functionality of Ceph (the dashboard and the SMB mgr module).
And it’s not clear if this is intentional or not, which is why I asked...
Hello, I would like to describe an issue with the crc32c algorithm. I will be clear: I am not sure if this is a Proxmox kernel issue or an upstream one for the distro (Debian 13?).
The case is about kernel 6.17.9-1-pve. When a BTRFS filesystem is mounted...
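In case it helps narrow this down, a quick check (a generic Linux sketch, nothing Proxmox-specific, and only meaningful on kernels that still expose crc32c through the crypto API) is to ask the kernel which crc32c implementations it has registered; the priorities show whether an accelerated driver or the generic fallback would win:

```shell
# List any crc32c entries in the kernel crypto layer; a higher
# "priority" value wins (e.g. an accelerated driver over crc32c-generic).
# The pipeline exits 0 even when no entry matches.
grep -B1 -A6 'crc32c' /proc/crypto | head -n 40
```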
I already checked all the network connectivity and could not find any issues with that.
I also already have two corosync rings on different network cards that don't seem to have an issue in general.
My guess is that the biggest issue was...
Your `cpupower frequency-info` output on the R740 is the key here:
`no or unknown cpufreq driver is active on this CPU`
Your working R730 has `intel_cpufreq` with the performance governor. The R740 has no frequency scaling driver at all. So the Xeon...
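For anyone else hitting this, you can confirm the same thing without `cpupower` by reading sysfs directly (standard Linux paths, nothing Dell-specific); the sketch below assumes cpu0 is representative:

```shell
# The scaling_driver node only exists while a cpufreq driver is active;
# on the affected R740 this should fall through to the message below.
DRV=/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
if [ -r "$DRV" ]; then
    cat "$DRV"          # e.g. intel_pstate or intel_cpufreq
else
    echo "no cpufreq driver active"
fi
```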
I don't understand why an issue from 2020 is still not even acknowledged as an issue. Am I missing something here?
The issue is that there is still a bug where users are not associated with their top-level AD groups if they are part of...
The short answer is yes. The longer answer is that you need to take into consideration which Ceph daemons are running on the node and account for them in the interim.
Moving all but the OSDs is trivial: just create new ones on other nodes and delete the...
You're looking for the `ceph crash` commands.
To display a list of messages:
`ceph crash ls`
If you want to read a message:
`ceph crash info <id>`
then:
`ceph crash archive <id>`
or:
`ceph crash archive-all`
I am having a similar issue as well, I have created the following bug report for it - https://bugzilla.proxmox.com/show_bug.cgi?id=7289
But basically I'm noticing that if the VM is off (or a template) you are unable to clone/move the storage; however, if...
Thank you!! Yes, indeed it was the setting in the system profile! I changed it to Performance Per Watt (OS), and now it is much smoother.
By the way, I did run the benchmark (geekbench) tool that you recommended. The new system scores 2X points...
There appears to be work going on to address this:
https://bugzilla.proxmox.com/show_bug.cgi?id=7289
https://lore.kernel.org/qemu-devel/20260105143416.737482-1-f.ebner@proxmox.com/T/
The file/location is not checked at the time of setting the option. It is possible that the config does not exist at the time of VM creation. One could dynamically generate the config on VM start. You will be notified at the time of VM start or...
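To illustrate the "generate the config on VM start" idea, here is a minimal sketch; the VMID, the file name, and the local `./snippets` directory are made up for illustration (on a PVE host the `local` storage usually maps snippets to `/var/lib/vz/snippets`):

```shell
# Write a cloud-init user-data snippet before the VM starts.
# "snippets" here is a local example directory, not the real PVE path.
mkdir -p snippets
cat > snippets/user-data-100.yaml <<'EOF'
#cloud-config
hostname: vm100
EOF
# Then point the VM at it (run this part on the PVE host itself):
# qm set 100 --cicustom "user=local:snippets/user-data-100.yaml"
```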
There are of course numerous valid approaches. I use (in my homelab) "Zamba": https://github.com/bashclub/zamba-lxc-toolbox. With it you get a mature AD-compatible file server which, for the Windows users, for example...
I’ve got a site where I dive into this stuff—might be worth checking out if you're interested.
https://www.romcinrad.com.ar/guia-definitiva-passthrough-de-igpu-amd-780m-phoenix-en-proxmox/
This is what threw me off.
I had the feeling that `qm set --cicustom` was silently failing, because I saw no feedback in the UI.
Also, that same command doesn't show an error message if the config path is wrong.
Yes, that's the culprit.
I've been there once. The lesson I learned was to only modify the structure of a cluster when all nodes are online :-)
The workaround is to make corosync.conf editable. As that node has no quorum, you need to mount the...
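As a hedged aside (not necessarily the exact step meant above): on a node that has lost quorum, one commonly documented way to make /etc/pve writable again is to lower the expected vote count. Only do this on the node you intend to keep, and only if you understand the split-brain risk:

```shell
# Tell corosync to treat a single vote as quorate so /etc/pve
# (and thus corosync.conf) becomes writable again.
# The guard just avoids an error on non-PVE machines.
if command -v pvecm >/dev/null 2>&1; then
    pvecm expected 1
else
    echo "pvecm not found: run this on the affected PVE node"
fi
```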