I believe this is correct, as it has been reported since 6.14 that crc32c is exposed via a library...:
https://bbs.archlinux.org/viewtopic.php?pid=2236193#p2236193
I changed the two settings you mentioned, but sadly it's still the same. Any other things to try?
The settings for the Linux VM are as follows, and with these settings the Linux VM boots and the GPU is accessible. It would be nice if we could do...
Hi all,
I'm trying to set up a new node with a T5820 and have had no issues up until this point. The node has 2x 256GB drives in RAID1 for boot (on a PCIe card: 1 NVMe over PCIe and 1 connected by a SATA cable), 2x 500GB SATA drives for a VM pool...
Actually my system also shows:
[root@pve ~]# grep CRC_OPTIMIZATIONS /boot/config-$(uname -r)
CONFIG_CRC_OPTIMIZATIONS=y
[root@pve ~]#
But BTRFS is still not loading crc32c-intel, which is not available with 6.17 on Proxmox (and was also not...
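For what it's worth, you can check which crc32c implementation the crypto API actually registered (just a quick sketch; driver names vary by CPU and kernel):

# The "driver" line shows whether a generic or an accelerated
# implementation is registered for crc32c
grep -A2 'name.*crc32c' /proc/crypto

If the 6.14 library change mentioned earlier in the thread applies, the accelerated code lives in the CRC library itself, so a separate crc32c-intel module may never show up as loaded even when acceleration is active.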
Hey, thanks a lot. I'll try it.
About the switch on the same UPS, yes. In this case, I used to have a switch between them, but then I discovered that the only 2.5G connections that could benefit from it are those nodes, so I made a direct...
The fix should be straightforward: just enabling CONFIG_CRC_OPTIMIZATIONS=y in the kernel config. This affects not just BTRFS but also ext4, f2fs, and iSCSI, all of which were...
Since your nodes are in a Proxmox cluster, SSH keys are already exchanged between them.
That makes this pretty painless.
SSH shutdown from node 1's NUT script: node 1 already has a working NUT client, so you just add a script that SSHs into node...
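A minimal sketch of such a script (the hostname node2, the path, and the SSH user are placeholders; hook it into upsmon via SHUTDOWNCMD in /etc/nut/upsmon.conf or an upssched action):

#!/bin/sh
# Hypothetical /usr/local/bin/ups-shutdown.sh on node 1:
# shut down the second node over SSH first (the PVE cluster has
# already exchanged root keys), then power off this node.
ssh -o BatchMode=yes -o ConnectTimeout=5 root@node2 'shutdown -h now'
shutdown -h now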
Hello everyone,
"Und täglich grüßt das Murmeltier"
I just upgraded my kernel from 6.17.4-2 to 6.17.9-1.
Sadly, I'm now missing the following binaries/scripts:
- arcstat
- arc_summary
Is it possible to include them in the next releases, and how can I...
At https://packages.debian.org/forky/amd64/zfsutils-linux/filelist I can see there are /usr/bin/zarcstat and /usr/bin/zarcsummary
Maybe those were renamed in the newer version?
What about man zarcstat?
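If the rename theory is right, a quick check on the upgraded box should confirm it (names taken from the Debian file list above):

# Old names gone, new names present?
which arcstat arc_summary
which zarcstat zarcsummary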
P.S.
Indeed...
Wow! Thank you for all those answers!
I did not expect to trigger such a thread haha :D
Running `zpool events -v` right after a failure shows multiple `ereport.fs.zfs.dio_verify_wr` events. I believe I have the same issue as...
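In case it helps anyone hitting the same events: on OpenZFS 2.3+ the Direct I/O behaviour can be steered per dataset, so one possible workaround sketch (tank/vmdata is a placeholder dataset) is:

# Show how O_DIRECT requests are currently handled
zfs get direct tank/vmdata
# Force buffered I/O for this dataset, avoiding the O_DIRECT
# write-verify path that raises dio_verify_wr events
zfs set direct=disabled tank/vmdata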
I have a small homelab composed of 3 (old-ish) ThinkCentres.
One of them runs home services, and also the home NAS, so I plugged my UPS USB connection in there, and NUT is working fine.
My Proxmox setup consists of two Proxmox 9.1.5 nodes in a cluster...
I didn’t think that opting for the hyper-converged implementation of Ceph in PVE would require giving up basic functionality of Ceph (the dashboard and the SMB mgr module).
And it’s not clear if this is intentional or not, which is why I asked...
Hello, I would like to describe an issue with the crc32c algorithm. To be clear, I am not sure if this is a Proxmox kernel issue or an upstream one for the distro (Debian 13?).
The case concerns kernel 6.17.9-1-pve. When a BTRFS filesystem is mounted...
I already checked all the network connectivity and could not find any issues with it.
I also have two corosync rings on different network cards, and those don't seem to have any issues in general either.
My guess is that the biggest issue was...
Your cpupower frequency-info output on the R740 is the key here:
no or unknown cpufreq driver is active on this CPU
Your working R730 has intel_cpufreq with the performance governor. The R740 has no frequency scaling driver at all. So the Xeon...
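A few quick checks to narrow that down (just a sketch; all stock commands):

# If no scaling driver is bound, this directory is missing entirely
ls /sys/devices/system/cpu/cpu0/cpufreq/
# Was intel_pstate disabled on the kernel command line?
cat /proc/cmdline
# Did the driver bail out, e.g. because the firmware owns P-states?
dmesg | grep -i -e pstate -e cpufreq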
I don't understand how an issue from 2020 is still not even acknowledged as an issue. Am I missing something here?
The issue is that there is still a bug where users are not associated with the top-level AD groups if they are part of...
The short answer is yes. The longer answer is that you need to take into consideration which Ceph daemons are running on the node and account for them in the interim.
Moving all but the OSDs is trivial: just create new ones on other nodes and delete the...
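For the easy ones (mon/mgr), a sketch with PVE's own tooling, creating the replacement before destroying the old daemon (the IDs are placeholders):

# On the node taking over the role:
pveceph mon create
pveceph mgr create
# Once the new mon has joined quorum, remove the old daemons:
pveceph mon destroy <old-mon-id>
pveceph mgr destroy <old-mgr-id>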
You're looking for:
To display a list of messages:
ceph crash ls
If you want to read the message:
ceph crash info <id>
then:
ceph crash archive <id>
or:
ceph crash archive-all
I am having a similar issue as well; I have created the following bug report for it: https://bugzilla.proxmox.com/show_bug.cgi?id=7289
But basically I'm noticing that if the VM is off (or a template) you are unable to clone/move the storage; however, if...