Since I haven't seen it mentioned yet, there is also our Ceph Benchmark paper from late 2023 [1]
[1] https://www.proxmox.com/images/download/pve/docs/Proxmox-VE-Ceph-Benchmark-202312-rev0.pdf
Then imho this warrants writing to office@proxmox.com to ask for help. I mean, you paid for support, so I see no reason why you should figure this out on your own ;) As far as I know, the person/s behind that address handle everything subscription...
@x509
corosync is on a 1G "private" link
no redundancy for the 25G link; a single node failure is accepted, as the datacenter hosting those servers has 24h service and spare parts, so a faulty node will be back up in a matter of minutes or hours
there is a total of 70 VMs...
Just wanted to add a quick comment for anyone who might find this post via Google or similar:
This is no longer needed, as the whole functionality (including comments/notes) is now built into vma-to-pbs directly. It can just be pointed at a...
After reading through the various discussions about the O_DIRECT bug/feature, I understand that this primarily affects VMs configured with cache mode set to none (which is the default in Proxmox).
I’m planning to run the C code to reproduce the...
Hi, for everyone who has boot issues on kernel 6.17.4-2 if Intel VMD is enabled: One option would be to try kernel 6.17.9-1 (currently on pve-test [1] ), it contains a potential fix [2] for the issue. If you test 6.17.9-1, it would be great if...
Thanks for your quick reply. I checked and there's no hint:
/etc/fstab is practically empty.
systemctl status '*.mount' gives no hints. (It shows a lot, including the new, fine-working CIFS shares, but not the old ones.)
I scrolled back a bit in the journal...
My bad - I checked again: the messages don't appear after the last reboot. So I guess there was some function in the system that still tried (for weeks now!) to communicate with the once-lost CIFS/NFS connections. But after a reboot this...
If by redundancy you mean disk fault tolerance, the higher the number after "raidz", the higher the fault tolerance. In practice: raidz2 or higher (never use single-parity raidz unless you are prepared to lose the pool at any time)
performance= striped mirrors...
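As a sketch of the two layouts mentioned above (pool name and /dev/sdX device names are placeholders; on real hardware you would use stable /dev/disk/by-id paths):

```shell
# raidz2 over six disks: any two disks may fail; usable capacity of ~4 disks
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# striped mirrors over the same six disks: better IOPS, faster resilver,
# but only one disk per mirror pair may fail; usable capacity of ~3 disks
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/se /dev/sdf
```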
* FYI, bzfs is written in Python (not a "shell script"), compression is configurable via CLI options, and it is as robust as zrepl, and far more robust and reliable than syncoid.
* I think zrepl (and syncoid) have different focus and...
Ok, we already did zpool replace -f <pool> <old-device> <new-device>
But now we should remove the device, wipe it, reinsert it and do the following:
sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f...
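Put together, the usual sequence for replacing a ZFS boot disk on PVE looks roughly like this (a sketch only: device names are placeholders, "rpool" and the partition numbers assume a default PVE ZFS install — check your actual layout first):

```shell
# 1) copy the partition table from the healthy boot disk to the new disk,
#    then randomize the new disk's GUIDs
sgdisk /dev/sdX -R /dev/sdY            # sdX = healthy disk, sdY = new disk
sgdisk -G /dev/sdY

# 2) resilver ZFS onto the new disk's ZFS partition
#    (partition 3 on a default PVE install)
zpool replace -f rpool <old-zfs-partition> /dev/sdY3

# 3) make the new disk bootable again (ESP is partition 2 by default)
proxmox-boot-tool format /dev/sdY2
proxmox-boot-tool init /dev/sdY2
```

Watch zpool status until the resilver finishes before pulling any further disks.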
Hey everyone, I have a 3-node PVE cluster that I've been using for testing and learning the platform in a business setting.
Initially I had my PVE nodes connected to a Mellanox SX1024 40G switch.
A few months ago, I spent some time and...
Get a backup first, then see if there are any available BIOS/firmware updates. Even if there are not, what you posted should not bother you too much.
With the large amount of 3rd and 4th tier hardware out there, backed by generic BMC software -...
It's up and my server is running! I'm going to run a backup on it now so I don't lose all the changes I made since the previous backup (about a year old... argh!). I know - I had another identical server running in production and this was a copy...
In the future, if you do decide to mount a device that might potentially disappear, you should use one of the options discussed here:
https://unix.stackexchange.com/questions/53456/what-is-the-difference-between-nobootwait-and-nofail-in-fstab...
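For example, a CIFS entry with nofail (plus systemd automount, so the share is only mounted on first access and a dead server can't stall boot) might look like this - server, share, mount point, and credentials path are all placeholders:

```
//fileserver/share  /mnt/share  cifs  credentials=/etc/cifs-creds,nofail,_netdev,x-systemd.automount,x-systemd.mount-timeout=10s  0  0
```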