Thanks for the heads up. Pretty sure most were created with Ceph Reef, except a few that got recreated recently with Squid 19.2.3. I'm aware of that bug, but given that I don't use EC pools (the Ceph bug report mentions it seems to only happen on OSD...
Best of luck to you *fingers crossed*. In my client's case I had to rebuild the whole cluster and fix Ceph by manually restoring placement groups - which was a pain.
@fstrankowski I'm fully aware of the risks of an OSD being full and know how to deal with that, but in no case should an OSD break because of that ;)
Fragmentation definitely has an impact on this; I'll watch it more closely from now on...
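For anyone who wants to watch it too: BlueStore can report a fragmentation score per OSD via the admin socket (run on the node hosting that OSD; `osd.0` is just an example ID, values near 0 mean little fragmentation, values approaching 1 mean heavy fragmentation):

```shell
ceph daemon osd.0 bluestore allocator score block
```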
Initially I'd like to raise concerns about the amount of available storage already being in use. By default Ceph doesn't allow more than 80%, so you'd have to take precautions really soon while taking these concerns into consideration.
I'd highly...
In short: if Ceph warns you about something, do something about it.
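To see where a cluster actually stands relative to those thresholds:

```shell
ceph df                      # raw and per-pool usage
ceph osd dump | grep ratio   # full / backfillfull / nearfull ratios
```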
I read the full bug report and found this comment [1]: "This issue seems to mostly affect disks which were heavily fragmented." Mine are, and in fact I have some warnings related...
PVE 8.4.14 + Ceph 19.2.3, 3-node cluster. All disks are PCIe NVMe. Different pools, some with zstd compression enabled.
I'm seeing OSDs crashing lately with the same failure. The journal shows that it is unable to properly start RocksDB, failing with an assert...
@Zappes please, link the Bugzilla you mention here [1]. I would love to be able to pull (not push) encrypted syncs where the source is unencrypted for any reason and the destination PBS must store encrypted backups.
[1]...
Say I have a namespace called "PVE" where the PVE cluster stores its backups. In the same datastore, I have another namespace called "DELETED". When a VM is deleted from the PVE cluster, I move its backups from namespace "PVE" to "DELETED" in order...
From the latest corosync release:
https://github.com/corosync/corosync/releases
"A new option (totem.ip_dscp) is available to configure DSCP for traffic
prioritization. Thanks to David Hanisch for this great improvement."
could be interesting for...
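For reference, a minimal sketch of how that new option might look in /etc/corosync/corosync.conf (the DSCP value 46 / "EF" is only an example class, not a recommendation; the rest of the totem section is elided):

```
totem {
  ...
  ip_dscp: 46
}
```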
Seeing this issue with nested PVE on PVE on an EPYC 9124 CPU. The host has swap and KSM enabled, although it has plenty of free memory (100GB free of 512GB). This cluster currently runs PVE 8.4.14 and runs other workloads too besides nested PVE VMs, with...
I've already discussed this here [1]. When I really need to keep the last backup of the last day of the month, I do this on PBS:
Create a namespace "MONTHLY"
Create a daily sync job that runs "if day is 28 to 31 of each month at...
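As a rough sketch of those steps (the job name, namespace names, datastore name and the `pbs-self` remote are placeholders, and the exact flags should be checked against `proxmox-backup-manager sync-job create --help` on your version), the "day 28 to 31" condition can be expressed as a systemd-style calendar event in the schedule:

```shell
# pull the "PVE" namespace into "MONTHLY" only on days 28-31
proxmox-backup-manager sync-job create monthly-archive \
    --store mydatastore --ns MONTHLY \
    --remote pbs-self --remote-store mydatastore --remote-ns PVE \
    --schedule '*-*-28..31 23:30'
```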
It works OK for me on v147.0.2 and v147.0.3, both on Linux, accessing noVNC from PVE 9.0.x, PVE 9.1.x and PVE 8.x. Try it from a VM to rule out some issue with cache/whatever on your current PC.
Dear Proxmox-Community, we are asking for your support.
The European Commission has opened a Call for Evidence on the initiative European Open Digital Ecosystems, an initiative that will support EU ambitions to secure technological sovereignty...
As stated above, it can be done with sync jobs + manual deletion from the source namespace. Currently, local sync jobs only allow syncing between different datastores (I'm still wondering why). You will have to add that PBS itself as a remote so...
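A rough sketch of the "add the PBS itself as a remote" step (hostname, auth-id, fingerprint and password are placeholders; verify the flags against your PBS version):

```shell
proxmox-backup-manager remote create pbs-self \
    --host localhost \
    --auth-id 'sync@pbs' \
    --fingerprint '<server certificate fingerprint>' \
    --password '<password>'
```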
Thanks, but that's unrelated to the issue I described.
The problem is that I end up with two .link files for nic0, because pve-network-interface-pinning doesn't recognize that there's already a pinned name due to a different .link file naming scheme. Both...
That's the worst possible design. Besides the too-cheap devices...
Look at one node: when one OSD fails, the other one on the same node has to take over the data from the dead one. It cannot be sent to another node, because there are already copies...
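A quick worked example of why two OSDs per node is risky (the sizes and utilization are hypothetical; it assumes a replicated pool with one copy per node, so a dead OSD's data can only move to its node-local sibling):

```python
# Hypothetical: 2 OSDs per node, equal size, 45% used each,
# replicated pool size=3 on a 3-node cluster (one copy per node).
used_fraction = 0.45

# If one OSD on the node dies, its data must be re-replicated onto
# the surviving OSD on the same node (other nodes hold the other copies),
# so the survivor's utilization roughly doubles.
survivor_used = used_fraction * 2
print(survivor_used)  # 0.9 -> already past Ceph's default 0.85 nearfull ratio
```

In other words, anything above ~42% utilization per OSD leaves no headroom to survive a single-OSD failure in this layout.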