Say I have a namespace called "PVE" where the PVE cluster stores its backups. In the same datastore, I have another namespace called "DELETED". When a VM is deleted from the PVE cluster, I move its backups from namespace "PVE" to "DELETED" in order...
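For anyone wanting to replicate this: there is no built-in "move between namespaces" operation, so one way is to move the backup group directory inside the datastore while no backup or GC is running. A sketch only — the paths and VMID are placeholders, and the on-disk layout shown is an assumption you should verify on your own PBS first:

```shell
# Sketch: move VM 100's backup group from namespace "PVE" to "DELETED"
# (same datastore, so the chunk store is shared; adjust paths to your setup)
mkdir -p /path/to/datastore/ns/DELETED/vm
mv /path/to/datastore/ns/PVE/vm/100 /path/to/datastore/ns/DELETED/vm/100
```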
From the latest corosync release:
https://github.com/corosync/corosync/releases
"A new option (totem.ip_dscp) is available to configure DSCP for traffic
prioritization. Thanks to David Hanisch for this great improvement."
could be interesting for...
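For reference, the option name totem.ip_dscp suggests it would live in the totem section of corosync.conf, something like the fragment below. This is an assumption based only on the release note; the value 46 (Expedited Forwarding) is just an example:

```
totem {
    # DSCP value applied to cluster traffic; 46 = Expedited Forwarding (example)
    ip_dscp: 46
}
```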
Seeing this issue with nested PVE on PVE on an EPYC 9124 CPU. The host has swap and KSM enabled, although it has plenty of free memory (100GB of 512GB). This cluster currently runs PVE8.4.14 and runs other workloads too besides nested PVE VMs, with...
I've already discussed it here [1]. When I really need to keep the last backup of the last day of the month, I use this on PBS:
Create a namespace "MONTHLY"
Create a daily sync job that runs "if day is 28 to 31 of each month at...
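The day-28-to-31 window works because each daily sync overwrites the previous one, so only the last day of the month survives. If you instead wanted to check for the true last day in a hook script, a minimal sketch (a hypothetical helper, not part of PBS) could look like:

```python
import calendar
import datetime

def is_last_day_of_month(d: datetime.date) -> bool:
    # calendar.monthrange returns (weekday_of_first_day, days_in_month)
    return d.day == calendar.monthrange(d.year, d.month)[1]

print(is_last_day_of_month(datetime.date(2024, 2, 29)))  # True (leap year)
print(is_last_day_of_month(datetime.date(2024, 2, 28)))  # False
```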
It works ok for me on v147.0.2 and v147.0.3, both on Linux, accessing noVNC from PVE9.0.x, PVE9.1.x and PVE8.x. Try it from a VM to rule out some issue with cache/whatever on your current PC.
Dear Proxmox-Community, we are asking for your support.
The European Commission has opened a Call for Evidence on the initiative European Open Digital Ecosystems, an initiative that will support EU ambitions to secure technological sovereignty...
As stated above, it can be done with sync jobs + manual deletion from the source namespace. Currently, local sync jobs only allow syncing between different datastores (I'm still wondering why). You will have to add that PBS itself as a remote so...
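Roughly, the two steps would look like the fragment below. Every name, credential and flag here is a placeholder sketch from memory, not a tested recipe — check the proxmox-backup-manager documentation before running anything:

```shell
# 1) Register the PBS host itself as a "remote" (placeholders throughout)
proxmox-backup-manager remote create local-pbs \
    --host 127.0.0.1 \
    --auth-id 'sync@pbs' \
    --password 'xxx' \
    --fingerprint '<server cert fingerprint>'

# 2) Sync namespace PVE into namespace DELETED on the same datastore
proxmox-backup-manager sync-job create pve-to-deleted \
    --store mystore --ns DELETED \
    --remote local-pbs --remote-store mystore --remote-ns PVE \
    --schedule daily
```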
Thanks, but that's unrelated to the issue I described.
The problem is that I end up with two .link files for nic0 because pve-network-interface-pinning doesn't recognize that there's already a pinned name, due to the different .link file naming scheme. Both...
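For context, a pinned name typically comes from a systemd .link file along these lines — the filename is exactly what differs between naming schemes, while the content matches on MAC (placeholder address below):

```
# /etc/systemd/network/50-nic0.link  (example filename; MAC is a placeholder)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=nic0
```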
That's the worst design possible. Besides the too-cheap devices...
Look at one node: when one OSD fails, the other one on the same node has to take over the data from the dead one. It cannot be sent to another node because there are already copies...
Yes, they cost more and will get really expensive in the coming months, but second-hand SATA/SAS are easy to find and not that costly. In the long run they end up being cheaper, as they don't degrade as fast as consumer ones, so you won't need to...
Don't want to start an argument here, but whoever told you that has little idea what a PVE Ceph mesh cluster is. Linux kernel routing may use like 0.1% of CPU, and FRR may use like 3% CPU while converging or during node boot for a few seconds. If...
Added a comment to that bugzilla post... hopefully someone from Proxmox can chime in on this issue. I'm still unsure if this is an actual bug, something I'm doing in the config of the filters, or even something else.
When I need to, I use an AD backend and filter by groups that I create specifically to manage PVE privileges. I've never needed nested groups, as the environments I've used this in were not big enough to justify nesting groups or not creating...
For once, AI is right :)
Any consumer drive will have low Ceph performance due to RocksDB and sync writes, but those drives in particular are terrible for anything but PC archiving purposes due to their small SLC cache and very slow QLC NAND...
This lab is using PVE9.1.4, no subscription (although I don't think there's anything different in the Enterprise repo in this regard). Using nested PVE to test Ceph configs, etc. When installing the nested PVE with the ISO I chose to pin the network...
Hi,
it is a racy bug that is fixed in qemu-server >= 9.1.3 with:
https://git.proxmox.com/?p=qemu-server.git;a=commit;h=b82c2578e7a452dd5119ca31b8847f75f22fe842...