Tell Ceph to benchmark those drives again on OSD start, then restart the service when appropriate:
ceph config set osd osd_mclock_force_run_benchmark_on_init true
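In case it helps, getting the benchmark to actually run is then just a restart of the OSD's systemd unit on the node hosting it (the OSD id below is a placeholder):
# replace 3 with your OSD id
systemctl restart ceph-osd@3.service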
There's also a ceph tell-style command to run a benchmark right away, but I...
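If I had to guess, the on-demand benchmark being referred to is something along these lines (OSD id is a placeholder, and the reported units may vary by release):
# one-off write benchmark against a single OSD
ceph tell osd.3 bench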
There is a new QEMU 10.1 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9.
After internally testing QEMU 10.1 for over a month and having this version available on the pve-test repository almost as long, we...
Those PVE logs only show that PVE is removing the network interfaces related to VMIDs 101 and 106. Check the event log inside the VM. I have some Win2025 test VMs running 24x7 on both PVE 8.4 and PVE 9 without such an issue.
Although it doesn't seem to be...
Windows does that (assigns an APIPA address) when the IP is already in use somewhere on the network and some device replies to the ARP probe for the address you've entered in the configuration.
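If you want to confirm the duplicate from a Linux box on the same broadcast domain, something like this should do (interface and address are placeholders):
# duplicate address detection: exit code 0 means nothing else answered for that IP
arping -D -I vmbr0 -c 3 192.168.1.50 && echo "address looks free" || echo "another device answered"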
I may be missing something here, but keep in mind that files != zvols: you can use neither the script nor zfs-rewrite to make VM disk(s) "move" onto the newly added vdev. It would work for something like a PBS datastore. These options could...
I don't agree: you have 3 copies of your data, you have host HA, there is no SPOF, and you can easily grow if/when needed.
With proper sizing you can even tolerate the loss of some OSDs in any host and still allow Ceph to self-heal. If you lose...
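Just as a sketch of what to check regarding sizing, these read-only commands show the replica count and the failure domain (the pool name is a placeholder, and replicated_rule is assumed to be the default rule name):
ceph osd pool get <pool> size        # number of data copies
ceph osd pool get <pool> min_size    # copies needed to keep serving I/O
ceph osd crush rule dump replicated_rule    # failure domain, usually "host"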
Looks like I posted almost at the same time as @dcsapak. Maybe you or @mariol could take a look at the official documentation and mention how to nest pools and that they support permission inheritance. I've been unable to find that information in...
Since PVE 8.1 [1] (section "Access control") you can have 3 levels of nested resource pools and apply permissions with inheritance if you use "propagate". I think this is what you are looking for.
Unfortunately, I haven't found that in the manual...
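For reference, a minimal sketch of what I mean, assuming current pveum syntax and made-up names:
# nested pools use "/" in the pool id (up to 3 levels since PVE 8.1)
pveum pool add customers
pveum pool add customers/customerA
# grant a role on the parent pool; propagate makes it apply to the nested pools too
pveum acl modify /pool/customers --roles PVEVMAdmin --users alice@pve --propagate 1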
For me, Option A (2-node + QDevice with Ceph) is the worst idea ever (as explained above), Option D (3-node with ZFS replication) makes no sense when there are other options, and Option E (2-node + QDevice with a clustered iSCSI SAN) is a no-go due to...
Placing them in RAID 0 disables passthrough and probably removes the partition headers of the drives and/or hides them from the host. Without a more precise answer to "What happens exactly?", it's all guessing...
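To replace the guessing with data, something like this from the PVE host would show what the OS still sees (the device name is a placeholder):
lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL
smartctl -i /dev/sdX    # identify/SMART data often disappears behind a RAID volume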
It isn't supported. A RAID 0 disk isn't supported either. Your best bet is to change the controller personality to IT mode, if possible.
What happens exactly? There has been no change regarding this. My bet is that your drives had some kind of...
Extending on the previous reply, what GC does is:
Phase 1: read the index of each backup, which contains the list of chunks used by that backup snapshot, then update the access time on each chunk of that snapshot. It does so for each backup...
You won't be able to recover space from expired backups and your datastore will eventually become full. GC must work for PBS to behave as it is designed.
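A quick, hedged way to check whether a datastore filesystem accepts the kind of atime updates GC depends on (the path is just an example):
f=/mnt/datastore/.atime-test
touch "$f"
stat -c 'atime before: %x' "$f"
sleep 2
touch -a "$f"    # explicit atime bump, similar to what GC phase 1 relies on
stat -c 'atime after:  %x' "$f"
rm -f "$f"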
@Chris, would it be possible to implement an alternate GC that uses modify or creation...
FWIW, a couple of weeks ago I tried to use a Dell DD3300, quite similar to the OP's EMC Data Domain storage, and it refused to update access time via either NFS or CIFS. In my case, PBS 4 did show an error during datastore creation and refused to...
Dunno, and it's impossible to guess without logs and full configs. Given that it works well for you with a different, plain UDP and unsigned, transport, it seems that your switch/network misbehaves with the standard kronosnet transport (encrypted and signed traffic)...
There is no such thing. The only supported version on PVE 9 is Ceph Squid. If this is a cluster upgraded from PVE 8, you should have updated the repos and Ceph to Squid [1].
Your package list shows packages at version 19.2.*, not Quincy ones (17.2.*)...
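A quick way to double-check which Ceph release is actually installed and running:
ceph versions          # releases reported by the running daemons
ceph -v                # installed binary on this node
apt policy ceph-osd    # installed vs. candidate package version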
Again, Corosync does not use multicast with the default kronosnet transport; multicast isn't the issue.
It does, and it also does not support native link redundancy IIRC.
Unless you set it manually or this is a cluster that has been upgraded since ancient times, your PVE is using unicast. PVE has not used multicast since PVE 6.x IIRC, when Corosync 3.x was introduced along with the unicast kronosnet transport.
Post your...
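For reference, a quick way to confirm which transport the cluster is actually using (knet is unicast):
corosync-cfgtool -s                         # prints the local node id and the transport in use
grep -A8 '^totem' /etc/pve/corosync.conf    # transport/link settings as configured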
Sorry, but for me it's unclear what the problem is, how to reproduce it / when it happens, and what's different in your setup from a standard PVE installation, where there's no issue like the ones you seem to describe. IIUC, you use/need some custom...
If you don't mind powering off the source system, this [1] may help. I haven't tried it yet, so I can't really say how well it works for backup and especially for restore.
[1] https://www.apalrd.net/posts/2024/pbs_image/
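I haven't tried it, but the gist of that approach seems to be backing up the raw disk as an image archive with proxmox-backup-client from a live environment, roughly like this (repository, backup id and device are placeholders):
# run from a live system booted on the source machine, with the disk not mounted
export PBS_REPOSITORY='backup@pbs@192.168.1.10:datastore1'
export PBS_PASSWORD='...'
proxmox-backup-client backup disk.img:/dev/sda --backup-type host --backup-id old-server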