The number of cluster members is an inexact limit. The ACTUAL limit has to do with how much data the cluster members have to keep synchronized: if each of your cluster members had 400 VMs with continuous API traffic, your cluster would probably...
You need two corosync links. For 12 nodes on gigabit I would use dedicated links for both, just in case, even if having a dedicated link just for Link0 would be enough. The most I've had in production with gigabit corosync is 8 hosts, no problems at all.
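For reference, a node entry with two links in /etc/pve/corosync.conf looks roughly like this; this is just a sketch, the node name and addresses are placeholders and each link is assumed to sit on its own subnet:

node {
  name: pve1
  nodeid: 1
  quorum_votes: 1
  ring0_addr: 10.10.0.1   # Link0, dedicated corosync network
  ring1_addr: 10.20.0.1   # Link1, second dedicated network
}

If you edit that file by hand, remember to bump config_version in the totem section so the change gets applied cluster-wide.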
yes.
You've already been given answers, you just don't like them.
Reinstall and restore from backup. Fixing your install is more complicated and will require you to read the documentation instead of just posting questions that are already covered there.
Given the logs you posted, I would start by removing docker from that host (it's not officially supported) and not exposing critical services like ssh to the internet. You also mention "VNC", which makes me think maybe you installed PVE on top of...
There is RSTP [1]
Maybe, but it does allow using both links simultaneously, while with RSTP only one is in use and the other is fallback only.
Which you should have anyway, connected to two switches with MLAG/stacking to avoid the network being...
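If you go the bonded-links route, a minimal sketch of an LACP bond in /etc/network/interfaces would be something like this; the NIC names, bridge and addresses are assumptions, and the two switches must support MLAG/stacking for the bond to span both of them:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0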
If Ceph doesn't let you write, it is because some PG(s) don't have enough OSDs to fulfill the size/min_size set on the pool (you can check both with the commands shown after this list). In a 3-host Ceph cluster, for that to happen you either have to:
Lose 2 hosts: you won't have quorum on either Ceph or PVE...
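To see what a given pool currently requires, something like this works (pool name is a placeholder):

ceph osd pool get <poolname> size       # replicas kept per object
ceph osd pool get <poolname> min_size   # minimum replicas needed to keep accepting writes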
That data means little if you don't post the exact fio test you ran. AFAIR, the benchmark that Ceph does is a 4k write bench to find out the IOps capacity of the drive. You should bench that with fio. Also, I would run the same bench on a...
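For what it's worth, a 4k random write test along the lines I mean would look roughly like this with fio; the device path, runtime and iodepth are just examples, and pointing it at a raw device is destructive:

fio --name=4kwrite --filename=/dev/sdX --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --direct=1 --sync=1 --ioengine=libaio --runtime=60 --time_based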
Tell Ceph to benchmark those drives again on OSD start and restart the service when appropriate:
ceph config set osd osd_mclock_force_run_benchmark_on_init true
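On a PVE-managed Ceph, restarting a single OSD afterwards would be something like this (OSD ID is a placeholder):

systemctl restart ceph-osd@<OSD_ID>.service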
There's also another ceph tell-like command to run a benchmark right now, but I...
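If I remember right, the on-demand one is along these lines (double-check the docs before relying on it; the OSD ID is a placeholder):

ceph tell osd.<id> bench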
There is a new QEMU 10.1 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9.
After internally testing QEMU 10.1 for over a month and having this version available on the pve-test repository almost as long, we...
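Assuming the pve-test or pve-no-subscription repository is already enabled, picking up the new build is the usual routine (nothing special, just a sketch):

apt update
apt install pve-qemu-kvm    # or a full 'apt dist-upgrade'

Running VMs keep using the old QEMU binary until they are stopped and started again (or live-migrated).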
Those PVE logs only show that PVE is removing the network interfaces related to VMIDs 101 and 106. Check the event log inside the VM. I have some Win2025 test VMs running 24x7 both on PVE8.4 and PVE9 without any such issue.
Although it doesn't seem to be...
Windows does that (assigns an APIPA address) when the IP is already in use somewhere in the network and some device replies to the ARP probe for the address you've entered in the configuration.
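You can confirm a duplicate from any Linux host on that segment with iputils' arping in duplicate address detection mode; the interface and address below are placeholders:

arping -D -c 3 -I vmbr0 192.168.1.50   # any reply means another device already owns that IP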
I may be missing something here, but keep in mind that files != zvols: you can't use either the script or zfs-rewrite to make the VM disk(s) "move" onto the newly added vdev. It would work for something like a PBS datastore. These options could...
I don't agree: you have 3 copies of your data, you have host HA, there is no SPOF, and you can easily grow if/when needed.
With proper sizing you can even tolerate the loss of some OSDs in any host and still allow Ceph to self-heal. If you lose...
Looks like I posted almost at the same time as @dcsapak. Maybe you or @mariol could take a look at the official documentation and mention how to nest pools and that they support permission inheritance. I've been unable to find that information in...
Since PVE8.1[1] (section "Access control") you can have 3 levels of nested resource pools and apply permissions with inheritance if you use "propagate". I think this is what you are looking for.
Unfortunately, I haven't found that in the manual...
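From memory (so double-check the syntax), creating a nested pool and granting a role with propagation looks roughly like this; the pool names and user are made up for the example:

pvesh create /pools --poolid customerA
pvesh create /pools --poolid customerA/projectX
pveum acl modify /pool/customerA --users alice@pve --roles PVEVMUser --propagate 1

With --propagate 1 the ACL applies to customerA and everything nested under it.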
For me, Option A (2-node + QDevice with Ceph) is the worst idea ever (as explained above), Option D (3-node with ZFS replication) makes no sense when there are other options, and Option E (2-node + QDevice with a clustered iSCSI SAN) is a no-go due to...
Placing them in RAID 0 disables passthrough and probably removes the partition headers of the drives and/or hides them from the host. Without a more precise answer to "What happens exactly?" it's all guessing...
It isn't supported. A RAID 0 disk isn't supported either. Your best bet is to change the controller personality to IT mode, if possible.
What happens exactly? There has been no change regarding this. My bet is that your drives had some kind of...
Extending on the previous reply, what GC does is:
Phase 1: read the index of each backup, which contains the list of chunks used by that backup snapshot, then update the access time on each chunk of that backup snapshot. It does so with each backup...
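For reference, garbage collection can also be started and watched from the CLI (datastore name is a placeholder):

proxmox-backup-manager garbage-collection start <datastore>
proxmox-backup-manager garbage-collection status <datastore>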