ceph usage gives you raw utilization numbers. if you use ceph df detail it will give you actual statistics, including compression savings (which I suspect are the cause of the difference between du and df)
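for example (run on any node with admin keyring; the pool names you'll see are obviously your own):

```shell
# global raw usage plus per-pool detail; the POOLS section includes
# "USED COMPR" / "UNDER COMPR" columns showing how much data was
# compressed and how much space that saved
ceph df detail
```

compare the compressed columns against what du reports inside the guest and the gap should mostly be accounted for.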
-cpu x86-64-v3,... and -cpu host,... are not the same thing. x86-64-v3 is a named CPU model / ABI baseline, while host is host passthrough. In QEMU terms, named models expose a predefined, stable feature set, and host passthrough exposes the host...
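a quick sketch of the difference on the qemu command line (flags after -cpu trimmed, this is just illustrative):

```shell
# named model: predefined, stable feature set - migration-safe
# between hosts of different CPU generations
qemu-system-x86_64 -cpu x86-64-v3 ...

# host passthrough: exposes (nearly) all host CPU features -
# best performance, but live migration only between identical hosts
qemu-system-x86_64 -cpu host ...
```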
This can be done without any downtime. and yes, you want to make sure that the running ceph version is available on the next distro, so before you start, make sure you upgrade your ceph to squid. This process is non-disruptive and will not...
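before starting, it's worth sanity-checking that every daemon is actually on squid and the cluster is healthy, e.g.:

```shell
# all mons/mgrs/osds should report a squid (19.x) version string
ceph versions

# cluster should be HEALTH_OK before you begin the distro upgrade
ceph -s
```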
that should be your first option ;)
PVE8 is not EOL yet, and even when it is (this August) you can keep running for some time after. might be the better option, especially if you don't need anything pve9-specific. This will give you time to...
so here's what I'd suggest-
don't use 10.13.30.x for corosync at all.
assign arbitrary addresses to bond0 and bond1; --edit- ON DIFFERENT SUBNETS. ideally, they should be on separate vlans too. use those addresses as ring0 and ring1.
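something along these lines in /etc/pve/corosync.conf (the 10.10.0.x / 10.20.0.x subnets here are made up for illustration; use whatever fits your network):

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.0.1   # bond0 - its own subnet/vlan
    ring1_addr: 10.20.0.1   # bond1 - a different subnet/vlan
  }
  # ...repeat for the other nodes
}
```

remember to bump config_version when editing and let pmxcfs sync it out.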
@alexskysilk
You're right to call that out. "Very small environments" was a poor choice of words on my part. The distinction I was trying to make isn't about node count; it's about business value. We have customers running three-node clusters...
Hi @nvanaert ,
There's quite a bit to unpack here.
First and foremost, Proxmox VE does not monitor storage or network path health. An All Paths Down (APD) condition will not result in a node fence, nor does it interact with the VM HA subsystem...