You no longer have a cluster, or at least not a functioning one. Before proceeding to troubleshoot adding a node, do you want to rescue the existing cluster, or start from scratch? If you want to rescue the existing cluster, you need to establish...
From a node in the cluster and from the node being added, post the content of:
/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf (from a node already in the cluster)
You CAN have a gateway outside the scope of a subnet, BUT it will only work one way: traffic will route OUT to the gateway properly, but incoming traffic will not.
OP, pay attention to what your PC's address and gateway are. If it's not in the same...
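If you want to sanity-check this yourself, the "same subnet" test is just a bitmask comparison. A toy sketch (all addresses are made-up examples):

```shell
#!/bin/sh
# Toy helper: do a host IP and its gateway share a subnet?
ip_to_int() {
    oldifs=$IFS; IFS=.
    set -- $1
    IFS=$oldifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# usage: same_subnet HOST_IP GATEWAY_IP PREFIX_LENGTH
same_subnet() {
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    if [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]; then
        echo "same subnet"
    else
        echo "different subnet"
    fi
}

same_subnet 192.168.1.50 192.168.1.1 24   # prints: same subnet
same_subnet 192.168.1.50 192.168.0.1 24   # prints: different subnet
```

If the second case is your situation, the gateway setting will not behave the way you expect.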
Again: only other community members are participating in this thread. If you don't want to purchase additional subscriptions for migration (which is understandable) but also don't want to switch repos to non-enterprise (for whatever reason), you need...
That is almost never so. Ceph requires fast networks and SSDs to perform well. The MD3200i doesn't even support anything faster than 1Gb, and yet it will yield more satisfying results at its cost of entry (likely free or next to free). For your use case...
Post logs from the storage and the host for the time period the drop occurred. It would also be good to look at your /etc/network/interfaces file, as well as your network settings from the storage (not just the IP, but connection speed and MTU).
I'm guessing...
Nodes failing isn't the issue. The quorum device is meant as a defense against a silo connectivity "tie". Anything short of a room losing connectivity would be handled normally without it.
There are MANY. A simple script of freeze, rdiff signature, rdiff delta, thaw would get you where you want to be. It would be harmless to run it nightly; hell, even every hour. But you can also condition the run on a modified-date check if that...
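To make the shape of that concrete, here is a sketch. All paths are assumptions, and DRY_RUN=1 only prints the commands instead of running them, so nothing is actually frozen:

```shell
#!/bin/sh
# Sketch only: freeze -> rdiff signature -> rdiff delta -> thaw.
set -eu
DRY_RUN=${DRY_RUN:-1}
MNT=/mnt/guest-data                  # filesystem to quiesce (assumption)
SRC=$MNT/disk.raw                    # file being protected (assumption)
SIG=/backup/disk.sig                 # signature from the previous run (assumption)
DELTA=/backup/disk.$(date +%F).delta

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run fsfreeze --freeze "$MNT"             # quiesce writes (util-linux)
run rdiff signature "$SRC" "$SIG.new"    # signature for the *next* run
run rdiff delta "$SIG" "$SRC" "$DELTA"   # changes since the last signature
run fsfreeze --unfreeze "$MNT"           # thaw as quickly as possible
run mv "$SIG.new" "$SIG"
echo "delta would land in $DELTA"
```

The first run has no prior signature, so it needs a full copy plus an initial `rdiff signature`; after that, restoring is `rdiff patch` of the base against the accumulated deltas.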
More modern than what? My hardware is usually 2-5 years old before replacement.
I wouldn't know. I have no use case for this.
Passing cpu=host to the guest allows Windows to enable vulnerability mitigation code (Spectre, Meltdown, etc.), which is...
https://learn.microsoft.com/en-us/windows-hardware/design/minimum/windows-processor-requirements
You CAN make Windows 11/2025 work with an older CPU, but the consequence is slow performance. No amount of hypervisor tweaking is going to fix that.
I don't see where anyone suggested that. You can either run your cluster using the subscription repo (slower, stable) or the no-sub repo (quicker, less, umm, stable). In practice, the no-sub repo is stable enough for production, certainly in mine...
OK, in that case you need to pay special attention to your network design.
You have, at MINIMUM, the following disparate network functions:
1. corosync
2. ceph public
3. ceph private
4. NFS payload
5. Internet/service network
6. BMC
Commingling...
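As a sketch of what keeping those functions separated can look like in /etc/network/interfaces (interface names, VLAN IDs and subnets are all assumptions for illustration, not a recommendation for your exact hardware):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

# 1. corosync - latency sensitive, ideally its own physical NIC
auto eno3
iface eno3 inet static
    address 10.10.0.11/24

# 2./3. ceph public and ceph private on VLANs over the bond
auto bond0.20
iface bond0.20 inet static
    address 10.20.0.11/24

auto bond0.30
iface bond0.30 inet static
    address 10.30.0.11/24

# 4. NFS payload
auto bond0.40
iface bond0.40 inet static
    address 10.40.0.11/24

# 5. internet/service network via the main bridge
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# 6. BMC stays out-of-band on its own switch port
```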
Ahh, makes sense. Will the NFS boot apply to the workloads deployed on this hypervisor? If so, don't bother with Ceph at all at this stage, since you already have storage. Your hardware is perfectly adequate for workload performance, but Ceph on...
There is nothing "special" about the VxRail hardware; if the purpose of the exercise is to prove it "works", I can save you the trouble: it works.
The better question is: do you have a better description of the "concept" here? As others noted...
I suppose maybe I didn't understand what the problem was. I had understood you to not want to transfer backups when no change was present.
You could have started here and made the whole thread unnecessary. You don't actually NEED vzdump at all to...
The issue isn't EXACTLY the baseline virtual CPU model (although that comes into play too) but rather the presence/absence of specific feature flags and/or hardware vulnerability mitigations.
The x86-64-vX models are essentially presets for flags, and...
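In Proxmox terms that boils down to one line in the VM config. A hedged sketch (VMID 100 and the specific flags are illustrative; check which flags your hosts actually expose before enabling any):

```
# /etc/pve/qemu-server/100.conf  (VMID 100 is an assumption)

# a portable baseline model plus explicitly enabled mitigation flags:
cpu: x86-64-v2-AES,flags=+pcid;+spec-ctrl

# or pass everything the host has (fastest, lets the guest enable its
# mitigations, but ties live migration to identical/compatible hosts):
cpu: host
```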
You're trying to reinvent the wheel.
Modern backup strategies are differential, which means they:
- are content aware (via CBT)
- transfer only the changes
The "simple" vzdump process is not content aware, so you would have to resort to...
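For a sense of what "not content aware" costs: without CBT, finding what changed means reading and hashing every block of both images. A toy sketch (the two tiny generated files are made-up stand-ins for real disk images):

```shell
#!/bin/sh
# Compare two "images" block by block to find what changed.
set -eu
BLOCK=4096
tmp=$(mktemp -d)
printf 'A%.0s' $(seq 1 8192) > "$tmp/old.img"   # 8 KiB of 'A'
cp "$tmp/old.img" "$tmp/new.img"
# flip one byte inside the second block
printf 'B' | dd of="$tmp/new.img" bs=1 seek=5000 conv=notrunc status=none

blocks=$(( ( $(stat -c%s "$tmp/old.img") + BLOCK - 1 ) / BLOCK ))
changed=0
i=0
while [ "$i" -lt "$blocks" ]; do
    a=$(dd if="$tmp/old.img" bs=$BLOCK skip=$i count=1 status=none | md5sum)
    b=$(dd if="$tmp/new.img" bs=$BLOCK skip=$i count=1 status=none | md5sum)
    [ "$a" = "$b" ] || { echo "block $i changed"; changed=$((changed + 1)); }
    i=$((i + 1))
done
echo "$changed of $blocks blocks changed"   # prints: 1 of 2 blocks changed
rm -rf "$tmp"
```

A CBT-aware tool skips all that reading because the hypervisor already tracked the dirty blocks; the brute-force loop above is the fallback you're stuck with otherwise.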
1. LXC.
2. "Decent" speeds are very relative. Your TPS on this system will be abysmal in the best of cases, and will drop as soon as you start hammering the system.
3. Running the container in unprivileged mode.
4. Since all you need is...