Virtualization makes the most sense when you use it to break work down into small chunks. Imagine trying to fit Tetris pieces that are 4 squares each, and then being handed a piece that is 48 squares. If your use case is really that big, the only...
PBS storage isn't like NFS; if it's not present, it will not hang your host. I would investigate your logs a bit more carefully to see what the actual culprit is; perhaps you have other items in /etc/fstab and/or /etc/pve/storage.cfg?
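A quick way to check both places, plus what PVE itself thinks about each storage (commands are a sketch; run them on the PVE host as root):

```shell
# Storage definitions PVE manages itself:
cat /etc/pve/storage.cfg
# Mounts outside PVE's control (a hung NFS/CIFS entry here CAN stall boot):
grep -v '^\s*#' /etc/fstab
# Per-storage state as PVE sees it; offline storages show as "inactive":
pvesm status
# Recent messages from the storage status daemon:
journalctl -b -u pvestatd | tail -n 50
```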
What @bbgeek17 suggested should have been the first thing Veeam support instructed you to do. That kind of experience with their support (among other things) has led me to abandon using Veeam with PVE.
I recommend that you figure out a curl-based way to upload a file to local storage with the same account that Veeam is using. Run it local to PVE first; if that works, run it from the Veeam network segment. If that works, convert it to PS...
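A minimal sketch of such a curl test against the PVE upload API. Host, node, storage name, token and file path are all placeholders; substitute the account/token Veeam actually uses:

```shell
# Placeholders - replace with your values:
PVE_HOST="pve.example.com"
NODE="pve1"
STORAGE="local"
TOKEN="root@pam!veeamtest=00000000-0000-0000-0000-000000000000"

# Upload a small test ISO via the API (API token auth, self-signed cert OK):
curl -k -X POST \
  -H "Authorization: PVEAPIToken=${TOKEN}" \
  -F "content=iso" \
  -F "filename=@/tmp/test.iso" \
  "https://${PVE_HOST}:8006/api2/json/nodes/${NODE}/storage/${STORAGE}/upload"
```

If this succeeds locally but fails from the Veeam segment, you are looking at a network/firewall problem rather than a permissions one.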
vmbr0 and vmbr1 are taken from your existing configuration... if they don't work, you have bigger problems.
iface bond0.661 inet static
...etc
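For reference, a complete stanza of that shape typically looks like this (addresses are RFC 5737 placeholders, not taken from your config; the `bond0.661` dot notation makes ifupdown2 create VLAN 661 on top of bond0 automatically):

```
auto bond0.661
iface bond0.661 inet static
        address 192.0.2.10/24
        # gateway only on the ONE interface that carries the default route:
        # gateway 192.0.2.1
```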
You keep using that term. I'm unfamiliar with such a topology; is it Ethernet?
The physical layer bringing your interfaces to your hosts isn't the relevant factor; it's how you manage your logical networking configuration.
I assume the Ethernet interfaces you describe are 2x 25Gbit and 2x 1Gbit, so you have 4 interfaces (the...
/etc/pve isn't a normal filesystem. It's a special clustered filesystem (pmxcfs) that is kept in a database format and gets distributed and synchronized in real time. You can read about it here: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
/etc/pve...
You no longer have a cluster, or at least not a functioning one. Before proceeding to troubleshoot adding a node, do you want to rescue the existing cluster, or start from scratch? If you want to rescue the existing cluster, you need to establish...
From a node already in the cluster and from the node being added, post the content of:
/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf (from a node already in the cluster)
You CAN have a gateway outside the scope of a subnet, BUT it will only work one way: traffic will route OUT to the gateway properly, but incoming traffic will not.
OP, pay attention to what your PC's address and gateway are. If it's not in the same...
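To illustrate the one-way behavior, here is how an off-subnet gateway is made reachable at all (a sketch with RFC 5737 placeholder addresses; needs root):

```shell
# Host address in 192.0.2.0/24, but the gateway sits in another subnet:
ip addr add 192.0.2.10/24 dev eth0
# Host route forces the gateway on-link, so ARP for it works:
ip route add 198.51.100.1 dev eth0
ip route add default via 198.51.100.1 dev eth0
# Outbound packets now reach 198.51.100.1 fine, but replies only come
# back if the far side also has a route toward 192.0.2.0/24.
```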
Again: In this thread only other community members are participating. If you don't want to purchase additional subscriptions for migration (which is understandable) but don't want to switch repos to non-enterprise (for whatever reasons) you need...
That is almost never so. Ceph requires fast networks and SSDs to be performant. The MD3200i doesn't even support anything faster than 1Gbit, and yet will yield more satisfying results at the cost of entry (likely free or next to free). For your use case...
Post logs from the storage and the host for the time period the drop occurred. It would also be good to look at your /etc/network/interfaces file, as well as the network settings from the storage (not just the IP, but connection speed and MTU).
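On the host side, something like this gathers it all (timestamps and the interface name are placeholders; adjust to the actual drop window):

```shell
# Kernel messages (link flaps, iSCSI/NFS errors) around the drop:
journalctl -k --since "2024-05-01 13:50" --until "2024-05-01 14:10"
# Error/drop counters and current state of the storage-facing NIC:
ip -s link show eth0
# Negotiated speed and duplex, to compare against the storage side:
ethtool eth0
# And the interface config itself:
cat /etc/network/interfaces
```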
I'm guessing...
Nodes failing isn't the issue. The quorum device is meant as a defense against a silo-connectivity "tie". Anything short of a room losing connectivity would be handled normally without it.
There are MANY. A simple script of freeze, rdiff signature, rdiff delta, thaw would get you where you want to be. It would be harmless to run it nightly; hell, even every hour. You could also condition the run on a modified-date check if that...
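A sketch of that freeze / delta / thaw sequence. VMID, disk path and backup paths are placeholders; it assumes the QEMU guest agent is running in the VM and librsync's `rdiff` is installed on the host:

```shell
#!/bin/bash
set -e
VMID=100
DISK=/var/lib/vz/images/100/vm-100-disk-0.raw   # raw disk image (placeholder)
SIG=/backup/vm-100.sig                          # signature from the previous run
DELTA=/backup/vm-100-$(date +%F).delta

qm guest cmd "$VMID" fs-freeze-freeze    # quiesce guest filesystems
rdiff delta "$SIG" "$DISK" "$DELTA"      # delta of current disk vs. old signature
rdiff signature "$DISK" "$SIG"           # refresh signature for the next run
qm guest cmd "$VMID" fs-freeze-thaw      # unfreeze as soon as possible
```

Keep the frozen window as short as you can; everything except the two `qm guest cmd` calls could also be moved outside the freeze if you snapshot first.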
More modern than what? My hardware is usually 2-5 years old before replacement.
I wouldn't know. I have no use case for this.
Passing cpu=host to the guest allows Windows to enable vulnerability mitigation code (Spectre, Meltdown, etc.), which is...
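For reference, setting and verifying this on a VM (VMID 100 is an example):

```shell
# Expose the host CPU's full feature set to the guest:
qm set 100 --cpu host
# Confirm the setting took:
qm config 100 | grep ^cpu
```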
https://learn.microsoft.com/en-us/windows-hardware/design/minimum/windows-processor-requirements
You CAN make Windows 11/2025 work with an older CPU, but the consequence is slow performance. No amount of hypervisor tweaking is going to fix that.
I don't see where anyone suggested that. You can either run your cluster using the subscription repo (slower, stable) or the no-sub repo (quicker, less, umm, stable). In practice, the no-sub repo is stable enough for production, certainly in mine...