OK, in that case you need to pay special attention to your network design.
You have, at MINIMUM, the following disparate network functions:
1. corosync
2. ceph public
3. ceph private
4. NFS payload
5. Internet/service network
6. BMC
Commingling...
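Separating those usually comes down to dedicated links plus VLANs. A rough /etc/network/interfaces sketch, just to show the shape of it (NIC names, VLAN IDs, and subnets are all made up; adapt to your hardware):

```
auto eno1
iface eno1 inet static
    address 10.0.10.11/24      # corosync - dedicated, low-latency link

auto eno2
iface eno2 inet manual

auto eno2.20
iface eno2.20 inet static
    address 10.0.20.11/24      # ceph public

auto eno2.30
iface eno2.30 inet static
    address 10.0.30.11/24      # ceph private (replication)

auto eno2.40
iface eno2.40 inet static
    address 10.0.40.11/24      # NFS payload

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0                # internet/service network
```

The BMC normally lives on its own physical management port, so it stays out of this file entirely.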
Ahh, makes sense. Will the NFS boot apply to the workloads deployed on this hypervisor? If so, don't bother with Ceph at all at this stage, since you already have storage. Your hardware is perfectly adequate for workload performance, but Ceph on...
There is nothing "special" about the VxRail hardware; if the purpose of the exercise is to prove it "works," I can save you the trouble: it works.
The better question is, do you have a better description of the "concept" here? As others noted...
I suppose maybe I didn't understand what the problem was. I had understood you to not want to transfer backups when no change was present.
You could have started here and made the whole thread unnecessary. You don't actually NEED vzdump at all to...
The issue isn't EXACTLY the baseline virtual CPU model (although this comes into play too) but rather the presence/absence of specific feature flags and/or hardware vulnerability mitigations.
The x86-64-vX models essentially are presets for flags, and...
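For illustration, this is what picking a preset looks like, and how to see what the host actually exposes (VMID 100 is made up; x86-64-v2-AES is one of the preset models current PVE ships):

```
# pin the guest to a portable flag preset so it can migrate across mixed hosts
qm set 100 --cpu x86-64-v2-AES

# list the feature flags the host CPU actually advertises
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort | less
```

If the preset asks for a flag the host doesn't have, the guest won't start on that host, which is the whole point of choosing a conservative baseline.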
You're trying to reinvent the wheel.
Modern backup strategies are differential, which means that they are
- content aware (via changed-block tracking, CBT)
- only transfer the changes
the "simple" vzdump process is not content aware, so you would have to resort to...
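To make that concrete, without CBT you're reduced to something like a whole-file checksum check, which can only skip the transfer when literally nothing changed (paths here are hypothetical stand-ins for a real vzdump archive):

```shell
# crude file-level "differential" check - hash the whole archive and
# only skip the transfer if the hash matches the last run
SRC=/tmp/demo-vzdump.tar        # stand-in for a real vzdump archive
STATE=/tmp/demo-vzdump.sha256   # last-known checksum

echo "pretend this is a vzdump archive" > "$SRC"

NEW=$(sha256sum "$SRC" | awk '{print $1}')
OLD=$(cat "$STATE" 2>/dev/null || true)

if [ "$NEW" != "$OLD" ]; then
    echo "archive changed - transfer it"
    echo "$NEW" > "$STATE"
else
    echo "archive unchanged - skip the transfer"
fi
```

Note the failure mode: change one byte inside the guest and you re-transfer the entire archive, which is exactly what CBT-based tools avoid.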
1. LXC.
2. "Decent" speeds are very relative. Your TPS on this system will be abysmal in the best of cases, and will drop as soon as you start hammering the system.
3. Running the container in unprivileged mode.
4. Since all you need is...
Your use case is more suited to SnapRAID or Unraid. These don't integrate with PVE in any way, but they will work normally like any directory-type store.
I misspoke.
PVE does not have any such functionality. What I MEANT was to do it at the local Debian level (and since that's where the entirety of the PVE stack resides, it was convenient shorthand).
There are a bunch of tutorials for "setting up...
Yes. In your use case, you have the option to have Samba served either at the PVE level or in a container/VM. Having PVE serve Samba works well in a homelab environment, BUT you will need to manage users and access by hand. Having a "NAS" distro...
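If you do go the PVE-level route, it's just Debian underneath, so the by-hand part looks roughly like this (share path and user are made up):

```
apt install samba
smbpasswd -a alice            # every user managed manually, one at a time

# then append a share to /etc/samba/smb.conf, e.g.:
# [tank]
#     path = /tank/share
#     valid users = alice
#     read only = no

systemctl restart smbd
```

Fine for a homelab; tedious the moment you have more than a handful of users, which is where the NAS-distro-in-a-VM approach earns its keep.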
Ceph is the only "all features" supported shared solution easily available for PVE. It's also the most heavily worked on for other virt platforms such as XCP-ng and various flavors of KVM. Your decision tree going forward heavily depends on WHY...
As I mentioned before, options 1 and 2 are available to you. Option 3 is not, at least not in a non-hacky way. Options 1 and 2 work the same no matter whether you choose to use one, the other, or both, and don't interact/interfere with each other...
ZFS over iSCSI in PVE is a specific scheme that implies:
a) There is an external storage device
b) You can log in to this server via SSH as root (this excludes Dell, NetApp, Hitachi, etc.)
c) There is internal storage in that server (HDD, SAS...
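When those conditions hold, the storage definition ends up looking something like this in /etc/pve/storage.cfg (names, portal, and target are made up; the iscsiprovider value depends on what the target side actually runs):

```
zfs: external-zfs
    pool tank
    portal 192.168.10.50
    target iqn.2003-01.org.linux-iscsi.storage:sn.example
    iscsiprovider LIO
    blocksize 8k
    sparse 1
    content images
```

PVE SSHes into the box to carve a zvol per disk and exports it over iSCSI, which is why the root-SSH requirement is non-negotiable.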
I see. So you really do have two independent iSCSI hosts (targets). There should not be any conflict between the two pools.
Post logs (dmesg, iscsiadm -m session, lsblk, etc.) for the host with the failure and we can go from there.
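Something like this is enough to start with, run on the failing host:

```
dmesg | tail -n 100                  # recent kernel messages - look for scsi/iscsi errors
iscsiadm -m session -P 3             # sessions plus the scsi devices attached to each
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # how the LUNs actually show up as block devices
```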
a SAN would be a shared device of some sort. when you say you have "two" SANs, do you mean you have two boxes serving independent iscsi luns, or two boxes in a failover capacity (meaning one set of luns?)
In either case, this becomes a simple...
Post the contents of /etc/network/interfaces for all 3 nodes. Also double-check each machine's hosts file and make sure they contain the same records for all 3 machines, and that they all match the output of hostname for each.
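A quick way to eyeball that on each node:

```
hostname                        # the name this node thinks it has
getent hosts "$(hostname)"      # what the hosts file resolves that name to
grep -v '^#' /etc/hosts         # the records to compare across all 3 nodes
```

If any node resolves its own name to 127.0.1.1 instead of its cluster address, corosync will misbehave.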
I feel you. This leaves you with two options:
1. Do not use pveceph (which you totally CAN).
2. Open a feature suggestion here: https://bugzilla.proxmox.com/enter_bug.cgi
Your issues are larger than can be corrected with dpkg/apt. You have two choices here:
1. Regress EVERYTHING you did on this system (software installed, kernel command-line arguments, etc.) until you can have that command complete without blowing up your...
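To take inventory of what was changed before you start walking it back, something like:

```
apt-mark showmanual             # packages you installed by hand
cat /proc/cmdline               # kernel arguments currently in effect
ls /etc/apt/sources.list.d/     # third-party repos that may have dragged things in
```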