your use case is more suited for snapraid or unraid. these don't integrate with PVE in any way, but they will work normally like any directory type store.
I misspoke.
PVE does not have any such functionality. What I MEANT was to do it at the local debian level (and since that's where the entirety of the pve stack resides, it was convenient shorthand)
there are a bunch of tutorials for "setting up...
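whichever tutorial you follow, the last step is always the same: once the array is mounted at the debian level, point PVE at it as a plain directory store. a sketch (the storage id "pool0" and the path /mnt/pool are placeholders, use whatever your setup actually mounts):

```shell
# expose an existing mountpoint to PVE as a directory-type store
pvesm add dir pool0 --path /mnt/pool --content images,backup,iso

# confirm it shows up and is active
pvesm status
```

from PVE's point of view this is just a directory; snapraid/mergerfs/whatever does its thing underneath without PVE knowing or caring.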
Yes. In your use case, you have the option to have samba served either at the PVE level or in a container/vm. having PVE serve samba works well in a homelab environment, BUT you will need to manage users and access by hand. having a "NAS" distro...
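to give you an idea of what "manage users and access by hand" means at the PVE host level, here's a minimal sketch (share name, path, and user are all placeholders):

```shell
# install samba directly on the PVE host (debian underneath)
apt install samba

# append a minimal share definition
cat >> /etc/samba/smb.conf <<'EOF'
[tank]
   path = /mnt/tank
   valid users = alice
   read only = no
EOF

# samba keeps its own password database, so every user needs
# BOTH a unix account and an smbpasswd entry, maintained by hand
adduser --no-create-home --disabled-login alice
smbpasswd -a alice

systemctl restart smbd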
Ceph is the only "all features" supported shared solution easily available for PVE. It's also the most heavily worked on for other virt platforms such as XCP-NG and various flavors of kvm. Your decision tree going forward heavily depends on WHY...
As I mentioned before, options 1 and 2 are available to you. option 3 is not, at least not in a non hacky way. options 1 and 2 work the same no matter whether you choose to use one, the other, or both, and don't interact/interfere with each other...
ZFS over iSCSI in PVE is a specific scheme that implies:
a) There is an external storage device
b) You can log in to this server via SSH as root (this excludes Dell, Netapp, Hitachi, etc)
c) There is internal storage in that server (HDD, SAS...
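when all of those hold, the PVE side boils down to a single storage.cfg entry along these lines (every value here is a placeholder; the iscsiprovider must match whatever target software the external box actually runs):

```
zfs: zfs-san
        portal 192.0.2.10
        target iqn.2001-04.com.example:tank
        pool tank
        iscsiprovider LIO
        content images
        sparse 1
```

PVE then SSHes into the box as root to carve a zvol per disk and exports it over iscsi on the fly, which is exactly why requirement (b) rules out the big-name appliances.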
I see. so you really do have two independent iSCSI hosts (targets). There should not be any conflict between the two pools.
post logs (dmesg, iscsiadm -m session, lsblk, etc) for the host with the failure and we can go from there.
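something like the following, run on the failing host, usually captures enough to work with:

```shell
# kernel-level iscsi/block errors
dmesg | tail -n 100

# active sessions, with attached scsi devices at print level 3
iscsiadm -m session -P 3

# what block devices the kernel actually sees, and over which transport
lsblk -o NAME,SIZE,TYPE,TRAN

# pool state on top of those devices (if zfs is in the picture)
zpool status
```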
a SAN would be a shared device of some sort. when you say you have "two" SANs, do you mean you have two boxes serving independent iSCSI LUNs, or two boxes in a failover capacity (meaning one set of LUNs?)
In either case, this becomes a simple...
post the contents of /etc/network/interfaces for all 3 nodes. also double check each machine's hosts file and make sure they contain the same records for all 3 machines, and that they all match the output of hostname for each.
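for reference, the hosts file should look something like this (names and addresses are placeholders), and it should be byte-for-byte identical on all three nodes:

```
# /etc/hosts -- same records on every node
10.0.0.11  pve1.example.local  pve1
10.0.0.12  pve2.example.local  pve2
10.0.0.13  pve3.example.local  pve3
```

and `hostname` on each node should return the short name (pve1, pve2, pve3) that its own entry carries.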
I feel you. this leaves you with 2 options-
1. do not use pveceph. which you totally CAN
2. open a feature suggestion here: https://bugzilla.proxmox.com/enter_bug.cgi
Your issues are larger than can be corrected with dpkg/apt. you have two choices here:
1. regress EVERYTHING you did on this system (software installed, kernel line arguments, etc) until you can have that command complete without blowing up your...
ceph usage gives you raw utilization numbers. if you use ceph df detail it will give you actual statistics including compression (which I suspect is the cause of the difference between du and df)
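side by side, for anyone following along:

```shell
# raw cluster utilization plus per-pool stored/used
ceph df

# adds per-pool compression savings, omap data, quota, etc.
ceph df detail
```

the USED column counts post-compression raw bytes across all replicas, while STORED (and what du sees inside the filesystem) counts logical data, so the two diverge whenever compression kicks in.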
-cpu x86-64-v3,... and -cpu host,... are not the same thing. x86-64-v3 is a named CPU model / ABI baseline, while host is host passthrough. In QEMU terms, named models expose a predefined, stable feature set, and host passthrough exposes the host...
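in PVE vm config terms (the vmid and comments are illustrative):

```
# /etc/pve/qemu-server/<vmid>.conf

cpu: x86-64-v3   # named model: fixed, stable feature baseline;
                 # guest sees the same CPU on any host that meets v3,
                 # so live migration across mixed hardware is safe

cpu: host        # passthrough: guest sees (nearly) every host feature;
                 # best performance, but migration is only safe between
                 # hosts with identical CPUs
```

pick one of the two lines, not both; the tradeoff is feature exposure vs migration portability.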
This can be done without any downtime. and yes, you want to make sure that the running ceph version is available on the next distro, so before you start, upgrade your ceph to squid. This process is non disruptive and will not...
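before touching the distro, it's worth confirming every daemon is actually running squid and the cluster agrees:

```shell
# every mon/mgr/osd should report 19.x (squid); anything older
# still needs a restart or upgrade first
ceph versions

# the cluster-wide floor should also say squid
ceph osd dump | grep require_osd_release
```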
that should be your first option ;)
PVE8 is not EOL yet, and even when it is (this August) you can keep running for some time after. might be the better option, especially if you don't need anything pve9 specific. This will give you time to...
so here's what I'd suggest-
don't use 10.13.30.x for corosync at all.
assign arbitrary addresses to bond0 and bond1; --edit- ON DIFFERENT SUBNETS. ideally, they should be on separate vlans too. use those addresses as ring0 and ring1.
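something along these lines (addresses and subnets are arbitrary placeholders, the point is that the two links never share a subnet):

```
# /etc/network/interfaces sketch -- repeat with unique host addresses on each node

auto bond0
iface bond0 inet static
    address 10.100.0.1/24    # ring0 network, its own subnet/vlan

auto bond1
iface bond1 inet static
    address 10.200.0.1/24    # ring1 network, different subnet/vlan
```

then point link0/link1 (ring0_addr/ring1_addr in corosync.conf) at those addresses so corosync has two fully independent paths.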