Gluster is very easy to set up, and performs okay with 3 nodes (better than Ceph, that is).
For only 3 hosts, I would recommend Gluster.
What is your backup destination (NAS, SMB)?
I usually use Nagios for RAID monitoring, but first you would need to install MegaCli for that to work.
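A minimal Nagios-style check sketch, assuming MegaCli's `-LDInfo -Lall -aAll` output prints a `State : Optimal` line per logical drive (the output format varies between MegaCli versions, so verify on your box first):

```shell
# check_raid: reads MegaCli logical-drive info on stdin,
# prints OK/CRITICAL in Nagios style and exits 0 or 2.
check_raid() {
    awk '/^State/ { total++; if ($0 ~ /Optimal/) ok++ }
         END {
             if (total > 0 && ok == total) { print "OK"; exit 0 }
             else { print "CRITICAL"; exit 2 }
         }'
}
```

Wire it up as something like `MegaCli -LDInfo -Lall -aAll | check_raid` in an NRPE command definition.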
I don't have any special alignment on LVM on any of my servers, and currently don't have any problems with them.
If you're scared about corruption, get a UPS...
I would recommend thin-LVM for VMs/CTs.
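For reference, the storage definition in /etc/pve/storage.cfg ends up looking roughly like this (`pve`/`data` are the installer defaults; your VG and thin pool names may differ):

```
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
```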
Add a disk, migrate the data, rebuild GRUB (if Linux), then reboot and test.
This is a ZFS limitation, not a replication problem.
I use VLANs and VLAN-aware interfaces, but don't have HA.
Currently it's working okay with 5 servers in the cluster.
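For reference, a VLAN-aware bridge in /etc/network/interfaces looks roughly like this (`eno1`/`vmbr0` are just example names, and you may want to narrow `bridge-vids` to the VLANs you actually use):

```
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```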
I exported the VMs as VHD (OVA) and reimported them with qm importdisk. Works okay, except for PV
This is what I would do: add another disk, mount it from any Linux, recreate the partitions, and cp -a from / and all the other partitions.
Rebuild the initramfs and...
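One step that bites people after the cp -a: the freshly created partitions have new UUIDs (see blkid), so the copied /etc/fstab still points at the old ones. A throwaway sketch of patching that with sed (`fix_fstab_uuid` is just a made-up helper name):

```shell
# Replace an old partition UUID with the new one in the copied fstab.
# $1 = path to fstab, $2 = old UUID, $3 = new UUID
fix_fstab_uuid() {
    sed -i "s/UUID=$2/UUID=$3/" "$1"
}
```

After that, chroot into the copy, rebuild the initramfs, and reinstall GRUB on the new disk.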
systemd is bad, but SysVinit is deprecated (at least for me). They should have gone with either OpenRC (in Gentoo it works flawlessly for thousands of...
I've had the same problem on a Samsung 750 SSD; try to wipe them.
Don't go with Rufus, use ImageUSB. This is what I do.
My solution is to find where the inodes are filling up:
for i in /*; do echo "$i"; find "$i" | wc -l; done
Then go deeper into the problematic folder.
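To save some eyeballing, the same loop can be sorted so the biggest inode consumer ends up last (`inode_usage` is just a throwaway helper name; -xdev keeps find from crossing into other filesystems):

```shell
# Count inodes (files + dirs) under each entry of the given directory,
# one "count path" line each, sorted numerically so the worst is last.
inode_usage() {
    for i in "$1"/*; do
        printf '%s %s\n' "$(find "$i" -xdev 2>/dev/null | wc -l)" "$i"
    done | sort -n
}
```

Run it as `inode_usage /`, then repeat on the worst subdirectory until you find the culprit.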
I also don't like systemd, but we have to be realistic and say that almost everybody is already on systemd (except Gentoo, Slack, Void and Alpine i...
I used the disks I had at my disposal :D
Why are you using DRBD and not Gluster? To me Gluster looks like a better solution.
I've just tested my node: AMD 5370, 8 GB RAM, 2x500 GB in RAID0, max 35 W, average around 25 W for the whole machine. Pretty nice.