I rent a dedicated server in a remote datacenter, and now its hard disk is failing.
The disk is going to be replaced, and the old disk will stay attached for a few hours.
Since my storage is over 80% full, I can't back up the LXC container with the biggest disk (about 3 TB) to the local disk.
For small containers, I...
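One workaround I'm considering for the big container: stream the backup straight to another machine so it never touches the local disk. A minimal sketch, assuming a container with ID 101 and a reachable host `backup.example.com` (both placeholders), using vzdump's `--stdout` option:

```shell
# Stream a container backup over SSH instead of writing it locally.
# CT 101, backup.example.com, and the target path are placeholders.
vzdump 101 --mode snapshot --stdout \
  | ssh root@backup.example.com 'cat > /srv/dump/vzdump-lxc-101.tar'
```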
The main VM in our setup is an ERP database. I have an irrational fear of bit rot; running it on ZFS gives me some confidence. I've never trusted hardware/software RAID.
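Part of that confidence comes from checksums plus scrubbing: a periodic scrub makes ZFS read every block, verify it, and repair silent corruption from the healthy mirror copy. A minimal sketch, with the pool name `rpool` assumed:

```shell
# Re-read every block in the pool, verify checksums, and repair
# any bad copies from the other side of the mirror.
zpool scrub rpool
zpool status rpool   # shows scrub progress and any checksum/repair counts
```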
I have not made any progress yet on the lab system, as it has limited memory and had some problems due to lack of...
After reading some documentation, I have finally configured DRBD9 on ZFS with Proxmox 6 on two nodes in the lab.
Now my only concern is that it's not officially supported by Proxmox and probably has a very small user base compared to Ceph.
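For anyone trying the same setup, a quick way to verify resource state after bringing DRBD9 up; the resource name below is a hypothetical example, not from my actual config:

```shell
# Check replication/connection state of a DRBD9 resource.
# "vm-100-disk-1" is a hypothetical resource name.
drbdadm status vm-100-disk-1
# Once the initial sync finishes, the local and peer disks should
# report UpToDate and the connection should report Connected.
```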
Update: the live-migration and storage-replication network needs to be configured separately; it is not tied to the corosync network.
I thought link0 would be used for storage replication and migration. I set the 10.0.0.x network as the only connection for corosync, but when I migrate VMs or create a storage replication job, traffic still goes through the 192.168.1.x network.
How can I make migration and storage replication go through a specific...
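As far as I can tell, migration traffic can be pinned in the cluster-wide `/etc/pve/datacenter.cfg`, and on current Proxmox releases storage replication should follow the same setting. A sketch for the 10.0.0.x network (check the file first so you don't duplicate an existing `migration:` line):

```shell
# Pin migration traffic (and, I believe, storage replication on
# current PVE releases) to the 10.0.0.x network. "secure" tunnels
# the traffic over SSH. datacenter.cfg is cluster-wide.
echo 'migration: secure,network=10.0.0.0/24' >> /etc/pve/datacenter.cfg
```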
I tested this in the lab environment today.
It works as expected; however, the most burdensome part is manually migrating every VM/LXC as guided there.
I have configured a two-node cluster using the web GUI and set up replication from pve1 -> pve2.
In the cluster config I set link0 as the main link; however, when I test it, traffic uses link1.
I downloaded a 500 MB file on a virtual machine on pve1 and measured the bandwidth using iftop. It's definitely using link1...
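For reference, this is the kind of check I ran; the interface names are assumptions for link0/link1, so adjust them to your NICs:

```shell
# Watch live per-connection bandwidth on one interface while a
# migration or replication job runs. ens18/ens19 are assumed NIC
# names for link0 (10.0.0.x) and link1 (192.168.1.x).
iftop -n -i ens19
```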
So I am not going to mix NVMe + SATA SSDs in Ceph. That rules out option 1.
I had not read about DRBD9 before. Thanks for suggesting it; it seems like a very good option for realtime HA. I will read further about it.
What do you mean by "never get to 10 min downtime"? Do you mean I will almost...
Hello, I have been running a 1U Xeon E5-2620 v4 server with 4 SATA SSDs (ADATA 1 TB, very slow for ZFS :( ) configured as a ZFS mirrored stripe. It has run well for me, but it doesn't have enough IO for my VM needs.
So we recently purchased an AMD EPYC 7351P with 8 NVMe SSDs (Intel P4510 1 TB) to solve...
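For the curious, "mirrored stripe" here means striped mirrors (RAID10-style). A sketch of that layout for the 8 NVMe drives; the device names are placeholders, and in practice you'd use stable /dev/disk/by-id paths:

```shell
# RAID10-style pool: four 2-way mirrors striped together.
# Device names are placeholders; prefer /dev/disk/by-id paths.
zpool create -o ashift=12 tank \
  mirror nvme0n1 nvme1n1 \
  mirror nvme2n1 nvme3n1 \
  mirror nvme4n1 nvme5n1 \
  mirror nvme6n1 nvme7n1
```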
I have a Windows guest, and after a power cut it no longer boots: it gives a BSOD and then drops into recovery mode. When I run "list disk" in diskpart, it says "there are no fixed disks to show".
The host pool is a ZFS mirrored stripe with a ZIL, and there is no UPS.
Is there a way to fix this?
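In case it helps with diagnosis, these are the host-side checks I can run; a minimal sketch assuming the pool is named `rpool` and the guest is VMID 100 (both assumptions):

```shell
# Host-side sanity checks after the power loss.
zpool status -v rpool   # pool/vdev health and any permanent data errors
zpool scrub rpool       # re-read everything, repair from the mirror copies
qm config 100           # confirm the guest's disk is still attached (VMID assumed)
```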