Hi guys.
For test purposes I've set up a small cluster with two nodes and one quorum device.
Everything is working pretty well, including live migration and recovery in case of a fault.
Min/Max Replicas: 2
Max restart: 1
Max relocate: 10
KVM HW virtualization: enabled
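For context, the HA resource is configured more or less like this (vm 101 is just a placeholder, and I'm quoting the ha-manager options from memory, so the exact option names may differ slightly):

# HA resource for the test VM: restart once on the same node, then allow relocation (placeholder VM ID)
ha-manager add vm:101 --state started --max_restart 1 --max_relocate 10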
What bothers me is the time Proxmox needs to restart the VM on a different node: it really takes a long time, and that might not be acceptable in production.
Any ideas?
Last question:
The two nodes are identically equipped with an 8c/16t CPU, 128 GB RAM and three SSDs. Proxmox is installed on the first disk; the remaining two disks on each node are used for Ceph.
Is this the best I can do? Is it true that Ceph provides proper redundancy both across the nodes and within a single node?
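To make that question concrete, this is roughly the kind of pool setup I have in mind (pool name, PG count and the replica values are just examples of what I understand a two-node layout to look like, not necessarily what I should use):

# CRUSH rule with host as the failure domain, so the two copies land on different nodes
ceph osd crush rule create-replicated replicated-host default host

# replicated pool with two copies (one per node); min_size 1 keeps I/O running if a node is down
ceph osd pool create vm-pool 128 128 replicated replicated-host
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1

# check the resulting replication settings
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size

Is that the right way to think about it, or is there a better layout for two nodes with two OSDs each?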