I've been digging through the documentation and plenty of web searches and still have a few questions I haven't been able to get solid answers to, so I thought I'd post here again.
I have a 3-node PVE cluster with Ceph working nicely.
1) I have set up HA and can successfully shut down/reboot a PVE node with an HA migration over to another node. What I've observed is that the guests are shut down cleanly, migrated, and then started up on another appropriate node. Is there a way to get live migrations, without the shutdown/startup, for these operations? I have confirmed that I can manually live-migrate each machine, but I'm hoping we can designate some or all guests to live-migrate automatically.
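For reference, the closest knob I've come across so far is the HA shutdown policy in /etc/pve/datacenter.cfg. I haven't tested it yet, so treat the exact line below as my assumption rather than something I've confirmed works:

    # /etc/pve/datacenter.cfg -- my reading is that "migrate" should
    # live-migrate HA-managed guests away on a clean shutdown/reboot
    ha: shutdown_policy=migrate

Is that the intended way to get this behaviour, or is there a per-guest setting I'm missing?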
2) In the event of an unexpected power loss or failure on a node, all of its guest VMs are essentially unavailable until they are spun up on another node, but there doesn't appear to be any indication of which guests are affected or how their recovery is progressing. Is there a log or an element in the UI where I can watch the status of a failed node's migrations?
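To clarify what I'm after, this is roughly how I've been trying to watch a failover from the command line so far; whether this is even the right place to look is part of my question (the comments reflect my understanding, not confirmed behaviour):

    # overall state of HA-managed guests; I believe the affected services
    # should show something like fence/recovery while the node is down
    ha-manager status
    # follow what the HA stack is doing on the surviving nodes
    journalctl -f -u pve-ha-crm -u pve-ha-lrm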
3) I have several different types and sizes of disks, and I want to keep them in separate Ceph pools rather than combined into one single pool. I have found that I can easily create CRUSH rules based on device class (hdd, ssd, nvme), and I see that rules can also be built by grouping specific OSDs. Does anyone have experience with this, or a link that covers, ELI5-style, how to go about setting it up? For example, with 2TB 15k mechanical disks and 2TB 7200rpm mechanical disks, I would like two pools so I can place certain higher-priority guests on the faster storage, so to speak.
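To make the question concrete, here's the rough sequence I think I'd need for the two-spinner case, pieced together from the Ceph docs. The custom class name, rule/pool names, and OSD numbers are placeholders/assumptions on my part; I haven't actually run this yet:

    # re-tag the 15k spinners with a custom device class (osd.0/osd.1 are placeholders)
    ceph osd crush rm-device-class osd.0 osd.1
    ceph osd crush set-device-class hdd-15k osd.0 osd.1
    # one replicated rule per class, with host as the failure domain
    ceph osd crush rule create-replicated rule-hdd-15k default host hdd-15k
    ceph osd crush rule create-replicated rule-hdd-7200 default host hdd
    # then a pool against each rule
    ceph osd pool create pool-15k 128 128 replicated rule-hdd-15k
    ceph osd pool create pool-7200 128 128 replicated rule-hdd-7200

Does that look like a sane approach, or would grouping specific OSDs directly into a rule be the better way to go?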