Proxmox CT not able to start after shutting down the node

tejasthakur123

Oct 4, 2023
Currently I have a 2-node Proxmox cluster with a Raspberry Pi as a QDevice (on which I am also running OpenMediaVault).

I created an HA group and added the CT to it.
To test HA, I shut down one node. The CT migrated to the other node, but it failed to start.

Action taken
I removed the CT from the HA configuration and tried to start it, but it shows:
>>command 'ha-manager migrate ct:101 proxmox002' failed: exit code 255
and also: no such logical volume pve/vm/101-disk-0

I am fairly sure this is because the local-lvm disk cannot be replicated across nodes.


Question
Is there any way I can set up replication for local-lvm?

I also tried to create a replication job, and it shows the following error:

missing replicate feature on volume 'local-lvm:vm-100-disk-0' (500)
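
For anyone hitting the same errors, a quick sketch of how to double-check where the container's disk actually lives (the CT ID 101 and the 'pve' volume group are taken from the messages above):

# show which storage the container's rootfs is configured on
pct config 101

# list the volumes Proxmox sees on the LVM-thin storage
pvesm list local-lvm

# list the logical volumes actually present in the 'pve' volume group on this node
lvs pve

On an LVM-thin storage like local-lvm the volume only exists on the node it was created on, so after a migration the other node has nothing to start from.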





Current configuration: (see attached screenshot)
 
Hi,
Question
Is there any way I can set up replication for local-lvm?
Unfortunately, currently only ZFS supports replication. Shared storage is recommended, because when using HA with replication you can lose the data written since the last replication in case of a failure.
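
For completeness, a minimal sketch of what such a replication job could look like once both nodes have a ZFS storage with the same name (the 15-minute schedule is an assumption, and the container's disk would first have to be moved onto that ZFS storage):

# create a replication job for CT 101 towards the second node, running every 15 minutes
pvesr create-local-job 101-0 proxmox002 --schedule "*/15" --comment "replicate CT 101"

# check that the job runs and when it last synced
pvesr status

The same job can also be created in the GUI under Datacenter -> Replication.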
 
Thanks Fiona,

I am thinking about the Ceph filesystem and will order a couple of flash drives (no more HDDs/SSDs left).
I hope that will work.


Tejas
 
No, Ceph needs 3 nodes. It's just going to be painful and you will have issues otherwise. Having a QDevice for cluster quorum is fine, but Ceph is designed for at least 3 actual nodes.
 
Ciao,
I have the same scenario and configuration, and please... stop telling us Ceph needs 3 nodes: we know it!
We just want to have it working as it is, because it's a lab.

Please let us know which is the correct parameter in ceph.conf to automatically mark the OSD and host down with this configuration; it's not recommended... but it works!

If I power off one node cleanly, the host and OSD are marked down, quorum from the Pi and the other node works well, and my VMs come up on the other node without problems.

If I "dirty" power off one node, this doesn't happen, as there is a really high timeout involved (the host and OSD are still marked up after a long period of time).

Please just let us know that parameter and the right timeout to apply.
 
Well, you might know it, but not everybody does, so I'm just warning people about the pitfalls that come with it. If you use software in ways it's not designed to be used, I won't bother trying to reproduce the issue and search for the information. I do not know what configuration you need off the top of my head... but this is an open community forum, so everybody who wants to can help you.
 
Ciao Fiona, your answer is right :) but my answer is just one ceph.conf line away.
 
Hello Spewk,

So what is the alternative?

I am thinking of NFS and hosting the rootfs on that, but then it again depends on a third-party service outside of Proxmox.

As you mentioned, if I shut down the node "dirty" (just press the power button), I see my container migrated to the other node, but the filesystem is missing and that is why it fails to start.

I just wanted to find a solution to that.


Tejas
 
Ciao Tejas,
Me too, I want to find that damn ceph.conf parameter to get useful HA.

I'm studying the Ceph monitor/OSD interaction, but I still haven't found the right configuration for that.
As I said... it's just a line away... we just need to find the right one.
 


Got it.
Keep us posted here.
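
For anyone experimenting with this in a similar lab: these are untested guesses at the knobs involved, not a confirmed answer, and they assume the Ceph monitors themselves still have quorum after the node loss.

# A monitor normally only marks an OSD down once failure reports arrive from
# mon_osd_min_down_reporters (default 2) different hosts; with a single surviving
# OSD host that threshold can never be reached, so the monitor falls back to
# waiting mon_osd_report_timeout (default 900 s).
ceph config set mon mon_osd_min_down_reporters 1

# How long a down OSD stays "in" before being marked out (default 600 s).
ceph config set mon mon_osd_down_out_interval 60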
 
I am thinking of NFS and hosting the rootfs on that, but then it again depends on a third-party service outside of Proxmox.
You could go for ZFS with replication.
 
However, in this case the storage will be on a single node. What if that node goes down?
That's why you'd need to set up replication, so that the data is present on both nodes.
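
Once a replication job like the one sketched earlier has completed at least once, the container could be put back under HA. A sketch, with the HA group name left as a placeholder and keeping in mind the earlier caveat that writes since the last sync can be lost on failover:

# put CT 101 back under HA management in an existing HA group
ha-manager add ct:101 --state started --group <your-ha-group>

# verify the resource state
ha-manager status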
 
