[SOLVED] HA status none

RobertCyberS

New Member
Feb 7, 2024
Hello,

I have just created a cluster and Ceph in Proxmox VE, but HA doesn't work: it says status "none" even after I add the guest to HA, and I can't migrate it to other nodes. How can I fix this? Also, I have never checked before whether it's normal that Proxmox automatically creates an LVM when I make a Ceph pool. Is it? Because it doesn't let me create an OSD. When it creates the LVM at the start, I delete it to make the Ceph OSD, but after I create the OSD it also creates an LVM. What could cause it not to work with HA?


It should be OK:

(screenshot attached)



It says status "none" after I set it to started:

(screenshot attached)


best regards.
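
For context, here is a minimal sketch of what the same workflow looks like from the CLI, assuming the guest is VM 100 and the target node is called pve2 (both names are assumptions):
Code:
# add the guest as an HA resource and request it to be started
ha-manager add vm:100 --state started
# list the configured HA resources and their requested state
ha-manager config
# show the current manager, node and resource status
ha-manager status
# migrate the HA resource to another node
ha-manager migrate vm:100 pve2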
 
The OSD is laid out as an LVM, but you shouldn't touch that either. You create a Ceph pool, add it as a storage, and put the virtual disks in the Ceph pool, not directly in the LVM (see the sketch below).

Please post the output of pct config 100 and the content of /etc/pve/storage.cfg.
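
As a rough sketch of what that setup normally looks like on the command line, assuming the data disk is /dev/sdb and the pool is named cephpool (both are assumptions):
Code:
# create an OSD on an empty disk (it must not contain partitions, LVM or filesystem signatures)
pveceph osd create /dev/sdb
# create the Ceph pool and add it as a Proxmox storage in one step
pveceph pool create cephpool --add_storages
The resulting entry in /etc/pve/storage.cfg would then look roughly like this:
Code:
rbd: cephpool
        pool cephpool
        content images,rootdir
        krbd 0
The virtual disks go onto that cephpool storage, not onto the LVM volumes that Ceph creates underneath the OSDs.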
 
(screenshots of the requested outputs attached)
 
Hi,
please share the output of the following:
Code:
ha-manager status --verbose
pveversion -v
journalctl -b -u pve-ha-crm.service -u pve-ha-lrm.service
The latter two on all nodes.
 
I have figured out the fix, but thanks for replying. The fix was to clean all the disks on all nodes, as I had installed Proxmox multiple times on all nodes while trying to figure this out.
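
A minimal sketch of one way to clean such a disk, assuming /dev/sdb is the disk that previously held an OSD (the device name is an assumption, and this destroys all data on it):
Code:
# remove leftover Ceph/LVM metadata from the disk
ceph-volume lvm zap /dev/sdb --destroy
# alternatively, wipe filesystem signatures and the partition table
wipefs -a /dev/sdb
sgdisk --zap-all /dev/sdb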