Cluster down, LVMs offline, cluster not ready - no quorum? (500)

mehhos
Member · Jan 27, 2021
In the GUI, under Storage, I can see that the storage is shared and enabled, but I cannot start any KVM guest. Proxmox lost the connection to the storage (external disks); after the connection was back up I get

cluster not ready - no quorum? (500)
I have 6 nodes, but in the GUI I can only see the KVMs on the node I'm logged in to; the other nodes are red.

Max PV 0
Cur PV 1
Act PV 1
VG Size 3.00 TiB
PE Size 4.00 MiB
Total PE 786431
Alloc PE / Size 675840 / 2.58 TiB
Free PE / Size 110591 / 432.00 GiB
VG UUID Y63Opd-m4bI-3B9C-scMx-yITh-eNuM-E61uyb

--- Volume group ---
VG Name vsrappcluster1vg01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 145
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 16
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 TiB
PE Size 4.00 MiB
Total PE 524287
Alloc PE / Size 486656 / 1.86 TiB
Free PE / Size 37631 / 147.00 GiB
VG UUID 5u9Ti5-ynkI-qEdz-3SRs-NYMs-IcCz-ci5Psa

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1941
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 837.50 GiB
PE Size 4.00 MiB
Total PE 214399
Alloc PE / Size 210350 / 821.68 GiB
Free PE / Size 4049 / 15.82 GiB
VG UUID 5s1lCz-5sbJ-aWE0-CX7T-9p4q-yetK-jCLo4O

--- Logical volume ---
LV Path /dev/vsrappcluster1vg01/vm-444-disk-1
LV Name vm-444-disk-1
VG Name vsrappcluster1vg01
LV UUID ReK8cK-7zUF-dBcH-4gqw-N4q0-0A0n-mPUrMJ
LV Write Access read/write
LV Creation host, time vsr-app4, 2023-03-10 11:15:28 +0100
LV Status NOT available
LV Size 200.00 GiB
Current LE 51200
Segments 2
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID okSgJX-Ncff-UGJL-43bR-gUvv-saW5-sMPIyZ
LV Write Access read/write
LV Creation host, time proxmox, 2017-09-01 13:00:59 +0200
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:1

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID B4dnoh-CRHv-WkN4-vx8k-8ncC-gqJ3-mELuE2
LV Write Access read/write
LV Creation host, time proxmox, 2017-09-01 13:00:59 +0200
LV Status available
# open 1
LV Size 96.00 GiB
Current LE 24576
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:0

--- Logical volume ---
LV Name lvmthinpool
VG Name pve
LV UUID EkqcIx-0KFg-UcPI-crvB-3vre-wvw9-rt3OkF
LV Write Access read/write
LV Creation host, time proxmox, 2017-09-01 13:01:00 +0200
LV Pool metadata lvmthinpool_tmeta
LV Pool data lvmthinpool_tdata
LV Status available
# open 2
LV Size 717.50 GiB
Allocated pool data 3.26%
Allocated metadata 2.06%
Current LE 183680
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:4

--- Logical volume ---
LV Path /dev/pve/vm-130-disk-1
LV Name vm-130-disk-1
VG Name pve
LV UUID 4a9rbd-xNFc-xmem-gA5s-81bN-FgZI-Gt2Jo9
LV Write Access read/write
LV Creation host, time vsr-app1, 2021-01-26 12:38:27 +0100
LV Pool name lvmthinpool
LV Status available
# open 0
LV Size 710.00 GiB
Mapped size 3.30%
Current LE 181760
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:6
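
For what it's worth, the VM disk on the shared VG shows "LV Status NOT available" above. I assume that once quorum is back it could be re-activated with something like this (using the VG name from the output above, I have not run it yet):

vgchange -ay vsrappcluster1vg01   # activate all LVs in that VG
lvscan | grep -i inactive         # check whether anything is still inactive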
 
cluster not ready - no quorum? (500)
I have 6 nodes, but in the GUI I can only see the KVMs on the node I'm logged in to; the other nodes are red.
Sounds like network issues.
Basically, if the cluster/node cannot establish quorum (e.g. it loses its network connection to the other nodes), PVE is rendered read-only to prevent split-brain situations.

Please check your network for issues; for a start, a basic ping between all nodes must work.
Can you then also post the output of pveversion -v and pvecm status?
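
For example, something along these lines (the node names are just placeholders, use your own hostnames or IPs):

for n in node1 node2 node3 node4 node5 node6; do ping -c 2 "$n"; done   # basic reachability between nodes
pvecm status      # quorum and membership as seen from this node
pveversion -v     # full package version list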
 
First of all, try accessing all the nodes via the web UI to understand where the cluster split occurred.

This usually happens when Corosync experiences communication issues.

If you see a red dot with an "X" and your server is powered on, it means that Corosync is having problems. Also check the status with "pvestatd status".

[Screenshot attachment: 1690466262416.png]

In the example above, you can see that the split occurred between pve1 and pve2.
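
A few generic checks on a node that shows up red can also help narrow it down (nothing below is specific to this thread, just the usual places to look):

systemctl status corosync pve-cluster pvestatd   # are the cluster daemons running?
corosync-cfgtool -s                              # link status as Corosync sees it
journalctl -u corosync -b | tail -n 50           # recent Corosync log messages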

MM
 
I had this error too and this solution helped me a lot:
Copy these files from a node that is working fine in your cluster to the node that has the issue (proxmox no quorum 500):
scp -r /etc/corosync/* root@xx.xx.xx.xx:/etc/corosync/
scp /etc/pve/corosync.conf root@xx.xx.xx.xx:/etc/pve/
systemctl restart pve-cluster
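
After the restart, a quick check like this should show whether the node rejoined (just a sanity check, not part of the fix itself):

systemctl status corosync pve-cluster   # both should be active (running)
pvecm status                            # look for "Quorate: Yes" and the expected votes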
 
