Error when adding a node to the cluster

haiwan

Hi,
please check the attached images.
We first created the cluster successfully, but later we joined a node to the old cluster using its public IP, and as you can see there is an error.
We want to remove that node, but `pvecm nodes` does not show this pve5 node, and the Join Information button cannot be clicked.
 

Attachments

  • 微信截图_20190619210824.png (104.2 KB)
  • 微信截图_20190619210837.png (40.1 KB)
  • 微信截图_20190619211005.png (91.7 KB)
hi,

can you send the output of:

* pvecm status
* systemctl status pve-cluster
* systemctl status corosync

from 1 working & 1 non-working node

EDIT:

oh and also:

* pveversion -v
 
Sorry,
we have reinstalled the non-working node,
but the old cluster still shows this pve5 node.
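
In case it helps others reading along: a stale entry like this can usually be dropped with `pvecm delnode`, the standard Proxmox procedure for removing a node (run on one of the remaining, healthy cluster nodes; the node name below is taken from this thread). The leftover directory under /etc/pve/nodes is kept on purpose and can be removed manually afterwards:

Code:
# on one of the remaining cluster nodes
pvecm delnode pve5
# optionally clean up the leftover node configuration directory
rm -rf /etc/pve/nodes/pve5

Note that `pvecm delnode` only works while the cluster is quorate, and the removed node must never rejoin with the same identity without being reinstalled first.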
 

Attachments

  • 微信截图_20190619215611.png (62.6 KB)
  • 微信截图_20190619215715.png (157.2 KB)
  • 微信截图_20190619215809.png (180.6 KB)
Code:
root@pve2:~# pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-35
ceph: 12.2.12-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-50
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-41
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-25
pve-cluster: 5.0-36
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-19
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-3
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-50
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
We have resolved it.
But after we added a new node and new OSDs, Ceph reports this notice:

Code:
4 slow requests are blocked > 32 sec. Implicated osds 3,5
1 ops are blocked > 131.072 sec
3 ops are blocked > 32.768 sec
osd.3 has blocked requests > 32.768 sec
osd.5 has blocked requests > 131.072 sec

Code:
Reduced data availability: 20 pgs inactive, 21 pgs peering
pg 3.8 is stuck peering for 73.779606, current state peering, last acting [6,1,5]
pg 3.15 is stuck peering for 93.785466, current state peering, last acting [7,4,3]
pg 3.18 is stuck peering for 72854.007448, current state peering, last acting [6,4,1]
pg 3.1d is stuck peering for 93.785345, current state peering, last acting [7,2,4]
pg 3.1f is stuck peering for 96.960759, current state peering, last acting [7,0,3]
pg 3.2a is stuck peering for 93.785449, current state peering, last acting [7,3,4]
pg 3.2c is stuck peering for 73.779396, current state peering, last acting [6,2,5]
pg 3.2f is stuck peering for 93.785385, current state peering, last acting [7,4,2]
pg 3.32 is stuck peering for 73.779685, current state peering, last acting [6,5,2]
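
For what it's worth, when PGs are stuck peering right after new OSDs come in, the usual first checks are these standard Ceph CLI commands (the PG and OSD ids below are the ones from the output above; adjust to your own):

Code:
# overall health with per-PG and per-OSD detail
ceph health detail
# commit/apply latency of each OSD, to spot a slow disk
ceph osd perf
# why a specific PG is stuck peering
ceph pg 3.18 query
# in-flight operations on an implicated OSD (run on the host of osd.5)
ceph daemon osd.5 dump_ops_in_flight

Peering that never completes after adding a node is also often a network problem between the new OSD host and the rest of the cluster (firewall rules or an MTU mismatch on the cluster network), so that is worth ruling out too.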
 

Attachments

  • 微信截图_20190621130657.png (31.3 KB)
  • 微信截图_20190621130729.png (61.3 KB)
  • 微信截图_20190621130741.png (56.8 KB)