Search results

  1.

    [SOLVED] How to fix corosync Retransmit list ?

    Thank you; after I changed config_version from 26 to 27, the problem was solved (see the corosync.conf sketch after this list).
  2.

    All nodes in cluster have grey question marks except one

    Hey, I had the same problem. This is how I fixed it: I have a 39-node cluster that was never stable (question marks or reboots). 1. Shut down all of the nodes. 2. Start 3 nodes first, then after a few minutes start the rest one by one.
  3.

    [SOLVED] How to fix corosync Retransmit list ?

    Two of my nodes have this showing in syslog: Sep 16 15:38:17 backupkvm01 corosync[2184]: [TOTEM ] Retransmit List: 1a Sep 16 15:38:17 backupkvm01 corosync[2184]: [TOTEM ] Retransmit List: 1b Sep 16 15:38:17 backupkvm01 corosync[2184]: [TOTEM ] Retransmit List: 21 22 23 Sep 16 15:38:17...
  4.

    Reboot 1 node all other 38 nodes into unknow status

    corosync.conf root@g8kvm03:~# cat /etc/pve/corosync.conf logging { debug: off to_syslog: yes } nodelist { node { name: g8kvm01 nodeid: 1 quorum_votes: 1 ring0_addr: 10.0.141.1 ring1_addr: 192.168.141.1 } node { name: g8kvm02 nodeid: 6 quorum_votes: 1...
  5.

    Reboot 1 node all other 38 nodes into unknow status

    pvecm status root@g8kvm03:~# pvecm status Cluster information ------------------- Name: AW-G8-KVM Config Version: 56 Transport: knet Secure auth: on Quorum information ------------------ Date: Mon Sep 14 09:03:37 2020 Quorum provider: corosync_votequorum...
  6.

    Reboot 1 node all other 38 nodes into unknow status

    Help! It happened. How do I fix this? 1. I rebooted 1 node and the whole cluster went into unknown status. 2. The rebooted node shows pve-ticket invalid.
  7.

    [SOLVED] After change Ceph cluster network, OSD is listening on both public & cluster network

    I've changed /etc/pve/ceph.conf. In the beginning both were on the same network, 10.0.141.x/24; now I have changed the Ceph cluster network to 10.98.141.0/24 (see the ceph.conf sketch after this list). [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 10.98.141.0/24...
  8.

    Cluster die after adding the 39th node! Proxmox is not stable!

    Hi, I have added the new Ceph cluster network. Do I need to restart it on all nodes? Can I just run the command on all 39 nodes (see the restart-loop sketch after this list)? systemctl restart ceph.target
  9.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    One more question: do you know why, with no HA enabled (no groups, no resources), restarting corosync could make the cluster reboot?
  10.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    Unmounting the folder works (see the umount sketch after this list), thank you very much! Another question: do you know, if Ceph gets a package update, does the node need to be rebooted?
  11.

    Cluster die after adding the 39th node! Proxmox is not stable!

    Thank you, I will try that. But it is difficult for me; there are 39 nodes.
  12.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    Rebooting the node works. But I still have many nodes, more than 10, with this problem. Is rebooting the only option?
  13.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    I cannot enter the folder; even after chmod 755 cephfs it makes no difference. root@g8kvm37:/mnt/pve# root@g8kvm37:/mnt/pve# cd cephfs -bash: cd: cephfs: Permission denied root@g8kvm37:/mnt/pve# ls -al ls: cannot access 'cephfs': Permission denied total 8 drwxr-xr-x 3 root root 4096 Sep 4 17:27 . drwxr-xr-x...
  14.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    root@g8kvm37:~# cat /etc/pve/storage.cfg dir: local path /var/lib/vz content iso,vztmpl,backup lvmthin: local-lvm thinpool data vgname pve content images,rootdir rbd: G8KvmData content rootdir,images krbd 0 pool G8KvmData cephfs: cephfs path /mnt/pve/cephfs...
  15.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    There seem to be no errors; the log shows this message frequently, Ceph reports health OK, but running "df -h" just hangs. root@g8kvm37:~# ras-mc-ctl --summary No Memory errors. No PCIe AER errors. No Extlog errors. No MCE errors. root@g8kvm37:~#
  16.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    My cluster shows the errors below; can anyone help? Sep 08 18:18:12 g8kvm13 pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable Sep 08 18:18:22 g8kvm13 pvestatd[1873]: got timeout Sep 08 18:18:22 g8kvm13 pvestatd[1873]: unable to...
  17.

    Cluster die after adding the 39th node! Proxmox is not stable!

    Is there a similar way to change the Ceph network? I want to put Ceph into another VLAN, so that its traffic is not all on the same VLAN; that might solve my problem.
  18.

    Cluster die after adding the 39th node! Proxmox is not stable!

    Before applying this to the production environment I am deploying to the test environment first; is that right? root@backupkvm05:~# pvecm status Cluster information ------------------- Name: BackupKvm Config Version: 13 Transport: knet Secure auth: on Quorum information...
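
Sketches for the results above

Result 1 (corosync Retransmit list): the config_version bump lives in the totem section of /etc/pve/corosync.conf; edit a copy and move it into place so the cluster filesystem distributes it. A minimal sketch of the stanza only, with the cluster name taken from the pvecm output in result 5 and the other values purely illustrative, not a verbatim copy of that cluster's file:

    totem {
      cluster_name: AW-G8-KVM
      # bumped from 26 to 27; must increase on every edit
      config_version: 27
      interface {
        linknumber: 0
      }
      ip_version: ipv4
      secauth: on
      version: 2
    }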
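
Results 7 and 17 (separating the Ceph networks): a sketch of the /etc/pve/ceph.conf [global] excerpt using the subnets mentioned in result 7. The public_network value is an assumption based on "in the beginning both were 10.0.141.x/24", and the OSD daemons generally only pick up the new cluster_network after they are restarted, which is what result 8 asks about:

    [global]
        # front-side / public network, assumed unchanged
        public_network = 10.0.141.0/24
        # new dedicated OSD replication network
        cluster_network = 10.98.141.0/24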
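
Result 8 (running systemctl restart ceph.target on 39 nodes): rather than restarting every node's Ceph daemons at once, a cautious sketch walks the nodes one at a time. The g8kvm01..g8kvm39 host names are assumed from the node names visible in the snippets, and the fixed pause is arbitrary; checking ceph -s for HEALTH_OK between nodes would be safer still:

    #!/bin/bash
    # Restart the Ceph daemons node by node (hypothetical hosts g8kvm01..g8kvm39).
    for i in $(seq -w 1 39); do
        ssh "root@g8kvm$i" 'systemctl restart ceph.target'
        # Give the cluster time to settle before moving to the next node.
        sleep 60
    done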
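
Results 10, 12 and 13 (the unreachable /mnt/pve/cephfs directory): the fix reported in result 10 is unmounting the stale mount point instead of rebooting the node. A sketch of how that might look on an affected node; the lazy -l flag is an assumption for a hung mount, not something stated in the thread:

    # On the affected node (e.g. g8kvm37): drop the stale CephFS mount.
    # pvestatd should remount the storage once the directory is free again.
    umount -l /mnt/pve/cephfs
    # Check that the mount comes back and df no longer hangs:
    df -h /mnt/pve/cephfs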
