Search results

  1.

    [SOLVED] How to fix corosync Retransmit List?

    Two of my nodes have this showing in syslog: Sep 16 15:38:17 backupkvm01 corosync[2184]: [TOTEM ] Retransmit List: 1a Sep 16 15:38:17 backupkvm01 corosync[2184]: [TOTEM ] Retransmit List: 1b Sep 16 15:38:17 backupkvm01 corosync[2184]: [TOTEM ] Retransmit List: 21 22 23 Sep 16 15:38:17...
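
    A quick first check (a hedged sketch, not taken from the thread): retransmit lists usually point at a flaky or overloaded cluster link, so inspecting what corosync itself reports about its links and quorum is a reasonable starting point.

      # Link/ring status as corosync sees it (run on each node)
      corosync-cfgtool -s

      # Quorum and membership from the Proxmox side
      pvecm status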
  2.

    Rebooting 1 node puts all other 38 nodes into unknown status

    Help! It happened. How do I fix this? 1. Rebooting 1 node puts the cluster into unknown status. 2. The rebooted node shows pve-ticket invalid.
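
    A hedged diagnostic sketch, assuming the grey/unknown icons come from the status and cluster-filesystem daemons losing sync rather than from a real outage; the service names are the standard Proxmox ones.

      # Does the cluster still have quorum after the reboot?
      pvecm status

      # Restart the status daemon and the cluster filesystem on an affected node
      systemctl restart pvestatd pve-cluster

      # Invalid-ticket errors are sometimes just clock skew between nodes
      timedatectl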
  3.

    [SOLVED] After changing the Ceph cluster network, OSDs are listening on both the public & cluster network

    I've changed /etc/pve/ceph.conf. In the beginning they were all on the same network, 10.0.141.x/24; now I've changed the Ceph cluster network to 10.98.141.0/24. [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx cluster_network = 10.98.141.0/24...
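
    For reference, a minimal sketch of how the two networks are usually declared in /etc/pve/ceph.conf, using the subnets from the post (the public_network line is an assumption based on the original 10.0.141.x range). OSD daemons generally only bind to a changed cluster_network after they are restarted, e.g. one at a time.

      [global]
          public_network  = 10.0.141.0/24
          cluster_network = 10.98.141.0/24

      # Restart OSDs one by one so they re-bind to the new cluster network (<id> is a placeholder)
      systemctl restart ceph-osd@<id>.service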
  4.

    [SOLVED] pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable

    My cluster shows the error below; can anyone help? Sep 08 18:18:12 g8kvm13 pvestatd[1873]: unable to activate storage 'cephfs' - directory '/mnt/pve/cephfs' does not exist or is unreachable Sep 08 18:18:22 g8kvm13 pvestatd[1873]: got timeout Sep 08 18:18:22 g8kvm13 pvestatd[1873]: unable to...
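
    A hedged first-pass check, assuming a standard CephFS mount under /mnt/pve/cephfs: confirm Ceph itself is healthy and an MDS is active before digging into the Proxmox storage layer.

      # Overall Ceph health, including MDS/CephFS state
      ceph -s

      # Proxmox view of all configured storages ('cephfs' shows as inactive if unreachable)
      pvesm status

      # Is anything actually mounted at the expected path?
      mountpoint /mnt/pve/cephfs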
  5.

    Cluster died after adding the 39th node! Proxmox is not stable!

    My cluster has 38 nodes with Ceph. Yesterday I added the 39th node and the whole cluster died!!!! I don't have HA enabled, so I think the nodes should not reboot, but my cluster had 2 nodes reboot, and the cluster split into different quorum partitions, like nodes 1, 3, 5, 7, and another with 2, 4, 6, 8...
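
    A hedged way to see what actually happened around the join (not from the thread): the corosync and pve-cluster journals record membership changes, link failures and token timeouts, which is usually enough to tell whether the split followed a network problem.

      # Membership / link / token messages around the time of the failure
      journalctl -u corosync -u pve-cluster --since today | grep -Ei 'membership|link|token'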
  6.

    [SOLVED] How to restore a VM to a different Proxmox cluster?

    I have 2 clusters using the same PBS for daily backups; here is my question: 1. Why do I only see one snapshot? root@pbsg8:~# proxmox-backup-client snapshots --repository 10.0.142.0:pbs-data-g8...
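
    For context, a hedged sketch of the repository syntax (user, realm, host and datastore are placeholders): each PBS datastore only lists the snapshots stored in it, so the repository string has to point at the datastore the other cluster is actually writing to.

      # List snapshots in a given datastore
      proxmox-backup-client snapshots --repository '<user>@<realm>@<host>:<datastore>'

      # On the target cluster, once the PBS storage is added, a backup can be restored to a new VMID
      qmrestore <backup-volid> <new-vmid> --storage <target-storage>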
  7.

    [SOLVED] compress error

    When I choose ZSTD for a backup job, it shows the error below, but the job can still continue; choosing LZO or GZIP gives no error. Some errors have been encountered: kvm17: Parameter verification failed. (400) compress: value 'zstd' does not have a value in the enumeration '0, 1, gzip, lzo' kvm07...
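
    The enumeration '0, 1, gzip, lzo' suggests the nodes rejecting zstd are running an older pve-manager that predates zstd support; that is an inference, not something stated in the excerpt. A quick way to compare versions across nodes:

      # Run on each node (or at least on the complaining ones) and compare
      pveversion -v | grep pve-manager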
  8.

    [SOLVED] Backup job fails on one of the nodes

    My cluster has 18 nodes, all updated to 6.2-11, but when I back up the VMs on node 17 it shows the following error; the other nodes are OK! TASK ERROR: could not get storage information for 'pbs-backup-server': can't use storage type 'pbs' for backup What I tried: 1. Restart services: pvedaemon pveproxy...
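
    "can't use storage type 'pbs'" on a single node usually means that node's storage stack is older than on the rest, despite the reported 6.2-11; treat this as an assumption to verify rather than a confirmed cause.

      # Compare the relevant packages on node 17 against a working node
      pveversion -v | grep -E 'pve-manager|libpve-storage-perl'

      # After updating, restart the API/status daemons the poster already tried
      systemctl restart pvedaemon pveproxy pvestatd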
  9.

    How to solve corosync [1783081]: [TOTEM ] Retransmit List

    Every second the log below appears; restarting the server or corosync doesn't help. Aug 20 10:49:13 kvm01 corosync[1783081]: [TOTEM ] Retransmit List: f30a4 Aug 20 10:49:13 kvm01 corosync[1783081]: [TOTEM ] Retransmit List: f30a5 Aug 20 10:49:13 kvm01 corosync[1783081]: [TOTEM ] Retransmit...
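
    A hedged network check (node names other than kvm01 are placeholders): constant retransmits usually mean packet loss or latency spikes on the corosync link, which omping (the tool the Proxmox cluster docs use for network testing) can measure between the nodes.

      # ~10 minutes of probes between the cluster nodes; look for loss or high latency
      omping -c 600 -i 1 -q kvm01 kvm02 kvm03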
  10.

    Proxmox 6.2-4 cluster dies!!! Nodes auto reboot!! Need help!!

    My cluster has 33 nodes with Ceph; the cluster will reboot randomly after some of the operations below: 1. systemctl restart corosync 2. adding a new node into the cluster 3. rebooting one of the nodes How do I stop the servers rebooting automatically? This is a production environment and I really have no idea. What I...
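
    One hedged thing to rule out (not from the thread): the automatic reboots match the behaviour of watchdog fencing, which is tied to HA, so it is worth checking whether any HA resources are, or recently were, configured.

      # Any HA resources/groups defined? Self-fencing only applies to nodes that run (or ran) HA services
      ha-manager status

      # State of the watchdog multiplexer that performs the actual reset
      systemctl status watchdog-mux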