Proxmox Cluster Problem

Discussion in 'Proxmox VE: Installation and configuration' started by Gigaveri, Jul 19, 2016.

  1. Gigaveri

    Gigaveri New Member

    Joined:
    Jul 14, 2016
    Messages:
    23
    Likes Received:
    0
    Hello

    root@Giga-vds2:~# pvecm add 185.90.81.6
    this host already contains virtual machines - please remove them first


    What is the problem?

    root@Giga-vds1:~# pvecm add 185.90.81.3
    authentication key already exists

    And what is the problem here?

    Help, please!
     
  2. Gigaveri

    Gigaveri New Member

    Joined:
    Jul 14, 2016
    Messages:
    23
    Likes Received:
    0
    root@test:~# pvecm add 185.90.81.3
    The authenticity of host '185.90.81.3 (185.90.81.3)' can't be established.
    ECDSA key fingerprint is 05:4d:f3:60:24:0f:b6:04:8c:2e:fe:51:e5:fd:d6:35.
    Are you sure you want to continue connecting (yes/no)? yes
    root@185.90.81.3's password:
    copy corosync auth key
    stopping pve-cluster service
    backup old database
    waiting for quorum...
     
  3. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    909
    Likes Received:
    89
    Can you tell us a little more about what your setup is and which steps you already executed? Was it an old cluster where you want to add a new node, or did you rebuild the cluster?

    If you have VMs running on it and you are really sure that there is no conflict with the node you want to add (e.g. no VMIDs are the same, storage definitions do not conflict, ...), you may add the "--force" parameter to the "pvecm add" command. Do that from the node where the "this host already contains virtual machines - please remove them first" error was thrown.
    E.g.:
    Code:
    pvecm add 185.90.81.6 --force
    
    But ensure first that the VMs from this node and the other do not conflict!
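    E.g. to compare the VMIDs first (just a rough sketch - the address is the one from above, and the paths assume standard KVM/LXC guests):
    Code:
    # VMIDs configured on this node ...
    ls /etc/pve/qemu-server/ /etc/pve/lxc/
    # ... and on the node you want to join; the two lists must not share any ID
    ssh root@185.90.81.6 'ls /etc/pve/qemu-server/ /etc/pve/lxc/'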
     
  4. Gigaveri

    Gigaveri New Member

    Joined:
    Jul 14, 2016
    Messages:
    23
    Likes Received:
    0

    Hello

    root@Giga-vds2:~# pvecm add 185.90.81.6 --force
    The authenticity of host '185.90.81.6 (185.90.81.6)' can't be established.
    ECDSA key fingerprint is 16:bb:b4:c7:bb:6d:f6:2b:41:a0:e8:33:ee:02:54:98.
    Are you sure you want to continue connecting (yes/no)? yes
    root@185.90.81.6's password:
    copy corosync auth key
    stopping pve-cluster service
    backup old database
    generating node certificates
    merge known_hosts file
    restart services
    successfully added node 'Giga-vds2' to cluster.
    root@Giga-vds2:~#


    3 virtual servers disappeared?

    [screenshot: 2016-07-19_1019.png]




    root@Giga-vds1:~# pvecm status
    Quorum information
    ------------------
    Date: Tue Jul 19 10:21:05 2016
    Quorum provider: corosync_votequorum
    Nodes: 1
    Node ID: 0x00000001
    Ring ID: 8
    Quorate: No

    Votequorum information
    ----------------------
    Expected votes: 2
    Highest expected: 2
    Total votes: 1
    Quorum: 2 Activity blocked
    Flags:

    Membership information
    ----------------------
    Nodeid Votes Name
    0x00000001 1 185.90.81.6 (local)
    root@Giga-vds1:~#
     
  5. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    909
    Likes Received:
    89
    Please post your setup and the steps you already did before executing commands next time; apparently there was some conflict.
    Also, next time create the cluster with "pvecm create" on the node which already has VMs on it, that makes it easier :)
    How are those two servers connected? LAN? If not, multicast won't work and we have to use another approach for clustering (unicast).
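    One way to verify that multicast works between the two nodes is omping (just a sketch - the addresses are the ones from this thread, adjust as needed, and the package may need to be installed first):
    Code:
    # run on both nodes at roughly the same time; anything but ~0% loss means multicast is not usable
    omping -c 600 -i 1 -q 185.90.81.3 185.90.81.6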

    The add command generated a backup of the config files before merging the cluster file system; you may use that backup to recover the configs.

    I'd now do the following:
    1) dismantle the cluster
    2) restore the config files
    3) describe the setup exactly here, and then we look at what went wrong and what you should do.

    For 1)
    This can be done on both nodes:
    Code:
    # set expected votes to 1 so the node stays quorate on its own ("e" = "expected")
    pvecm e 1
    # remove the corosync configuration from the cluster filesystem and from disk
    rm /etc/pve/corosync.conf
    rm /etc/corosync/corosync.conf
    # stop the cluster services
    systemctl stop pve-cluster
    systemctl stop corosync
    # remove the corosync authentication key
    rm /etc/corosync/authkey
    
    For 2)
    Do this on the node where the VMs were:
    Code:
    cd /var/lib/pve-cluster/
    mv config.db config.db.old
    # note that the backup file may have another name, should begin with config- though.
    gunzip -c backup/config-1448650048.sql.gz | sqlite3 config.db
    
    Then on both nodes start pve-cluster again:
    Code:
    systemctl start pve-cluster
    
    Now you should have both nodes working separately on their own again.
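    A quick check that each node came back up on its own could look like this (just a sketch):
    Code:
    # the cluster filesystem should be active again
    systemctl status pve-cluster
    # and the VM configs should be back in place
    ls /etc/pve/qemu-server/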
     
    #5 t.lamprecht, Jul 19, 2016
    Last edited: Jul 19, 2016
  6. Gigaveri

    Gigaveri New Member

    Joined:
    Jul 14, 2016
    Messages:
    23
    Likes Received:
    0
    Hello

    Thanks.

    root@Giga-vds2:/var/lib/pve-cluster# systemctl status pve-cluster.service
    ● pve-cluster.service - The Proxmox VE cluster filesystem
    Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
    Active: failed (Result: exit-code) since Tue 2016-07-19 11:38:03 EEST; 11min ago
    Process: 26614 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
    Process: 37277 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=255)
    Main PID: 26612 (code=exited, status=0/SUCCESS)

    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [database] crit: found entry with duplicate name (inode = 000000000000001A, parent = 0000000000000000, name = 'pve-root-ca.pem')
    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [database] crit: DB load failed
    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [database] crit: found entry with duplicate name (inode = 000000000000001A, parent = 0000000000000000, name = 'pve-root-ca.pem')
    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [database] crit: DB load failed
    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
    Jul 19 11:38:03 Giga-vds2 pmxcfs[37277]: [main] notice: exit proxmox configuration filesystem (-1)
    Jul 19 11:38:03 Giga-vds2 systemd[1]: pve-cluster.service: control process exited, code=exited status=255
    Jul 19 11:38:03 Giga-vds2 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
    Jul 19 11:38:03 Giga-vds2 systemd[1]: Unit pve-cluster.service entered failed state.
    root@Giga-vds2:/var/lib/pve-cluster# gunzip -c backup/config-1468912548.sql.gz | sqlite3 config.db
    Error: near line 3: table tree already exists
    Error: near line 4: UNIQUE constraint failed: tree.inode
    Error: near line 5: UNIQUE constraint failed: tree.inode
    Error: near line 6: UNIQUE constraint failed: tree.inode
    Error: near line 7: UNIQUE constraint failed: tree.inode
    Error: near line 8: UNIQUE constraint failed: tree.inode
    Error: near line 9: UNIQUE constraint failed: tree.inode
    Error: near line 10: UNIQUE constraint failed: tree.inode
    Error: near line 11: UNIQUE constraint failed: tree.inode
    Error: near line 12: UNIQUE constraint failed: tree.inode
    Error: near line 13: UNIQUE constraint failed: tree.inode
    Error: near line 14: UNIQUE constraint failed: tree.inode
    Error: near line 15: UNIQUE constraint failed: tree.inode
    Error: near line 16: UNIQUE constraint failed: tree.inode
    Error: near line 17: UNIQUE constraint failed: tree.inode
    Error: near line 18: UNIQUE constraint failed: tree.inode
    Error: near line 19: UNIQUE constraint failed: tree.inode
    Error: near line 20: UNIQUE constraint failed: tree.inode
    Error: near line 21: UNIQUE constraint failed: tree.inode
    Error: near line 22: UNIQUE constraint failed: tree.inode
    Error: near line 23: UNIQUE constraint failed: tree.inode
    Error: near line 24: UNIQUE constraint failed: tree.inode
    Error: near line 25: UNIQUE constraint failed: tree.inode
    Error: near line 26: UNIQUE constraint failed: tree.inode
    Error: near line 27: UNIQUE constraint failed: tree.inode
    Error: near line 28: UNIQUE constraint failed: tree.inode
    Error: near line 29: UNIQUE constraint failed: tree.inode
    Error: near line 30: UNIQUE constraint failed: tree.inode
    Error: near line 31: UNIQUE constraint failed: tree.inode
    Error: near line 32: UNIQUE constraint failed: tree.inode
    Error: near line 33: UNIQUE constraint failed: tree.inode
    Error: near line 34: UNIQUE constraint failed: tree.inode
    Error: near line 35: UNIQUE constraint failed: tree.inode
     
    #6 Gigaveri, Jul 19, 2016
    Last edited: Jul 19, 2016
  7. t.lamprecht

    t.lamprecht Proxmox Staff Member
    Staff Member

    Joined:
    Jul 28, 2015
    Messages:
    909
    Likes Received:
    89
    Sqlite complains when importing into an existing database, sorry, I forgot that. Just move the old DB aside and then do it again. I edited my post above, but here it is another time for the sake of easiness:
    Code:
    cd /var/lib/pve-cluster/
    mv config.db config.db.old
    # note that the backup file may have another name, should begin with config- though.
    gunzip -c backup/config-1448650048.sql.gz | sqlite3 config.db
    
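    To verify the restore worked before starting pve-cluster again, one could check that the dump loaded into the new database (just a sketch; "tree" is the table shown in the errors above):
    Code:
    # should print the number of config entries instead of an error
    sqlite3 config.db 'SELECT COUNT(*) FROM tree;'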
     
    Gigaveri likes this.
  8. Gigaveri

    Gigaveri New Member

    Joined:
    Jul 14, 2016
    Messages:
    23
    Likes Received:
    0

    Thanks.

    [screenshot: 2016-07-19_2322.png]


    Now I get a "login failed" error.
     
  9. Gigaveri

    Gigaveri New Member

    Joined:
    Jul 14, 2016
    Messages:
    23
    Likes Received:
    0
    cluster not ready - no quorum?

    I get this when I try to start a VM :(
     
  10. bizzarrone

    bizzarrone Member

    Joined:
    Nov 27, 2014
    Messages:
    31
    Likes Received:
    0
    Do not force the join to the cluster!
    It will erase all the config on the node you are going to add!
    Create the cluster on the node with the running VMs, then join the empty node; otherwise you are going to power off all running VMs and lose all of them!
    You could restore the configs and recover the data from thin-LVM by mounting the volumes manually, even if the LVM is in a dirty state.
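    For example, manually mounting a guest disk from the thin pool to copy data out could look roughly like this (only a sketch - the VG "pve", the VMID 100 and the disk name are hypothetical, check "lvs" for the real names, and the exact mapper name can differ):
    Code:
    mkdir -p /mnt/recover
    # activate the thin LV of the guest disk
    lvchange -ay pve/vm-100-disk-1
    # map the partitions inside the guest disk (kpartx is in the multipath-tools package)
    kpartx -av /dev/pve/vm-100-disk-1
    # mount the first partition read-only and copy the data out
    mount -o ro /dev/mapper/pve-vm--100--disk--1p1 /mnt/recover
    # ... afterwards clean up again
    umount /mnt/recover
    kpartx -dv /dev/pve/vm-100-disk-1
    lvchange -an pve/vm-100-disk-1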
     