Adding a node to a cluster made all my CTs disappear

altomarketing

I have two freshly installed Proxmox servers. I created a cluster and joined them together. All fine there: no CTs had been created yet, it was just the bare Proxmox install.

Then I added another server (prox105) to this cluster. This server already had several stopped CTs, but something went wrong: after joining the cluster all the CTs disappeared, and I can no longer log in to the web interface on this server (SSH still works).

I searched the forums for a fix, but I don't want to touch anything more and risk losing data, so I'm waiting for your advice. I just want to recover my data. How can I restore my CTs?

I paste the results here:
pvecm status
Code:
pvecm status
Quorum information
------------------
Date:             Sat Aug 20 23:56:58 2016
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000003
Ring ID:          1/32
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 190.7.18.10
0x00000002          1 190.7.18.20
0x00000003          1 190.7.18.30 (local)

ls -lR /etc/pve
Code:
/etc/pve:
total 4
-rw-r----- 1 root www-data  451 Aug 17 10:15 authkey.pub
-rw-r----- 1 root www-data  544 Aug 20 22:26 corosync.conf
-rw-r----- 1 root www-data   16 Aug 17 10:08 datacenter.cfg
lrwxr-xr-x 1 root www-data    0 Dec 31  1969 local -> nodes/prox105
lrwxr-xr-x 1 root www-data    0 Dec 31  1969 lxc -> nodes/prox105/lxc
drwxr-xr-x 2 root www-data    0 Aug 17 10:15 nodes
lrwxr-xr-x 1 root www-data    0 Dec 31  1969 openvz -> nodes/prox105/openvz
drwx------ 2 root www-data    0 Aug 17 10:15 priv
-rw-r----- 1 root www-data 2041 Aug 17 10:16 pve-root-ca.pem
-rw-r----- 1 root www-data 1675 Aug 17 10:15 pve-www.key
lrwxr-xr-x 1 root www-data    0 Dec 31  1969 qemu-server -> nodes/prox105/qemu-server
-rw-r----- 1 root www-data  127 Aug 17 10:08 storage.cfg
-rw-r----- 1 root www-data   46 Aug 17 10:08 user.cfg
-rw-r----- 1 root www-data  119 Aug 17 10:16 vzdump.cron

/etc/pve/nodes:
total 0
drwxr-xr-x 2 root www-data 0 Aug 18 15:19 amor
drwxr-xr-x 2 root www-data 0 Aug 17 10:15 hastinapura
drwxr-xr-x 2 root www-data 0 Aug 20 22:26 prox105

/etc/pve/nodes/amor:
total 2
-rw-r----- 1 root www-data   83 Aug 20 23:58 lrm_status
drwxr-xr-x 2 root www-data    0 Aug 18 15:19 lxc
drwxr-xr-x 2 root www-data    0 Aug 18 15:19 openvz
drwx------ 2 root www-data    0 Aug 18 15:19 priv
-rw-r----- 1 root www-data 1679 Aug 18 15:19 pve-ssl.key
-rw-r----- 1 root www-data 1712 Aug 18 15:19 pve-ssl.pem
drwxr-xr-x 2 root www-data    0 Aug 18 15:19 qemu-server

/etc/pve/nodes/amor/lxc:
total 0

/etc/pve/nodes/amor/openvz:
total 0

/etc/pve/nodes/amor/priv:
total 0

/etc/pve/nodes/amor/qemu-server:
total 0

/etc/pve/nodes/hastinapura:
total 2
-rw-r----- 1 root www-data   83 Aug 20 23:58 lrm_status
drwxr-xr-x 2 root www-data    0 Aug 17 10:15 lxc
drwxr-xr-x 2 root www-data    0 Aug 17 10:15 openvz
drwx------ 2 root www-data    0 Aug 17 10:15 priv
-rw-r----- 1 root www-data 1675 Aug 17 10:15 pve-ssl.key
-rw-r----- 1 root www-data 1744 Aug 17 10:16 pve-ssl.pem
drwxr-xr-x 2 root www-data    0 Aug 17 10:15 qemu-server

/etc/pve/nodes/hastinapura/lxc:
total 0

/etc/pve/nodes/hastinapura/openvz:
total 0

/etc/pve/nodes/hastinapura/priv:
total 0

/etc/pve/nodes/hastinapura/qemu-server:
total 0

/etc/pve/nodes/prox105:
total 2
-rw-r----- 1 root www-data   83 Aug 20 23:58 lrm_status
drwxr-xr-x 2 root www-data    0 Aug 20 22:26 lxc
drwxr-xr-x 2 root www-data    0 Aug 20 22:26 openvz
drwx------ 2 root www-data    0 Aug 20 22:26 priv
-rw-r----- 1 root www-data 1679 Aug 20 22:36 pve-ssl.key
-rw-r----- 1 root www-data 1724 Aug 20 22:36 pve-ssl.pem
drwxr-xr-x 2 root www-data    0 Aug 20 22:26 qemu-server

/etc/pve/nodes/prox105/lxc:
total 0

/etc/pve/nodes/prox105/openvz:
total 0

/etc/pve/nodes/prox105/priv:
total 0

/etc/pve/nodes/prox105/qemu-server:
total 0

/etc/pve/priv:
total 3
-rw------- 1 root www-data 1679 Aug 17 10:15 authkey.key
-rw------- 1 root www-data 1184 Aug 20 23:07 authorized_keys
-rw------- 1 root www-data 2652 Aug 20 23:07 known_hosts
-rw------- 1 root www-data 3243 Aug 17 10:16 pve-root-ca.key
-rw------- 1 root www-data    3 Aug 20 22:36 pve-root-ca.srl

service pve-cluster status
Code:
pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
   Active: active (running) since Sat 2016-08-20 23:07:36 ART; 1h 0min ago
  Process: 25346 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
  Process: 25341 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=0/SUCCESS)
Main PID: 25344 (pmxcfs)
   CGroup: /system.slice/pve-cluster.service
           └─25344 /usr/bin/pmxcfs

Aug 20 23:07:35 prox105 pmxcfs[25344]: [dcdb] notice: waiting for updates from leader
Aug 20 23:07:35 prox105 pmxcfs[25344]: [dcdb] notice: update complete - trying to commit (got 3 inode updates)
Aug 20 23:07:35 prox105 pmxcfs[25344]: [dcdb] notice: all data is up to date
Aug 20 23:07:35 prox105 pmxcfs[25344]: [status] notice: received all states
Aug 20 23:07:35 prox105 pmxcfs[25344]: [status] notice: all data is up to date
Aug 20 23:14:34 prox105 pmxcfs[25344]: [status] notice: received log
Aug 20 23:29:30 prox105 pmxcfs[25344]: [status] notice: received log
Aug 20 23:34:05 prox105 pmxcfs[25344]: [dcdb] notice: data verification successful
Aug 20 23:44:29 prox105 pmxcfs[25344]: [status] notice: received log
Aug 20 23:59:29 prox105 pmxcfs[25344]: [status] notice: received log

service pveproxy status
Code:
pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled)
   Active: active (running) since Sat 2016-08-20 23:05:28 ART; 1h 4min ago
  Process: 25159 ExecStop=/usr/bin/pveproxy stop (code=exited, status=0/SUCCESS)
  Process: 3504 ExecReload=/usr/bin/pveproxy restart (code=exited, status=0/SUCCESS)
  Process: 25162 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
Main PID: 25166 (pveproxy)
   CGroup: /system.slice/pveproxy.service
           ├─25166 pveproxy
           ├─25167 pveproxy worker
           ├─25168 pveproxy worker
           └─25169 pveproxy worker

Aug 20 23:47:21 prox105 pveproxy[25168]: problem with client 186.18.150.107; rsa_eay_public_decrypt: data too large for modulus
Aug 20 23:47:21 prox105 pveproxy[25168]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
Aug 20 23:47:22 prox105 pveproxy[25167]: problem with client 186.18.150.107; rsa_eay_public_decrypt: data too large for modulus
Aug 20 23:47:22 prox105 pveproxy[25167]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
Aug 20 23:47:22 prox105 pveproxy[25169]: problem with client 186.18.150.107; rsa_eay_public_decrypt: data too large for modulus
Aug 20 23:47:22 prox105 pveproxy[25169]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
Aug 20 23:47:23 prox105 pveproxy[25167]: problem with client 186.18.150.107; rsa_eay_public_decrypt: data too large for modulus
Aug 20 23:47:23 prox105 pveproxy[25167]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
Aug 20 23:47:24 prox105 pveproxy[25169]: problem with client 186.18.150.107; rsa_eay_public_decrypt: data too large for modulus
Aug 20 23:47:24 prox105 pveproxy[25169]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.

These are my CTs: ls -la /var/lib/vz/images/
Code:
total 56
drwxr-xr-x 14 root root 4096 May 26 15:22 .
drwxr-xr-x  8 root root 4096 Dec 11  2015 ..
drwxr-----  2 root root 4096 Dec 14  2015 100
drwxr-----  2 root root 4096 Dec 22  2015 101
drwxr-----  2 root root 4096 Dec 22  2015 102
drwxr-----  2 root root 4096 Dec 30  2015 103
drwxr-----  2 root root 4096 May 30 18:05 104
drwxr-----  2 root root 4096 Jul  9 16:20 105
drwxr-----  2 root root 4096 Jul 27 17:07 106
drwxr-----  2 root root 4096 May 30 17:53 107
drwxr-----  2 root root 4096 May 19 18:08 108
drwxr-----  2 root root 4096 May 30 17:53 109
drwxr-----  2 root root 4096 May 30 17:53 110
drwxr-----  2 root root 4096 May 26 15:22 111
 
The pvecm tool does not allow adding nodes with existing VMs, because the existing configuration would be overwritten. I guess you did a --force add? Your VM images are still in /var/lib/vz/images/, but you lost the configuration files.
 
Yes. How do I recover my VMs then? Should I fix the configuration, or should I go the route of restoring the images as new containers?

That is up to you. Recreating config files manually is quite easy if you remember the settings.
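
For orientation, here is a hypothetical minimal config for a PVE 4.x LXC container; the file would live at /etc/pve/lxc/100.conf. The hostname, memory, and network values below are placeholders, not recovered settings; only the rootfs line, pointing at the surviving raw image, is taken from this thread.

Code:
# /etc/pve/lxc/100.conf -- illustrative values, adjust to the old setup
arch: amd64
cpulimit: 2
hostname: ct100
memory: 1024
net0: name=eth0,bridge=vmbr0,ip=dhcp,type=veth
ostype: centos
rootfs: local:100/vm-100-disk-1.raw,size=100G
swap: 512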
 
You can list the volumes for a given VM with:
# pvesm list <STOREID> --vmid <VMID>

pvesm list local
Code:
local:100/vm-100-disk-1.raw                           raw 107374182400 100
local:101/vm-101-disk-1.raw                           raw 21474836480 101
local:102/vm-102-disk-1.raw                           raw 32212254720 102
local:103/vm-103-disk-1.raw                           raw 32212254720 103
local:104/vm-104-disk-1.raw                           raw 536870912000 104
local:105/vm-105-disk-1.raw                           raw 536870912000 105
local:106/vm-106-disk-1.raw                           raw 1073741824000 106
local:108/vm-108-disk-1.raw                           raw 2147483648000 108
local:111/vm-111-disk-1.raw                           raw 1073741824000 111
local:vztmpl/centos-6-default_20160205_amd64.tar.xz   txz   54699788
local:vztmpl/centos-7-default_20160205_amd64.tar.xz   txz   65985020
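
Those byte counts translate directly into the size= value needed later in each rootfs line; a quick sanity check (plain shell arithmetic, nothing Proxmox-specific):

Code:
# 107374182400 bytes = 100 GiB, so vm-100-disk-1.raw becomes size=100G
echo $((107374182400 / 1024 / 1024 / 1024))   # prints 100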

Recreating config files manually is quite easy if you remember the settings.
Honestly, I don't remember them.

Will this command work with raw images?
pct restore <VMID> local:100/vm-100-disk-1.raw

If not, how can I recreate the container from the raw image?
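
As an aside: pct restore expects a vzdump backup archive, not a raw disk image, so the configs do need to be recreated by hand. If the old settings are forgotten, the raw images themselves may hold clues. A sketch, assuming each image contains an ext4 root filesystem and the CentOS layout suggested by the templates in the pvesm listing above:

Code:
# Loop-mount a raw container image read-only and peek at its settings
mkdir -p /mnt/ct100
mount -o loop,ro /var/lib/vz/images/100/vm-100-disk-1.raw /mnt/ct100
cat /mnt/ct100/etc/hostname                                # original hostname
cat /mnt/ct100/etc/sysconfig/network-scripts/ifcfg-eth0    # IP settings (CentOS path)
umount /mnt/ct100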
 
Please create a new container, so that you get an idea of what the config files look like. Then use cp/vi to create them manually.
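
A sketch of that workflow, assuming a throwaway CT 999 built from one of the templates already on local storage (IDs and options are illustrative):

Code:
# Create a reference container only to obtain a valid config file
pct create 999 local:vztmpl/centos-7-default_20160205_amd64.tar.xz --storage local --hostname ref
# Use its config as a starting point for the lost CT 100, then edit
cp /etc/pve/lxc/999.conf /etc/pve/lxc/100.conf
vi /etc/pve/lxc/100.conf   # set rootfs to local:100/vm-100-disk-1.raw,size=100G; fix hostname, memory, net0
# Clean up the reference container afterwards
pct destroy 999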
 
Please create a new container

I cannot create a new container; I get this error:

Code:
TASK ERROR: lvcreate 'pve/vm-200-disk-1' error: Logical volume pve/data is not a thin pool.
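
That error suggests the storage selected for the new container is defined as LVM-thin, while pve/data on this box is a plain logical volume. Two read-only checks worth running (a sketch; nothing here modifies anything):

Code:
# Does pve/data carry the 't' (thin pool) flag in lv_attr?
lvs -o lv_name,vg_name,lv_attr,lv_size
# Which storages are defined, and of what type?
cat /etc/pve/storage.cfg

If a thin-pool storage entry points at a non-thin volume, selecting the "local" directory storage when creating the reference container may sidestep lvcreate entirely.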

 