Migration failed because of SSH connection

Pablo Alcaraz

Hello

I have a problem migrating a VM between 2 servers using Proxmox VE 5.1.

I have a cluster of 2 nodes. I want to migrate a VM from one node (called pve3) to the other (called pve5).
I used the UI: right-click, the Migrate option, and I got this message:

Task viewer: VM 109 - Migrate (pve3 ---> pve5)

2017-10-31 18:40:33 # /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve5' root@172.17.255.14 /bin/true
2017-10-31 18:40:33 Host key verification failed.
2017-10-31 18:40:33 ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted

I executed the ssh command line with the verbose option and got this:

root@pve3:~# /usr/bin/ssh -v -o 'BatchMode=yes' -o 'HostKeyAlias=pve5' root@172.17.255.14 /bin/true
OpenSSH_7.4p1 Debian-10+deb9u1, OpenSSL 1.0.2l 25 May 2017
debug1: Reading configuration data /root/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 172.17.255.14 [172.17.255.14] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4p1 Debian-10+deb9u1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4p1 Debian-10+deb9u1
debug1: match: OpenSSH_7.4p1 Debian-10+deb9u1 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 172.17.255.14:22 as 'root'
debug1: using hostkeyalias: pve5
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: aes128-ctr MAC: umac-64-etm@openssh.com compression: none
debug1: kex: client->server cipher: aes128-ctr MAC: umac-64-etm@openssh.com compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:EuFxF++xABIsHgQJU6kpgUJDzL+C8RXXjHHG/BBe6t4
debug1: using hostkeyalias: pve5
Host key verification failed.

I forced this node into the cluster and had all kinds of problems with the SSH HostKeyAlias.

It happened because I had to remove and rejoin the pve5 host to the cluster (the HDD with the OS partition broke). I replaced the HDD and reinstalled the node using the same name and IP.

I used the option described in https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node and https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_separate_node_without_reinstall
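For context, the node-removal part of those pages boils down to something like the following on the surviving node (a sketch reconstructed from the wiki, not my exact shell history; with only two nodes the expected-votes step is usually needed first):

# On the node that stays in the cluster (pve3).
# With a 2-node cluster the survivor loses quorum when its peer disappears,
# so tell it to expect a single vote before touching the config:
pvecm expected 1

# Remove the dead/old node from the cluster configuration:
pvecm delnode pve5

# Verify:
pvecm status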

However, it seems that pve3 kept some old SSH key somewhere, and I cannot find it to remove it.

Now it interferes with operations on the reinstalled Proxmox node.

How could this be fixed? I tried removing and rejoining the node again, but it did not work. Should I delete the old pve5 key on the pve3 server and rejoin the host? Where is this key stored?
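For what it's worth, on a Proxmox VE node the host keys used for migrations normally live in the cluster-wide file /etc/pve/priv/known_hosts (with /etc/ssh/ssh_known_hosts being a symlink to it). A rough, untested sketch of how a stale pve5 entry could be found and removed (paths and the old IP taken from the log above):

# Look for leftover entries for the old node, by alias and by IP:
ssh-keygen -F pve5 -f /etc/pve/priv/known_hosts
ssh-keygen -F 172.17.255.14 -f /etc/pve/priv/known_hosts

# Remove the stale entries, then let PVE write the current keys back:
ssh-keygen -R pve5 -f /etc/pve/priv/known_hosts
ssh-keygen -R 172.17.255.14 -f /etc/pve/priv/known_hosts
pvecm updatecerts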
 
Since my cluster is composed of only 2 nodes, I could delete the cluster and recreate it from scratch if needed. Would that work?
 
Install pve5 from scratch, with a different IP and hostname, and add it to the cluster.

I think I will do that.

Still, reinstalling a node is fairly normal when you have a lot of hosts in a Proxmox cluster. It is a punishment to have to rename a host and change its IP just because the cluster refuses to re-accept the host with a different SSH key.

Is there a way to solve this? How can a reinstalled host (OS HDD formatted and Proxmox reinstalled) be re-accepted into a cluster when it has new SSH credentials?
 
I reinstalled the host with a new name and a new address. Now I get another error on the host:

First I ran the join and it got stuck at:

"waiting for quorum...Connection to pve6 closed by remote host."

so I rebooted the host and retried:

root@pve6:~# pvecm add 172.17.255.4 -f
can't create shared ssh key database '/etc/pve/priv/authorized_keys'
trying to aquire cfs lock 'file-corosync_conf' ... OK
node pve6 already defined
copy corosync auth key
stopping pve-cluster service
backup old database
Job for corosync.service failed because the control process exited with error code.
See "systemctl status corosync.service" and "journalctl -xe" for details.
waiting for quorum...Connection to pve6 closed by remote host.
Connection to pve6 closed.


systemctl status corosync.service output is:

root@pve6:~# systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2017-11-01 15:28:15 PDT; 8min ago
Docs: man:corosync
man:corosync.conf
man:corosync_overview
Process: 1499 ExecStart=/usr/sbin/corosync -f $COROSYNC_OPTIONS (code=exited, status=20)
Main PID: 1499 (code=exited, status=20)

Nov 01 15:28:15 pve6 corosync[1499]: info [WD ] no resources configured.
Nov 01 15:28:15 pve6 corosync[1499]: notice [SERV ] Service engine loaded: corosync watchdog service [7]
Nov 01 15:28:15 pve6 corosync[1499]: notice [QUORUM] Using quorum provider corosync_votequorum
Nov 01 15:28:15 pve6 corosync[1499]: crit [QUORUM] Quorum provider: corosync_votequorum failed to initialize.
Nov 01 15:28:15 pve6 corosync[1499]: error [SERV ] Service engine 'corosync_quorum' failed to load for reason 'configuration error: nodelist or quorum.expected_votes must be configured!'
Nov 01 15:28:15 pve6 corosync[1499]: error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356.
Nov 01 15:28:15 pve6 systemd[1]: corosync.service: Main process exited, code=exited, status=20/n/a
Nov 01 15:28:15 pve6 systemd[1]: Failed to start Corosync Cluster Engine.
Nov 01 15:28:15 pve6 systemd[1]: corosync.service: Unit entered failed state.
Nov 01 15:28:15 pve6 systemd[1]: corosync.service: Failed with result 'exit-code'.
root@pve6:~#
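The "nodelist or quorum.expected_votes must be configured" error means corosync came up with a corosync.conf that has no nodelist at all. For comparison, the nodelist/quorum part of a two-node /etc/pve/corosync.conf usually looks roughly like this (pve3's address is from this thread; <address-of-pve6> is a placeholder, since I do not know it):

nodelist {
  node {
    name: pve3
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 172.17.255.4
  }
  node {
    name: pve6
    nodeid: 2
    quorum_votes: 1
    ring0_addr: <address-of-pve6>
  }
}

quorum {
  provider: corosync_votequorum
}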



journalctl -xe output is:

root@pve6:~# journalctl -xe
Nov 01 15:38:14 pve6 pmxcfs[1488]: [dcdb] crit: cpg_initialize failed: 2
Nov 01 15:38:14 pve6 pmxcfs[1488]: [status] crit: cpg_initialize failed: 2
Nov 01 15:38:15 pve6 pveproxy[2306]: worker exit
Nov 01 15:38:15 pve6 pveproxy[1179]: worker 2306 finished
Nov 01 15:38:15 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:15 pve6 pveproxy[1179]: worker 2309 started
Nov 01 15:38:15 pve6 pveproxy[2307]: worker exit
Nov 01 15:38:15 pve6 pveproxy[2309]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:15 pve6 pveproxy[1179]: worker 2307 finished
Nov 01 15:38:15 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:15 pve6 pveproxy[1179]: worker 2310 started
Nov 01 15:38:15 pve6 pveproxy[2310]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:15 pve6 pveproxy[2308]: worker exit
Nov 01 15:38:15 pve6 pveproxy[1179]: worker 2308 finished
Nov 01 15:38:15 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:15 pve6 pveproxy[1179]: worker 2311 started
Nov 01 15:38:15 pve6 pveproxy[2311]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:20 pve6 pveproxy[2309]: worker exit
Nov 01 15:38:20 pve6 pveproxy[1179]: worker 2309 finished
Nov 01 15:38:20 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:20 pve6 pveproxy[1179]: worker 2319 started
Nov 01 15:38:20 pve6 pveproxy[2310]: worker exit
Nov 01 15:38:20 pve6 pveproxy[2319]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:20 pve6 pveproxy[1179]: worker 2310 finished
Nov 01 15:38:20 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:20 pve6 pveproxy[1179]: worker 2320 started
Nov 01 15:38:20 pve6 pveproxy[2320]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:20 pve6 pveproxy[2311]: worker exit
Nov 01 15:38:20 pve6 pveproxy[1179]: worker 2311 finished
Nov 01 15:38:20 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:20 pve6 pveproxy[1179]: worker 2321 started
Nov 01 15:38:20 pve6 pveproxy[2321]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:20 pve6 pmxcfs[1488]: [quorum] crit: quorum_initialize failed: 2
Nov 01 15:38:20 pve6 pmxcfs[1488]: [confdb] crit: cmap_initialize failed: 2
Nov 01 15:38:20 pve6 pmxcfs[1488]: [dcdb] crit: cpg_initialize failed: 2
Nov 01 15:38:20 pve6 pmxcfs[1488]: [status] crit: cpg_initialize failed: 2
Nov 01 15:38:25 pve6 pveproxy[2319]: worker exit
Nov 01 15:38:25 pve6 pveproxy[1179]: worker 2319 finished
Nov 01 15:38:25 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:25 pve6 pveproxy[1179]: worker 2322 started
Nov 01 15:38:25 pve6 pveproxy[2320]: worker exit
Nov 01 15:38:25 pve6 pveproxy[2322]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:25 pve6 pveproxy[1179]: worker 2320 finished
Nov 01 15:38:25 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:25 pve6 pveproxy[1179]: worker 2323 started
Nov 01 15:38:25 pve6 pveproxy[2323]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:25 pve6 pveproxy[2321]: worker exit
Nov 01 15:38:25 pve6 pveproxy[1179]: worker 2321 finished
Nov 01 15:38:25 pve6 pveproxy[1179]: starting 1 worker(s)
Nov 01 15:38:25 pve6 pveproxy[1179]: worker 2324 started
Nov 01 15:38:25 pve6 pveproxy[2324]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1626.
Nov 01 15:38:26 pve6 pmxcfs[1488]: [quorum] crit: quorum_initialize failed: 2
Nov 01 15:38:26 pve6 pmxcfs[1488]: [confdb] crit: cmap_initialize failed: 2
Nov 01 15:38:26 pve6 pmxcfs[1488]: [dcdb] crit: cpg_initialize failed: 2
Nov 01 15:38:26 pve6 pmxcfs[1488]: [status] crit: cpg_initialize failed: 2
lines 1007-1061/1061 (END)
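As a side note, the repeating pve-ssl.key errors from pveproxy look like a consequence of the same problem: the node never becomes quorate, so /etc/pve stays read-only and the local SSL key is never generated. Once the node has quorum again, something along these lines should regenerate it (an assumption on my part, not a verified fix for this exact state):

pvecm updatecerts --force
systemctl restart pveproxy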



I understand that we did not pay for support. We are evaluating Proxmox and are about to decide between Proxmox VE and staying with VMware ESXi 6.

I love Proxmox, but I cannot recommend it if I cannot even join a node into a two-machine cluster. I know there is a price difference, but the fact is that VMware works for us and Proxmox does not, and I hate to admit I invested so much time for nothing.

Please, I need some help.
 
The host is not working now. I go to https://pve6:8006 and it does not respond.

Here are the outputs of some services. Several are broken.


root@pve6:~# systemctl status pve
pvebanner.service pve-firewall.service pve-ha-crm.service pvenetcommit.service pvesr.timer
pve-cluster.service pvefw-logger.service pve-ha-lrm.service pveproxy.service pvestatd.service
pvedaemon.service pve-guests.service pve-manager.service pvesr.service pve-storage.target
root@pve6:~# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2017-11-01 15:47:51 PDT; 5min ago
Process: 1101 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS)
Process: 1090 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=0/SUCCESS)
Main PID: 1099 (pmxcfs)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/pve-cluster.service
└─1099 /usr/bin/pmxcfs

Nov 01 15:53:20 pve6 pmxcfs[1099]: [dcdb] crit: cpg_initialize failed: 2
Nov 01 15:53:20 pve6 pmxcfs[1099]: [status] crit: cpg_initialize failed: 2
Nov 01 15:53:26 pve6 pmxcfs[1099]: [quorum] crit: quorum_initialize failed: 2
Nov 01 15:53:26 pve6 pmxcfs[1099]: [confdb] crit: cmap_initialize failed: 2
Nov 01 15:53:26 pve6 pmxcfs[1099]: [dcdb] crit: cpg_initialize failed: 2
Nov 01 15:53:26 pve6 pmxcfs[1099]: [status] crit: cpg_initialize failed: 2
Nov 01 15:53:32 pve6 pmxcfs[1099]: [quorum] crit: quorum_initialize failed: 2
Nov 01 15:53:32 pve6 pmxcfs[1099]: [confdb] crit: cmap_initialize failed: 2
Nov 01 15:53:32 pve6 pmxcfs[1099]: [dcdb] crit: cpg_initialize failed: 2
Nov 01 15:53:32 pve6 pmxcfs[1099]: [status] crit: cpg_initialize failed: 2
root@pve6:~# pvecm status
Cannot initialize CMAP service
root@pve6:~# systemctl status pve-
pve-cluster.service pve-firewall.service pve-guests.service pve-ha-crm.service pve-ha-lrm.service pve-manager.service pve-storage.target
root@pve6:~# systemctl status pve-manager
● pve-guests.service - PVE guests
Loaded: loaded (/lib/systemd/system/pve-guests.service; enabled; vendor preset: enabled)
Active: activating (start) since Wed 2017-11-01 15:47:53 PDT; 7min ago
Main PID: 1149 (pvesh)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/pve-guests.service
├─1149 /usr/bin/perl /usr/bin/pvesh --nooutput create /nodes/localhost/startall
└─1152 task UPID:pve6:00000480:000005B2:59FA4F19:startall::root@pam:

Nov 01 15:47:53 pve6 systemd[1]: Starting PVE guests...
Nov 01 15:47:53 pve6 pve-guests[1149]: <root@pam> starting task UPID:pve6:00000480:000005B2:59FA4F19:startall::root@pam:
root@pve6:~# systemctl status pve-storage
Unit pve-storage.service could not be found.
root@pve6:~# systemctl status pve-guests
● pve-guests.service - PVE guests
Loaded: loaded (/lib/systemd/system/pve-guests.service; enabled; vendor preset: enabled)
Active: activating (start) since Wed 2017-11-01 15:47:53 PDT; 7min ago
Main PID: 1149 (pvesh)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/pve-guests.service
├─1149 /usr/bin/perl /usr/bin/pvesh --nooutput create /nodes/localhost/startall
└─1152 task UPID:pve6:00000480:000005B2:59FA4F19:startall::root@pam:

Nov 01 15:47:53 pve6 systemd[1]: Starting PVE guests...
Nov 01 15:47:53 pve6 pve-guests[1149]: <root@pam> starting task UPID:pve6:00000480:000005B2:59FA4F19:startall::root@pam:
 
Here are the versions on both machines (I checked that everything is the same):

proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-25
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
 
I found the problem. I think there is a bug in the pvecm command.

I deleted the cluster, but the cluster was not the problem. The problem happens when you use pvecm add or pvecm create in Proxmox 5.1. For example, if you execute:

pvecm add IP-ADDRESS-CLUSTER

this command invokes ssh-copy-id without the -i parameter (the same goes for pvecm create), or it invokes something that in turn invokes ssh-copy-id without -i.

As you know, ssh-copy-id by default (without -i) invokes ssh-add -L, which (from the man page) "... Lists public key parameters of all identities currently represented by the agent...".

So, IF you open a console using your GUI, AND you log in to the node using ssh, AND you use the ssh agent, AND the command "ssh-add -L" lists some keys of yours, THEN the pvecm command will copy all YOUR public keys cached in the agent into the Proxmox VE cluster configuration, making them available urbi et orbi. This happens when you create the cluster too.
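A quick way to see exactly what would be copied in that situation is to ask the agent directly; and passing -i explicitly limits ssh-copy-id to a single key (this is plain OpenSSH behaviour, nothing Proxmox-specific, shown here only as an illustration):

# Everything this prints is what ssh-copy-id installs by default
# when no -i is given and an agent is running:
ssh-add -L

# Pinning one identity avoids copying every cached key:
ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.17.255.4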

It involves several ANDs, but it will happen to any Linux user fancy enough to use a GUI, and perhaps to Mac users too.

That is a horrible leak of your/my keys, but up to this point there is no error.

There is no error until you add a 2nd node to the cluster. At that point, the keys leaked by the pvecm command are listed again, so the command tries to store them in the cluster once more, but they are already there, and you get a nice:

unable to copy ssh ID: exit code 1

and after that you will never be able to add a node to the cluster again.


The workaround is to use the browser console to execute commands on PVE hosts, or to type commands in a text console opened physically at the host, or to remove your cached keys from the ssh agent (unacceptable, because if you have them there, it is for a good reason).
Still, those workarounds are not good enough, because the browser console may or may not be available, and the host may be physically unreachable.
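If pvecm really goes through ssh-copy-id as described, another possible workaround (untested on my side) would be to hide the agent from that single command, so ssh-copy-id falls back to the default identity file instead of every key the agent holds:

# Run the join with the agent socket unset for this one invocation:
SSH_AUTH_SOCK= pvecm add 172.17.255.4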

So this is a bug, and it is a public SSH key leak in the pvecm command.

PS: Please add:

pvecm remove node
pvecm destroy cluster
 
the pvecm command will copy all YOUR public keys cached in the agent into the Proxmox VE cluster configuration, making them available urbi et orbi.
Public keys are public (known by everyone) by design. How can public information leak?

Please show 'pvecm status' from node pve3.
 
Also, please post the hosts files from all three machines, specifying which host each one is from.
 
I did not mean to imply, in a derogatory way, that PVE is leaking public information. But the fact is that I could not add a node because of those public keys.

I reinstalled pve5 and renamed it to pve6, and after I was unable to add it as pve6, I reinstalled PVE on that node and renamed it to pve7 with a new IP.

On pve3, after I was unable to add the node as pve6, I deleted the cluster using a variant of the procedure described in https://forum.proxmox.com/threads/removing-deleting-a-created-cluster.18887/#post-154815 .
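(That "separate a node without reinstall" procedure is roughly the following; this is a reconstruction from the linked pages, not my exact history, so double-check against the current docs before running it.)

# Stop the cluster stack and mount pmxcfs in local mode:
systemctl stop pve-cluster corosync
pmxcfs -l

# Remove the corosync configuration and state:
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
rm -r /var/lib/corosync/*

# Restart the normal cluster filesystem service:
killall pmxcfs
systemctl start pve-cluster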

I recreated the cluster on pve3, and when I tried to add pve7 (again) from my workstation (with the ssh agent caching my public keys), I got the error:

root@pve7:~# pvecm add 172.17.255.4
unable to copy ssh ID: exit code 1

It did not happen when I executed the same command from the text console (without public keys cached in my ssh agent, or perhaps with no ssh agent running at all; it was a classic text console on the server, without a GUI).

So either the problem happened because my public keys were present on pve3 (before I deleted the cluster), or because of some interaction with the ssh agent.

Here is the output of 'pvecm status'. This status is from the new cluster I recreated after wiping out everything that was there previously.

root@pve3:~# pvecm status
Quorum information
------------------
Date: Thu Nov 2 10:43:39 2017
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1/8
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 172.17.255.4 (local)
0x00000002 1 172.17.255.16
 
root@pve3:~# cat /etc/hosts
127.0.0.1 localhost
# 127.0.1.1 esx3.ikuni.com esx3
172.17.255.4 pve3.ikuni.com pve3 pvelocalhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@pve3:~#


root@pve7:~# cat /etc/hosts
127.0.0.1 localhost
# 127.0.1.1 pve7.ikuni.com pve7
172.17.255.16 pve7.ikuni.com pve7 pvelocalhost

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters



But this is now with the new cluster, which works fine.

I have some files from the previous cluster:

.
├── etc
│   ├── corosync
│   │   ├── authkey
│   │   ├── corosync.conf
│   │   └── uidgid.d
│   └── pve
│       ├── nodes
│       └── priv
│           └── authorized_keys
└── var
    └── lib
        ├── corosync
        │   └── ringid_172.17.255.4
        └── pve-cluster
            └── config.db

and
/etc/ssh/ssh_known_hosts
/root/.ssh/known_hosts

Let me know if you want them. I still believe it is some bad interaction between pvecm and the ssh agent.
 
Do you still have a problem now that you re-created the cluster?

It will be practically impossible to troubleshoot something that no longer exists.
 
Do you still have a problem now that you re-created the cluster?

It will be practically impossible to troubleshoot something that no longer exists.

No. After I recreated the cluster, all my problems were gone. I am finishing the configuration of a 3rd host and will add it to the cluster next week. I will try the same procedure I used the first time with the old cluster, because I still suspect the ssh agent is interfering with the pvecm commands. I'll let you know the result. In the meantime, I suggest we stop here, because I do not have the logs or all the config files from the old cluster.
 
