I need to move a few VMs off a server that was not maintained for a very long time.
I figured I'd place a new physical server next to the old machine, install a fresh Proxmox on it, create a cluster, and migrate the VMs over – so that's what I started doing.
Does this make sense, or is there a better approach?
Now I've got a shiny new server eager to start working, on which I have configured a cluster – but I can't join it from the old machine. The old one is running Proxmox Virtual Environment 4.2-2/725d76f0 (yes, I know – it's not my fault, I'm trying to fix it), and I can't find an option to join an existing cluster. The most relevant view seems to be "Summary" on the Datacenter, where it just states "Standalone node - no cluster defined" – and that's it.
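If I understand the documentation correctly, on PVE 4.x a join has to be done from the command line on the node being added, not from the GUI – roughly like this (a sketch only, I haven't dared to run it yet; `<new-server-ip>` stands in for the new machine's address):

```
# Run on the OLD node; <new-server-ip> is the machine already in the cluster.
pvecm add <new-server-ip>

# Afterwards, membership can be checked from either node:
pvecm status
```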
Maybe the problem shows itself under Node -> Services, where corosync is listed as dead: starting it leaves it dead, and querying its status on the console yields:
Code:
* corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled)
   Active: inactive (dead)
           start condition failed at Wed 2023-03-22 14:13:36 CET; 8s ago
           ConditionPathExists=/etc/corosync/corosync.conf was not met
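So, if I read that right, systemd never even tries to launch corosync here: the start is gated on a simple file-existence check, something like the following (a sketch of the ConditionPathExists logic, not Proxmox code):

```shell
#!/bin/sh
# Sketch of systemd's ConditionPathExists gate from the unit file above:
# corosync only becomes eligible to start once the config file exists.
check_start_condition() {
    if [ -f "$1" ]; then
        echo "start condition met"
    else
        echo "start condition failed"
    fi
}

check_start_condition /etc/corosync/corosync.conf
```

So the "dead" state on a standalone node without that file looks expected rather than broken.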
The path /etc/corosync on the new server seems to contain useful information that is even almost understandable – but not quite enough for me to confidently copy it over and fill in the correct values, I'm afraid.
/etc/corosync/corosync.conf on the new machine reads:
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: NewServerName
    nodeid: 1
    quorum_votes: 1
    ring0_addr: ***.***.***.***
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: MyClusterName
  config_version: 1
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
and there's a binary file called authkey.
Can I just copy these over, and fill in a section for the old server like
Code:
node {
  name: OldServerName
  nodeid: 2
  quorum_votes: 2
  ring0_addr: ***.***.***.***
}
or is that approach just plain impossible?
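Spelled out, what I'm imagining is that /etc/corosync/corosync.conf on both machines would end up with a nodelist along these lines – with the real addresses filled in, and (if I understand corosync correctly) config_version in the totem section bumped so the change gets picked up. I'm also no longer sure the quorum_votes: 2 in my draft above is right; one vote per node seems more plausible:

```
nodelist {
  node {
    name: NewServerName
    nodeid: 1
    quorum_votes: 1
    ring0_addr: ***.***.***.***
  }
  node {
    name: OldServerName
    nodeid: 2
    quorum_votes: 1    # assuming one vote per node is what's wanted
    ring0_addr: ***.***.***.***
  }
}
```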
Thanks for reading this far – I'd appreciate any input, even if it is "No, don't do that, take physical backups and never touch that old machine again", though I'd prefer a migration approach.