Migration off a very old Proxmox server

joschtl

New Member
Mar 22, 2023
I need to move a few VMs off a server that hasn't been maintained for a very long time.
I figured I'd place a new physical server next to the old machine, install a fresh Proxmox on it, create a cluster, and migrate the VMs – so that's what I started doing.

Does this make sense, or is there a better approach?

Now I've got a shiny new server eager to start working, on which I have configured a cluster – but I can't join it from the old machine. The old one is running Proxmox Virtual Environment 4.2-2/725d76f0 (yes, I know – it's not my fault, I'm trying to fix it), and I can't find any option to join an existing cluster. The relevant view seems to be "Summary" on the Datacenter, where it states "Standalone node - no cluster defined" – and that's it.
Maybe the problem shows up under Node -> Services, where corosync is listed as dead; starting it keeps it dead, and querying its status on the console yields:
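From what I can tell, joining a cluster on a release this old is CLI-only anyway, so I assume the attempt would have to look roughly like this on the old node (address masked; this is just my reading of the docs, not something I've gotten to work):

Code:
# on the old node, which is supposed to join the cluster created on the new one
pvecm status
pvecm add ***.***.***.***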

Code:
* corosync.service - Corosync Cluster Engine
   Loaded: loaded (/lib/systemd/system/corosync.service; enabled)
   Active: inactive (dead)
           start condition failed at Wed 2023-03-22 14:13:36 CET; 8s ago
           ConditionPathExists=/etc/corosync/corosync.conf was not met

The path /etc/corosync on the new server seems to contain useful information, which is even almost understandable – but not enough for me to just copy it over and fill in the correct values, I'm afraid.

/etc/corosync/corosync.conf on the new machine reads:

Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: NewServerName
    nodeid: 1
    quorum_votes: 1
    ring0_addr: ***.***.***.***
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: MyClusterName
  config_version: 1
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}

and there's a binary file called authkey.

Can I just copy these over, and fill in a section for the old server like

Code:
node {
    name: OldServerName
    nodeid: 2
    quorum_votes: 1
    ring0_addr: ***.***.***.***
}

or is that approach just plain impossible?

Thanks for reading this far – I'd appreciate any input, even if it is "No, don't do that, take physical backups and never touch that old machine again", though I'd prefer a migration approach.
 
OK, found the answer myself. From https://pve.proxmox.com/wiki/Cluster_Manager:
Running a cluster of Proxmox VE 6.x with earlier versions is not possible. The cluster protocol (corosync) between Proxmox VE 6.x and earlier versions changed fundamentally. The corosync 3 packages for Proxmox VE 5.4 are only intended for the upgrade procedure to Proxmox VE 6.0.
So, it's physically moving USB disks for me. Ain't nobody got time to wait for slow internet.
 
Hehe. Ideally, you'd have a network share that both can access. Then create a backup on the old node and restore it on the new one. That should work.

Alternatively, you can use an external disk. But you'll need to mount it yourself via the CLI :)
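
Roughly like this, assuming VM ID 100 and a share mounted at /mnt/share on both nodes (both placeholders – adjust to your setup):

Code:
# on the old node: full backup of the VM onto the share
vzdump 100 --dumpdir /mnt/share --mode stop --compress lzo

# on the new node: restore the dump onto local storage as VM 100
qmrestore /mnt/share/vzdump-qemu-100-<timestamp>.vma.lzo 100 --storage local-lvm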
 
I was hoping to minimize downtime by writing the VM to an external disk, plugging that physically into the new server, and running the VM from the backup medium immediately. Migrating to the internal disks of the new server should then be possible while the VM is running, I suppose.
I'll try this approach with a dummy VM first, of course.
 
Ah, you mean: on the old host, add an external disk as storage and move the VM disk over; then power the VM down, unmount the external disk, attach it to the new host as storage, add it to a new VM (or a config copied from the old one), and attach the disk there; then start the VM and move the disk image to actual local storage?

So the downtime is basically just unmount -> move external disk -> mount? That could work :)
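
In commands, the plan might look something like this – assuming the external disk is already formatted and mounted at /mnt/external, the VM is ID 100, and its disk is scsi0 (all placeholders, just a sketch):

Code:
# on the old host: register the mount point as a directory storage
pvesm add dir external --path /mnt/external --content images

# move the running VM's disk onto the external storage
qm move_disk 100 scsi0 external

# power the VM down, move the disk to the new host, run the same
# 'pvesm add dir' there, and recreate (or copy) the VM config

# on the new host: move the image onto internal storage while the VM runs
qm move_disk 100 scsi0 local-lvm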

What OS is running in the VM? It's possible that you'll need to set the machine type to a version matching the QEMU version on the old host if the guest doesn't detect all its hardware on the new host.
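For example (the machine type given is purely illustrative – pick whatever matches the QEMU version on the old host):

Code:
# pin VM 100 to an older machine type; pc-i440fx-2.5 is a placeholder
qm set 100 --machine pc-i440fx-2.5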
 