Move Disks with Proxmox to New Server

nicoleise

New Member
Apr 25, 2023
Hi everyone,


I'm migrating to new server hardware but would like to keep using the same disks for Proxmox (they are expensive enterprise SSDs). I'll refer to the disks as A and B.

If I move disks A and B to the target server, it fails to boot. The RAID controller in the new server shows that one disk is normal and that the other "contains SMART configuration data and has been hidden from the OS". I have the option to clear this data, but I'm unsure what that entails.

If I move only disk A, the server does not boot, and there are no messages in the RAID controller or elsewhere.

If I move only disk B, the server does not boot, and the same message as above is shown in the RAID controller.

Reversing the order of the disks does nothing.

Reinstalling the disks back into the previous server allows this server to boot into Proxmox again and reports no errors.

I'm at a bit of a loss as to how to proceed.

Can I safely clear the SMART data without losing the data on the disk? Or do I need some intermediate step, like a fresh install of Proxmox on an extra HDD, clustering it with the existing server, migrating the VMs, then shutting down the old server and installing Proxmox again on the target disks in the new server?

And finally, if zpool displays these disks as a mirror, can Proxmox simply boot in the old server from just one of these disks? If so, I could move the other disk, install Proxmox on it and move everything over, then add the second disk to the mirror in the target server?
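For reference, this is how I'm checking whether the pool is a mirror; rpool is what the Proxmox installer names the root pool, so I assume that's what I have:

Bash:
# show the pool layout; a mirror appears as a "mirror-0" vdev with both disks listed under it
zpool status rpool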
 
Sorry, I have no experience with hardware RAID, but I do think each controller (or firmware version) can be different and unable to use drives from another.
And finally, if zpool displays these disks as a mirror, can Proxmox simply boot in the old server from just one of these disks? If so, I could move the other disk, install Proxmox on it and move everything over, then add the second disk to the mirror in the target server?
That's much like the way I install new major Proxmox versions. I install Proxmox in a VM with ZFS; once it is configured, I add a USB drive (with ESP partitions) as a mirror. Then I boot the physical system from the USB drive, set up partitions on the physical drives and make a 3-way mirror with those drives. Once the system boots correctly, I can remove the USB drive (from the mirror).
Note that both systems must boot the same way, either both with UEFI or both with legacy BIOS; otherwise the new one won't boot.
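Roughly, the attach/detach part of that looks like the following. The device names and partition numbers are only examples and depend on the actual layout (on a default Proxmox ZFS install, partition 2 is typically the ESP and partition 3 the ZFS partition):

Bash:
# /dev/sdX = existing ZFS member, /dev/sdY = the USB drive (example names only)
zpool attach rpool /dev/sdX3 /dev/sdY3   # add the USB partition to the mirror
proxmox-boot-tool format /dev/sdY2       # make the ESP on the USB drive bootable
proxmox-boot-tool init /dev/sdY2
# boot the physical machine from the USB drive and attach the real disks the same way, then:
zpool detach rpool /dev/sdY3             # once everything has resilvered, drop the USB drive again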
 
It seems the old server ran the disks in hardware RAID.
You cannot swap them if the RAID controller in the new server is a different one.
I'll double-check tomorrow, but I believe the disks are not set up in a RAID array but rather presented to Proxmox as individual disks.

When I look in Proxmox I have sda and sdb in a mirror. The sizes reported equal the physical disks (1.6 TB each). Wouldn't this indicate that no (hardware) raid is involved?

Back when I set up the original server, I believe I wanted to use hardware RAID, but i) found that the controller in that server is not a true hardware RAID controller but some hybrid software solution, and ii) learned about precisely this scenario: that I wouldn't be able to actually use the RAID for redundancy if, for example, the controller broke, without replacing it with an identical or compatible controller.
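In case it helps, this is what I'd run to see what the OS is actually presented with; if a hardware RAID volume were in play, I'd expect a logical-volume model name rather than the real SSD model (smartctl comes from the smartmontools package, and /dev/sda is just an example name):

Bash:
# list block devices with the model and serial the OS sees
lsblk -o NAME,MODEL,SERIAL,SIZE
# show the identity information of one disk directly
smartctl -i /dev/sda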
 
When I look in Proxmox I have sda and sdb in a mirror. The sizes reported equal the physical disks (1.6 TB each). Wouldn't this indicate that no (hardware) raid is involved?
Two single-disk RAID0 volumes, or JBOD, would look the same even if HW RAID were used.

Can you set the raid controller of the new server to HBA mode?
 
Two single-disk RAID0 volumes, or JBOD, would look the same even if HW RAID were used.

Can you set the raid controller of the new server to HBA mode?


The RAID controller in the new server was in HBA mode during everything described above, and still is.
 
I've attached the Disk information shown in Proxmox (7.1-7 btw).

Physically, the (old) server contains 2 x 1.6 TB enterprise SSDs. The Disks menu seems to report two unique disks of this size (the serial numbers differ). ZFS/rpool/details seems to suggest that these are configured as a mirror in Proxmox, again with unique serials and with an rpool size of 1.6 TB, equivalent to one disk.

I take that to mean the disks are exposed individually to Proxmox, which is then configured to mirror ("vRAID1") them in software?

If so, I really don't understand why these disks don't just boot in the new server. I'd love a pointer on the best procedure going forward.
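In case it's relevant, the partition layout of one of the disks can be inspected like this (assuming gdisk is installed; /dev/sda is just an example name):

Bash:
# print the GPT partition table; a default Proxmox ZFS install typically shows a small
# BIOS boot partition, an EFI system partition and the large ZFS partition
sgdisk -p /dev/sda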
 

Attachments

  • Old-ProxmoxDisks.png
  • Old-ProxmoxZFS.png
Note that both systems must boot the same way, either both with UEFI or both with legacy BIOS; otherwise the new one won't boot.
And have you already checked disabling/enabling CSM in the BIOS of the new server, in case the two servers use different UEFI/BIOS boot modes?
 
And have you already checked disabling/enabling CSM in the BIOS of the new server, in case the two servers use different UEFI/BIOS boot modes?

Ugh, I missed that note at the end of leesteken's post. I knew that the new server was UEFI, and that I would like it to be. But checking the old server, it seems it is in legacy mode:

Bash:
root@pve:~# efibootmgr
EFI variables are not supported on this system.
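For completeness, another way to check the boot mode of a running system:

Bash:
# this directory only exists when the kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted in legacy BIOS mode"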

So obviously, there's the culprit. How best to proceed though? My take would be this, but I don't know if that makes sense:

  1. Power down VMs, LXCs, Host on old server
  2. Remove disk B from old server. Turn old server on again (will that work?)
  3. Install disk B into new server, format it, install Proxmox afresh.
  4. Set up disk B to be (the only) member of a ZFS mirror (non-hardware RAID1)
  5. Cluster new server with old.
  6. Migrate all VMs/LXCs
  7. Shutdown old server
  8. Install disk A into the new server, format it and add it as a member of the configured mirror (see the sketch below).
  9. Decommission the old server.
I'm unsure of the details in the above (e.g. whether it's possible to create a ZFS mirror with only one disk during the install), since I haven't installed Proxmox in this way before.
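My assumption is that step 8 (turning the single-disk pool into a mirror) would look roughly like this; the pool name, device names and partition numbers are guesses on my part and would need to be verified first:

Bash:
# disk B already installed and running as /dev/sda, disk A freshly added as /dev/sdb (example names)
sgdisk /dev/sda -R /dev/sdb              # replicate the partition table of disk B onto disk A
sgdisk -G /dev/sdb                       # give disk A new random GUIDs
zpool attach rpool /dev/sda3 /dev/sdb3   # attach disk A's ZFS partition, turning the pool into a mirror
proxmox-boot-tool format /dev/sdb2       # make disk A's ESP bootable as well
proxmox-boot-tool init /dev/sdb2
zpool status rpool                       # watch the resilver until it completes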

I'm also unsure whether I'm running circles around the easy solution - for example, whether it's overall simpler to back up all VMs, format both disks, move both disks, install Proxmox afresh on them and restore from backups.
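If I go the backup/restore route instead, I assume it would look roughly like this; the VMIDs, storage name and archive paths below are placeholders:

Bash:
# on the old server: back up a VM (100) and a container (101) to some backup storage
vzdump 100 101 --storage backupstore --mode snapshot --compress zstd
# on the freshly installed new server, once the backup files are reachable there:
qmrestore /mnt/backup/dump/vzdump-qemu-100-XXXX.vma.zst 100
pct restore 101 /mnt/backup/dump/vzdump-lxc-101-XXXX.tar.zst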

Oh, and the BIOS on the target server does not seem to have a CSM option, if that matters.
 
LK1600GEYMV is the real model name from HP; if the disk were behind the hardware RAID controller, the model name would be HP_LOGICAL_VOLUME.
The Proxmox installer installs ZFS on UEFI with the systemd-boot loader instead of GRUB.
Going from legacy to UEFI boot will not be straightforward.
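On installations that use proxmox-boot-tool (recent Proxmox VE releases), you can check which bootloader is actually in use:

Bash:
# on proxmox-boot-tool managed systems, reports whether the ESPs boot via UEFI (systemd-boot) or legacy (GRUB)
proxmox-boot-tool status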