Migrating to a new server, reusing hard drives under a new RAID controller.

nicoleise

New Member
Apr 25, 2023
Hi,

And sorry - I realise this question is asked frequently, but the threads I found have solutions that do not work for me or have different goals.

Now: Proxmox VE 7 on an HPE ProLiant DL320e Gen8 v2, 1 proc / 4 cores / 32 GB, 2x 1.6 TB HPE SFF MU SSDs, ZFS.

Future: Proxmox VE 7 on an HPE ProLiant DL380 Gen9, 2 procs / 32 cores / 64 threads / 256 GB, 2x SSD (from above), 6x 1 TB 7,200 rpm SFF HDDs, also ZFS. More storage to be added further down the line.

The two servers use different RAID controllers. The new one has a dedicated HPE RAID controller (P440, iirc) whereas the old one has an integrated, software-ish solution (B120i, iirc). It seems arrays created by one are not compatible with the other.

Goal:
Either move the existing VE + all VMs/LXCs to the new server, e.g. by physically moving all disks,

or

Install the latest version of Proxmox on the new server and migrate all VMs/LXCs using backup/restore. (Is this allowed if the source and target VE versions mismatch?)
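For the backup/restore route, I imagine something along these lines (VMID 100, CT 101 and a backup storage named "backup" are just placeholders, and the dump file names are abbreviated):

# On the old server: back up the guests with vzdump
vzdump 100 --storage backup --mode snapshot --compress zstd
vzdump 101 --storage backup --mode snapshot --compress zstd

# On the new server: restore the dumps onto the new storage
qmrestore /mnt/pve/backup/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs
pct restore 101 /mnt/pve/backup/dump/vzdump-lxc-101-<timestamp>.tar.zst --storage local-zfs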

Requirement: The two SSDs must end up in a hardware-controlled RAID1 mirror, and must contain Proxmox and all VMs/LXCs as today. The 6 HDDs are for storage purposes, to be made available to one LXC. The storage must use ZFS when done and must allow features like compression, etc.
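To illustrate what I mean for the HDD storage, roughly (pool name, the raidz2 layout, disk IDs and container ID 101 are placeholders; I haven't settled on the vdev layout yet):

# Create the pool on the 6 HDDs and enable compression
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 /dev/disk/by-id/ata-HDD3 \
  /dev/disk/by-id/ata-HDD4 /dev/disk/by-id/ata-HDD5 /dev/disk/by-id/ata-HDD6
zfs set compression=lz4 tank
zfs create tank/storage

# Make the dataset available inside the LXC as a bind-mounted mount point
pct set 101 -mp0 /tank/storage,mp=/mnt/storage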


So, this seems simple: just move both SSDs to the new server. According to many threads found here, that should work. It does not, likely because of the RAID controller incompatibility. The new server simply does not detect the disks as bootable media, and the controller hides one of the disks from the OS because it contains Smart Array configuration data.
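If it helps, I suppose I could boot the new server from a live USB and check what the controller actually exposes and what is left on the disks (sdX is a placeholder), e.g.:

lsblk -o NAME,SIZE,MODEL,SERIAL   # which disks the OS actually sees
wipefs --no-act /dev/sdX          # list partition/RAID signatures without erasing anything
zpool import                      # scan for pools that would be importable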

I also tried just moving one disk. That won't work in the new server either. I have not tested if the old server will continue working with just one disk, although it should?

So now I'm wondering what is best to do from here. I struggle to tell which of the following solutions will actually work, and which will be the least involved for the best result.

For example, I'm struggling to grasp whether using the hardware RAID controller is a good idea. According to tests it vastly improves performance, and its write cache is battery/capacitor protected. But ZFS is meant to have direct access to the disks. I guess the controller would be in HBA mode and then all should be well, but I am not sure. I expect all disks to be in some RAID or ZFS configuration that enables high read speeds (including compression). The network will be 2x 10Gb LACP links, the backbone is also two bonded 10G links, and selected clients have 10Gb links, so I expect read speed to be the bottleneck and would like to optimise for that.
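If HBA mode is the way to go, my understanding is that the P440 can be switched with HPE's ssacli tool, roughly like this (the slot number is a guess and the exact option may depend on the firmware version):

ssacli ctrl all show config            # identify the controller and its slot
ssacli ctrl slot=0 modify hbamode=on   # switch to HBA / pass-through mode
# reboot afterwards so the disks show up as plain /dev/sd* devices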

My own suggested solutions:

1) If the old server can keep going on just one disk, I could clear the other disk, install it in the new server, and set up a hardware-controlled RAID1 mirror with only one member disk. Then do a clean install of the latest Proxmox, cluster the two servers and migrate the VMs over (see the sketch after this list). Finally, I could add the second drive from the old server to the array as a member disk. This seems cumbersome but also the most correct approach? Will it work? Is it possible, for example, to later add a mirror disk to an array like that? I don't know how the controller would know which disk contains the "true" data and which is meant to be overwritten.

2) Install the latest Proxmox on one of the 6 HDDs in the new server, then cluster and migrate over (again, see the sketch after the list), then move both SSDs, arrange them in a RAID1 array as described above, and finally move the entire content of the HDD onto the SSD array (the SSDs are bigger). This seems super redundant, but may be needed if the old server cannot continue on one disk.

3) Move both SSDs over to the new server and take the daring leap of faith of clearing the Smart Array configuration data from the disks. I didn't dare do this, so as not to risk data loss, but I don't know whether that is even a risk or not.

4) Some other, much smarter approach I didn't consider?
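For options 1 and 2, I picture the cluster-and-migrate part roughly like this (cluster name, node name, IP and guest IDs are placeholders):

# On the old server (the one that already has guests)
pvecm create mycluster

# On the new, still empty server (a node must have no guests when it joins)
pvecm add 192.0.2.10              # IP of the old server

# Then move the guests to the new node; local disks are copied over the network
qm migrate 100 newnode --online --with-local-disks
pct migrate 101 newnode --restart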

The server is not handling mission-critical stuff, and while I'd prefer to avoid data loss (and will obviously do backups beforehand), downtime is OK.


Kind regards,
Nicolai
 
For example, I'm struggling to grasp whether using the hardware RAID controller is a good idea. According to tests it vastly improves performance, and its write cache is battery/capacitor protected. But ZFS is meant to have direct access to the disks. I guess the controller would be in HBA mode and then all should be well, but I am not sure.
Have you read the OpenZFS documentation on why running ZFS on a RAID controller isn't the best option? https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
 
Have you read the OpenZFS documentation on why running ZFS on a RAID controller isn't the best option? https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
Yes, although it was a while ago (when I set up the current/old server). Now that I really think about it, iirc I actually set that server up with the disks as plain disks and a mirror in ZFS. But a major contributor to that decision was the fact that the server's RAID controller was a hybrid/semi-software solution, and everything I read pointed to it not really providing much benefit.
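(For my own sanity I should verify that on the old server first, assuming the default root pool name rpool:)

zpool status rpool                # should show a two-disk ZFS mirror if memory serves
lsblk -o NAME,SIZE,TYPE,FSTYPE    # shows how the two SSDs are partitioned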

The reason I am considering hardware RAID is that I read from several sources that it does still work fine with ZFS, and I probably misread the section (far down in the link below) about SATA controllers and concluded that hardware RAID was faster:

https://calomel.org/zfs_raid_speed_capacity.html

Reading it again, I see that they test a single drive, so the speed difference they show is likely not about RAID vs. no RAID but rather about having dedicated (and proper) controllers for the disks.

If that understanding is correct, I guess I concur that a ZFS mirror is the better approach.

That still leaves me with essentially the same questions though: can the old server continue on just one disk, and if so, can I move the other disk to the new server and essentially migrate the environment that way?
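I guess I could test the single-disk part non-destructively before pulling anything, by taking one mirror member offline and seeing whether the pool just runs degraded (the device name is a placeholder):

zpool offline rpool ata-SSD2-part3   # temporarily take one mirror member out
zpool status rpool                   # pool should show DEGRADED but keep working
zpool online rpool ata-SSD2-part3    # bring it back and let it resilver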

Can I later add the second disk to a (virtual) mirror and have that understand which disk has the "true" data to duplicate?
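From what I read in the zpool man page, attaching a second disk to a single-disk vdev goes roughly like this, and the resilver copies from the device that is already in the pool onto the newly attached one (device names are placeholders):

zpool attach rpool ata-SSD1-part3 ata-SSD2-part3   # existing device first, new device second
zpool status rpool                                 # watch the resilver progress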

And if all the above holds true, how do I safely make the first disk I move bootable in the new server? I already tried to boot the new server from either one or both disks, and it simply does not work. The RAID controller hides one disk from the OS, and the other doesn't seem to contain bootable media.
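Partly answering myself: if the disk does become visible in the new box, I understand the boot partitions can be (re)initialised with proxmox-boot-tool from the running system, assuming a PVE 7 ZFS install where partition 2 is the ESP (the device name is a placeholder):

proxmox-boot-tool status             # shows which ESPs are currently configured
proxmox-boot-tool format /dev/sdX2   # only if the ESP is empty or damaged
proxmox-boot-tool init /dev/sdX2     # copies kernels and bootloader onto the ESP
proxmox-boot-tool refresh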
 
