Hi,
And sorry - I realise this question is asked frequently, but the threads I found have solutions that do not work for me or have different goals.
Now: Proxmox VE 7 on an HPE ProLiant DL320e Gen8 v2, 1 proc / 4 cores / 32 GB RAM, 2× 1.6 TB HPE SFF MU SSDs, ZFS.
Future: Proxmox VE 7 on an HPE ProLiant DL380 Gen9, 2 procs / 32 cores / 64 threads / 256 GB RAM, 2× SSD (from above), 6× 1 TB 7,200 rpm SFF HDDs, also ZFS. More storage to be added further down the road.
The two servers use different RAID controllers. The new one has a dedicated HPE RAID controller (P440, IIRC), whereas the old one has an integrated/software-ish solution (B120i, IIRC). It seems disk arrays created by these controllers are not compatible.
Goal:
Either move the existing VE + all VMs/LXCs to the new server, e.g. by physically moving all disks,
or
Install Proxmox in the latest version on the new server and migrate all VMs/LXCs using backup/restore. (Is this allowed if the source and target VE versions mismatch?)
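In case it helps to be concrete, I imagine the backup/restore path would look roughly like this (VMIDs, dump paths, and the storage name are just placeholders for illustration):

```shell
# On the old server: full backups of a VM and a container
# (stop mode for consistency; snapshot mode would reduce downtime)
vzdump 100 --mode stop --compress zstd --dumpdir /mnt/backup
vzdump 101 --mode stop --compress zstd --dumpdir /mnt/backup

# Copy the dumps to the new server, then restore them there:
# QEMU VM
qmrestore /mnt/backup/vzdump-qemu-100-*.vma.zst 100 --storage local-zfs
# LXC container
pct restore 101 /mnt/backup/vzdump-lxc-101-*.tar.zst --storage local-zfs
```

From what I've read, restoring dumps made on an older PVE release onto a newer one is generally supported; the other direction is not.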
Requirement: The two SSDs must end up in a mirrored, hardware-controlled RAID1 array and must contain Proxmox and all VMs/LXCs as today. The 6 HDDs are for storage purposes, to be made available to one LXC. That storage must use ZFS when done and must allow features like compression, etc.
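For the HDD part, I'm picturing something along these lines (pool name, dataset name, CT number, and disk IDs are placeholders, and raidz2 is just one possible layout):

```shell
# Create a ZFS pool from the six HDDs (use /dev/disk/by-id paths, not /dev/sdX)
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3 \
  /dev/disk/by-id/DISK4 /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6

# Enable compression and create a dataset for the container
zfs set compression=lz4 tank
zfs create tank/data

# Bind-mount the dataset into the LXC (CT 101 as an example)
pct set 101 -mp0 /tank/data,mp=/data
```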
So, this seems simple: just move both SSDs to the new server. According to many threads found here, that should work. It does not, likely because of RAID controller incompatibility. The new server simply does not detect the disks as bootable media, and the controller hides one of the disks from the OS because it contains Smart Array configuration data.
I also tried moving just one disk. That does not work in the new server either. I have not tested whether the old server will keep working with just one disk, although it should?
So now I'm wondering what is best to do from here. I struggle to tell which of these solutions will actually work, and which will be the least involved for the best result.
For example, I'm struggling to grasp whether using the hardware RAID controller is a good idea. According to tests it vastly improves performance, and its write cache is battery/capacitor protected. But ZFS is meant to have direct access to the disks. I guess the controller could be put in HBA mode and then all should be well, but I am not sure. I expect all disks to end up in some RAID or ZFS configuration that enables high read speeds (including compression). The network will be 2× 10 Gb links in LACP; the backbone is also 2 bonded 10 Gb links, and selected clients have 10 Gb links. So I expect read speed to be the bottleneck and would like to optimise for that.
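If HBA mode is the way to go, I assume it would be something like this with HPE's ssacli tool (the slot number is a guess, and I'd first verify the P440 firmware actually supports it):

```shell
# Show the controller's status and current mode
ssacli ctrl slot=0 show

# Switch the controller to HBA mode so ZFS sees raw disks
# (requires that no logical drives are configured; reboot afterwards)
ssacli ctrl slot=0 modify hbamode=on
```

Though as far as I understand, HBA mode on Gen9 Smart Array controllers is all-or-nothing, which would conflict with wanting a hardware RAID1 for the SSDs on the same controller at the same time.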
My own suggested solutions:
1) If the old server can keep going on just one disk, could I then wipe the other disk, install it in the new server, and set up a controller-managed RAID1 mirror with only one member disk? Then do a clean install of the latest Proxmox, cluster the two servers, and migrate the VMs over. Finally, I could add the second drive from the old server to the array as a member disk. This seems cumbersome but also the most correct approach? Will it work? Is it even possible to later add a mirror disk to an array like that? I don't know how the controller would tell which disk contains the "true data" and which is meant to be overwritten.
2) Install the latest Proxmox on one of the 6 HDDs in the new server, then cluster, migrate over, then move both SSDs, arrange them in a RAID1 array as described above, and finally move the entire content of the HDD to the SSD array (the SSDs are bigger). This seems super redundant, but may be needed if the old server cannot continue on one disk.
3) Move both SSDs over to the new server and take the daring leap of faith of clearing the Smart Array configuration data from the disks. I didn't dare do this, so as not to risk any data loss, but I don't know whether that is even a risk.
4) Some other, much smarter approach I didn't consider?
The server is not handling mission-critical stuff; I'd prefer to avoid data loss and will obviously do backups beforehand, but downtime is OK.
Kind regards,
Nicolai