[SOLVED] Possibly the dumbest question

Decimus

Member
Dec 28, 2022
I am running PVE 8.4.19 on an HP ProLiant ML350 Gen10. Between the limited IPMI interface (a subscription for local IPMI is weird and I won't pay it) and the issues I am having with the LOM and backplane, I am just over this host.

I am going to migrate my PVE instance over to a Dell PowerEdge T440 because I have one with a perpetual iDRAC license, its LOM works without pain, and I can add more than 8 drives.

Since these two systems are on the same generation of Intel chips (I will be pulling the 5218s that are in my HP and moving them to the new Dell), I was curious if there would be any major issues with just taking my drives out of the HP and moving them to the Dell. Both SAS adapters are HBA-only, so the only thing I can think of is platform-specific drivers for iLO/iDRAC.

I have moved RHEL-based disks between servers before with little to no issue, but given that this is a Debian derivative, I didn't know if there were any distro-specific considerations.

I fully understand that this is not recommended. The CentOS guys were very upset with me last time. This is a lab host and I am not super concerned about minor perf drops, since this is a learning system, not a deployed system. Just trying to skip some steps in the migration, because I have to take the thing offline anyway to swap parts around and put the new one into the rack.
 
You’ll probably be fine. On a standalone lab box, moving the disks to the new server is likely to work as long as boot mode, NIC naming, and storage references don’t change in a way your current install depends on. I wouldn’t expect a Debian/Proxmox-specific blocker here.
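If it helps, here's a quick pre-move checklist you can run on the HP before pulling the drives. This is a sketch assuming a stock PVE install; the file paths (`/etc/network/interfaces`, `/etc/pve/storage.cfg`) are the default locations and may differ if you've customized things:

```shell
# Pre-move sanity checks (sketch; assumes a stock Proxmox VE install)

# 1. Boot mode: if EFI variables exist, this is a UEFI install --
#    set the Dell's boot mode to match before first boot.
[ -d /sys/firmware/efi ] && echo "boot mode: UEFI" || echo "boot mode: legacy BIOS"

# 2. NIC names and storage references: /etc/network/interfaces pins
#    names like eno1/enp3s0f0 that can change on a different board,
#    and /dev/sdX paths in fstab or storage.cfg may reorder on a new
#    HBA (UUIDs and ZFS pool names travel with the disks and are safe).
for f in /etc/network/interfaces /etc/fstab /etc/pve/storage.cfg; do
    echo "== $f =="
    [ -f "$f" ] && cat -- "$f" || echo "(not present on this machine)"
done
```

If the network doesn't come up on the Dell's first boot, check `ip link` from the console (or iDRAC virtual console) for the new NIC names and update `/etc/network/interfaces` to match.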

Also, what happened last time with CentOS? Was it an actual technical issue, or were they mostly unhappy because it’s not the “recommended” way to do it?
 
I wouldn't think so, but I am less familiar with Debian than I am with RHEL. I do run Ubuntu on my work laptop, but I am used to doing actual server work in CentOS and ESXi.

As far as what happened last time: I got on a forum and asked a very similar question about trying to revive a dead host by transplanting drives. A mobo had died and I didn't really want to, nor did I see a need to, buy an $800 replacement, because I had more than a dozen servers of similar enough spec, from the same line, getting ready to be retired. The CentOS community guys were very upset about this for some reason, and I was essentially told that if I can't afford to replace the mobo, then why am I working with servers? Back then I was just getting into my lab work and learning the ropes, and I guess I used some incorrect terminology somewhere, but no matter how I cut it, they were just incredibly rude. I am a decade wiser now, but I am still not super familiar with low-level items in Debian, so I rely on the expertise of others in some things.

I appreciate your response, though! I will try to find some time this week to move all of my stuff over.

My biggest absolute fail right now is how much of my home network relies on this one box. Home Assistant, syslog, XDR/SIEM, reverse proxy, etc. Too many things on one box.