I have 2 Proxmox hosts in a cluster, and I'd like to retire the second server. I've found instructions on how to remove a host from a cluster, but I'm not sure whether they apply when it's one of the last two remaining nodes. The surviving host has 2 VMs on it that it is important I keep up and running...
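For anyone in the same spot, a minimal sketch of the usual sequence, assuming the node being retired is called `pve2` (a placeholder name) and has already been powered off for good. The key extra step for a two-node cluster is lowering the quorum expectation on the survivor, since one vote out of two is not quorate:

```shell
# On the surviving node, after the retired node (here: pve2, a
# placeholder name) has been shut down permanently:
pvecm delnode pve2   # remove the dead node from the cluster config

# A lone remaining node has only 1 of 2 votes and loses quorum,
# which makes /etc/pve read-only; tell corosync to expect one vote:
pvecm expected 1

# Sanity-check cluster state and confirm the VMs are still running:
pvecm status
qm list
```

Note that `pvecm expected 1` only holds until the next corosync restart; to run the survivor as a standalone host long-term, the cluster configuration itself should be removed per the Proxmox docs.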
Should I take the H240 out of HBA mode and set up RAID 10 via the controller? I understand I'll lose shadow copies, but is there anything else I need to worry about? I've seen concerns about performance issues.
For the life of me, I can't figure out how to set the boot order. The H240 RAID controller is in HBA mode, so it isn't seen in the ACU. The BIOS only lists the controller itself, not the individual drives. I was thinking about just installing and changing the drive order until it works; is that...
Wouldn't the installer handle that portion, though? I was under the impression you could set up a ZFS pool as the boot drive. Would I need to install on a fifth drive and use the RAID 10 pool for my applications and data, then?
I just checked, and the DL360p Gen8 doesn't have UEFI.
From what I understand, you boot from the pool? I just select all four drives regardless of the configuration (RAID 10, 0, 1) and let the installer handle the rest. Is there a configuration I'm missing here?
Hello,
I installed Proxmox on a DL360p with ZFS RAID 10 across 4 x Kingston DC450R 960GB SSDs, running on an H240 in HBA mode. After installation, on first boot, I'm getting the following error:
I saw a similar post with a similar issue and ran the same commands...
Not sure what else to do...
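In case it helps others landing here: the error text above got cut off, but on this kind of setup (ZFS root behind an HBA on older hardware) the failure people most often hit is the initramfs dropping to a BusyBox shell because `rpool` can't be imported before the root mount. Sketched under that assumption only, the typical recovery looks like:

```shell
# From the initramfs (BusyBox) prompt, import the root pool by hand:
zpool import -N rpool   # -N: import the pool without mounting datasets
exit                    # resume the boot now that the pool is imported

# Once booted, give the controller more time to enumerate its disks
# before the root import is attempted, so this doesn't recur: add
# rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# then regenerate the bootloader config:
update-grub
```

If the actual error is something else (a GRUB prompt, a missing bootloader, etc.), this won't apply; posting the exact message would narrow it down.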