Cisco UCS install no luck

AHP

Apr 24, 2024
Hello,

Has anyone had luck installing Proxmox on Cisco UCS B200 M5 or M6 blades? I'd really appreciate any guides, hints, etc. I believe the installation fails due to the lack of Cisco drivers.

Thank you
 
Late to the conversation..

Did you ever get this sorted out? Did you encounter a specific error?
Did the blades have local storage, SAS, or SD cards?

The local storage has to be configured in the UCS server profile, otherwise the installer won't see it. I ran into this with ESXi installs on M4 and M5 blades.

Thanks
 
Probably a bit late to the note here.... but with the B200 M5 blades I haven't had a lot of issues. I have, however, been jumping through some hoops before I go nuts and build the cluster completely:

- The M5s that I have do not have the SATA/SAS boards, so I can't use SSDs as boot drives.
- I went down the iSCSI boot disk road and got blocked at every turn. I don't know if the problem is me, Proxmox, the SAN, or UCS, and I'm not well versed enough in the three to know which to point the finger at. I can tell you that after some time working with multipath, iscsiadm, and some other tools, I have it working GREAT once booted - but getting it there required using the two 64 GB SD cards as a boot drive. I'm stuck now trying to see if I can remap /etc/pve, swap, and /var/log to offload to iSCSI - everything starts sooner than iSCSI does, and I end up in rescue mode more often than a stable booted system (see the sketch after this list). I want this to be as vanilla as possible so it's stable and easy to maintain.
- I didn't have to do anything special driver-wise, but I don't know if that's because my use case is pretty darned basic.
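For reference, the direction I'm experimenting with for those remapped mounts is marking them network-dependent in /etc/fstab, so systemd waits for iscsid before trying to mount them. A rough sketch - the multipath device name is a placeholder, not my actual setup:

```
# /etc/fstab - iSCSI-backed mounts must wait for the network and iscsid,
# otherwise boot drops to rescue mode when the device isn't there yet.
# /dev/mapper/mpatha-part1 is a placeholder for whatever multipath exposes.
/dev/mapper/mpatha-part1  /var/log  ext4  defaults,_netdev,x-systemd.requires=iscsid.service  0  2
```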

The goal is to build these, plus two more M4 blades I haven't jumped into yet, into a cluster....
 
OK, I have some good success to report back after spending a lot of time on this..... vacations are nice for uninterrupted labbing...

All six hosts (M4 and M5 blades) boot off iSCSI without using any of the SD media. I have networking going strong, and I was able to add my storage LUNs with multipath over iSCSI without issue. Network performance seems good. I did some tests for HA (still having to learn that) between hosts, and that works nicely as well.
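For anyone following along, adding the storage LUNs once booted was basically standard open-iscsi; a sketch of the flow, with a placeholder portal IP:

```
# discover targets on the array's portal (IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# log in to every discovered node
iscsiadm -m node --login

# confirm multipath sees both fabric paths to each LUN
multipath -ll
```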

For someone who has never really played in the UCS arena, it took longer than I thought it would to gather the concepts and put it all together.

The best suggestion I have is that you won't find "this is how you do UCS with software A and product B" videos. Decouple the software from the hardware - watch the parts of each video that apply only to the piece you want.... To get iSCSI going, I followed someone's YouTube video on booting a Dell server off a Synology. Same concepts, different hardware, but it got me over the hill.
 
Would love to chat with you about this - is that a possibility? Working on this right now, actually.
 
Feel free to post your questions here - I am more than happy to answer as time permits. The more people who can see it and work through the challenges, the more it will help others versus having a sidebar conversation.
 
A few questions I have are as follows:

1. What did your vNIC layout look like in the service profile? Were the iSCSI vNICs 0 and 1, or 2 and 3?
2. How did you handle MPIO, or did you just use fabric failover?
3. Were there any special configuration adjustments you made on the UCS side?
4. Did you break out the networking like this: iSCSI, management, and then VM networks, setting each on its own set of vNICs? Or, again, did you just rely on fabric failover?


Just trying to get an optimal configuration with redundancy, so anything you are willing to share would be greatly appreciated.

Sincerely,
Bob Evans
 
Keeping in mind I am a complete UCS newb:

1) I don't know that I put them in a specific order - I just assigned the vNICs to be iSCSI and then tied those to the boot order. That allowed me to boot without issues.

https://www.youtube.com/watch?v=Lx5GJwCGUL0 is a YouTube video that I used to help get the booting to work properly. It's for a Dell server, but if you follow it along with some other videos on iSCSI-booting UCS, you can get this going.

2) MPIO - I did this after I completed the initial install; a rough sketch of the config is below.
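Roughly, that meant installing multipath-tools and dropping in a minimal /etc/multipath.conf; a sketch of the shape of it (not my exact file):

```
# /etc/multipath.conf - minimal starting point
defaults {
    user_friendly_names yes    # mpathX names instead of raw WWIDs
    find_multipaths     yes    # only claim devices that have multiple paths
}
```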

3) As far as I know, UCS was very vanilla - again, considering my general inexperience (I'm kind of learning as I go). I had to wipe and restart it a ton of times to get it right.

4) I have a group of people that I'm using for testing. Each has their own A/B management NIC and A/B traffic NICs. They all have their own VLANs broken out, assigned to their own NICs, for their own VMs. They share the iSCSI A/B NICs attached to the storage (which is also redundant). These are then bonded in each Linux system. I wrote a Python script to automate creating the bonds with failover, based on the MACs reported in UCS and on the Linux hosts - the stanzas it writes look roughly like the sketch below.
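The script just writes standard ifupdown bond stanzas into /etc/network/interfaces; roughly this shape, with hypothetical NIC names:

```
# /etc/network/interfaces - active-backup bond over the A/B fabric NICs
# (eno5/eno6 are placeholders for the MAC-matched UCS vNICs)
auto bond1
iface bond1 inet manual
    bond-slaves eno5 eno6
    bond-mode active-backup
    bond-miimon 100
```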

Hope this helps?
 
I am just now diving into a Cisco UCS B200 M6 blade iSCSI boot proof of concept as a potential candidate for migration off VMware.

Currently 200+ Cisco M5/M6 blades, all diskless, booting from iSCSI Pure Storage arrays. ESXi works out of the box: it sees the Pure/UCS-assigned iSCSI boot volume on install and maintains the connection to the target.

The UCS blade vNIC setup is as follows; I was hoping to carry this over to Proxmox. On installation, Proxmox sees all these NICs but no boot volumes.

vnic0-mgmt-A
vnic1-mgmt-B
vnic2-iscsi-A
vnic3-iscsi-B
vnic4-vmotion-A
vnic5-vmotion-B
vnic6-prod-A
vnic7-prod-B

The Proxmox 8.4 installer detects no hard disks, and neither does Debian 12 (I tried a Debian-then-Proxmox-on-top installation). Do we know what driver is required to see the iSCSI-mounted volume? Is a driver all I am missing?

Current environment: Cisco UCS, VMware, Pure Storage (FlashStack) running in Intersight. No local UCS Manager.
 
I was able to get iSCSI boot working by following this post (I ignored the SFP config, as it was not necessary for blade servers):
https://forum.proxmox.com/threads/install-pve-directly-on-iscsi-target.101750/

A few modifications were needed because my management interfaces required VLAN tagging.

It appears that the UCS/Intersight profile configuration for iSCSI booting is almost entirely ignored. I had to manually configure the host IQN, the Pure Storage target IQN, and the iSCSI interface IPs - all of which are passed to the installer automatically during a VMware vSphere install. Once you set all of these manually in Debug Install mode, the main GUI installer will see your install volume. From there it was a seamless process.
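For the record, the manual part in the debug shell boiled down to standard iproute2 and open-iscsi commands. Every IQN, IP, VLAN ID, and interface name below is a placeholder for your own values:

```
# set the host (initiator) IQN to the one the Pure array expects
echo "InitiatorName=iqn.2024-01.com.example:pve-blade01" > /etc/iscsi/initiatorname.iscsi

# my mgmt/iSCSI NICs needed VLAN tagging, so bring up a tagged interface
ip link set enp6s0 up
ip link add link enp6s0 name enp6s0.100 type vlan id 100
ip addr add 192.0.2.21/24 dev enp6s0.100
ip link set enp6s0.100 up

# start the iSCSI daemon if it isn't running, then discover and log in
iscsid
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.example -p 192.0.2.10:3260 --login
```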

I plan on finding a way to script these installs, but for now I will manually configure 8 UCS blades to do some validation testing.