HP Agentless Management Service

root@pve-dev:~# hpasmcli -s "show fan"
Great that it works for you. Really.

On my ProLiant MicroServer Gen10 Plus with current PVE, "hp-health" is simply not installable because of those unresolved dependencies.

I just wanted to post a reference to some other guys having the exact same problem: https://phabricator.wikimedia.org/T300438 - perhaps they'll find a solution...


Edit/added: pre-installing some libs and forcing installation leads to:
Code:
~# hpasmcli -s "show fan"

ERROR: Failed to get SMBIOS system ID.
This does not seem to be the HP Proliant Server that you are looking for.
ERROR: hpasmcli only runs on HPE Proliant Servers.
 
Hi,

So, if I understand correctly, these are the steps to take.

# Add hp Public keys
https://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub
https://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub
https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub

# Add HPE repo - apt_repository:
http://downloads.linux.hpe.com/SDR/repo/mcp {{ ansible_facts['lsb']['codename'] }}/current non-free

# Install all packages
ssa ssacli ssaducli storcli amsd

The keys seem to be deprecated.

# wget -q -O - https://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
gpg: no valid OpenPGP data found.
 
I just tested on my Gen8s and I had forgotten that amsd is for Gen10 only. One needs to install hp-ams instead. I tested it now and it works nicely.

So the steps would then be to do something like this:

You could probably just download the deb and install it, but if there are dependencies, do follow through. There's a bunch of other "nice to have" tools installable from HPE's repo.

Install gnupg (most probably already installed)
Code:
apt-get update && apt-get install gnupg

Install the key. Regarding the deprecation warning you mentioned: it's not the keys that are deprecated, it's the apt-key tool itself. You should start getting used to a different way now, since that tool won't be around in the next Debian release. Not sure in which Ubuntu version it will be gone for good.

Install the needed key using gpg:
Code:
wget -O- https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | gpg --dearmor > /usr/share/keyrings/hpePublicKey2048-archive-keyring.gpg
The --dearmor bit is needed since these keys are ASCII armored. You can check that with the file command: if the output has something with "(old)" in it, it's ASCII armored. It's good practice to suffix archive keyrings with "archive-keyring".
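For example, a quick check (illustrative only, assuming the key was downloaded into the current directory):
Code:
wget -q https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub
file hpePublicKey2048_key1.pub
# typically reports something like "PGP public key block Public-Key (old)" -> ASCII armored, needs --dearmor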

Now create an apt source file in /etc/apt/sources.list.d
Code:
echo "deb [signed-by=/usr/share/keyrings/hpePublicKey2048-archive-keyring.gpg] http://downloads.linux.hpe.com/SDR/repo/mcp bullseye/current non-free" > /etc/apt/sources.list.d/hpe.list
Note the "signed-by" part that now points to the public key we just downloaded. Note that this is Proxmox 7 which is bullseye. If on 6 it's buster i think? Correct me if i'm wrong. But anyways, modify to your needs.

Now install what you need. I always grab these; skip if not needed :) But at least do an apt-get update.
Code:
apt-get update && apt-get install ssa ssacli ssaducli storcli

And here's the Agentless Management Service for Gen9 and below:
Code:
cd
wget https://downloads.linux.hpe.com/SDR/repo/mcp/debian/pool/non-free/hp-ams_2.6.2-2551.13_amd64.deb
dpkg -i hp-ams_2.6.2-2551.13_amd64.deb
Remove the deb once it's installed if you don't want to keep it :)
Code:
rm hp-ams_2.6.2-2551.13_amd64.deb

Gen10 and above
Code:
apt-get install amsd

Now watch as your Agentless Management Service indicator goes green :)

[screenshot: Agentless Management Service status shown green in iLO]
Hurray...


Cheers
Marcus
 
@agnon, dead on, worked just as you said. Thanks so much for your input on this.

BTW, mine are HP G8 SL230s blades. However, on mine, I don't seem to get much info.
What I really wanted is to monitor things like power usage.

# ssacli
Smart Storage Administrator CLI 5.20.8.0
Detecting Controllers...Done.
Type "help" for a list of supported commands.
Type "exit" to close the console.

For some reason, everything worked except one blade, which always shows Not available.
I've restarted the iLO, but I'm not sure if there are some services from the above that I still have to start.
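If it helps, a quick way to check whether the AMS service actually came up on a blade (the hp-ams unit name is an assumption based on the package name):
Code:
systemctl status hp-ams
# or, if it registered as a SysV init script:
service hp-ams status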

I just have to start reading up on this, thanks again.
 
Now watch as your Agentless Management Service indicator goes green

Great! Thank you - it works now for my Micro/Gen10 - everything's green. (Though hpasmcli -s "show fan" is not available.)

The multiple references to "hp-health" in several other posts led me into confusion...

Best regards
 
All of these things I just installed seem to be related only to disk information. I'm also looking for things like how much power the server is using, for example.

Maybe something didn't get installed right. I went back and looked at the history on my screen and noticed the following.

Code:
# apt-get install amsd

Reading package lists... Done

Building dependency tree... Done

Reading state information... Done

The following NEW packages will be installed:

  amsd

0 upgraded, 1 newly installed, 0 to remove and 65 not upgraded.

Need to get 1,181 kB of archives.

After this operation, 7,069 kB of additional disk space will be used.

Get:1 http://downloads.linux.hpe.com/SDR/repo/mcp bullseye/current/non-free amd64 amsd amd64 2.5.0-1675.1-debian11 [1,181 kB]

Fetched 1,181 kB in 1s (2,323 kB/s)

Selecting previously unselected package amsd.

(Reading database ... 43491 files and directories currently installed.)

Preparing to unpack .../amsd_2.5.0-1675.1-debian11_amd64.deb ...

Unpacking amsd (2.5.0-1675.1-debian11) ...

Setting up amsd (2.5.0-1675.1-debian11) ...

Created symlink /etc/systemd/system/multi-user.target.wants/amsd.service → /lib/systemd/system/amsd.service.

Created symlink /etc/systemd/system/multi-user.target.wants/smad.service → /lib/systemd/system/smad.service.

Created symlink /etc/systemd/system/multi-user.target.wants/ahslog.service → /lib/systemd/system/ahslog.service.

Created symlink /etc/systemd/system/multi-user.target.wants/mr_cpqScsi.service → /lib/systemd/system/mr_cpqScsi.service.

Created symlink /etc/systemd/system/multi-user.target.wants/cpqiScsi.service → /lib/systemd/system/cpqiScsi.service.

Created symlink /etc/systemd/system/multi-user.target.wants/cpqScsi.service → /lib/systemd/system/cpqScsi.service.

Created symlink /etc/systemd/system/multi-user.target.wants/cpqIde.service → /lib/systemd/system/cpqIde.service.

Job for cpqScsi.service failed because a fatal signal was delivered causing the control process to dump core.

See "systemctl status cpqScsi.service" and "journalctl -xe" for details.

Processing triggers for man-db (2.9.4-2) ...


# systemctl status cpqScsi.service

● cpqScsi.service - cpqScsi MIB handler.

     Loaded: loaded (/lib/systemd/system/cpqScsi.service; enabled; vendor preset: enabled)

     Active: failed (Result: core-dump) since Sat 2022-04-09 17:38:27 MST; 22h ago

    Process: 3363068 ExecStart=/sbin/cpqScsi -f $OPTIONS (code=dumped, signal=SEGV)

   Main PID: 3363068 (code=dumped, signal=SEGV)

        CPU: 9ms


Apr 09 17:38:27 pro02 systemd[1]: cpqScsi.service: Scheduled restart job, restart counter is at 6.

Apr 09 17:38:27 pro02 systemd[1]: Stopped cpqScsi MIB handler..

Apr 09 17:38:27 pro02 systemd[1]: cpqScsi.service: Start request repeated too quickly.

Apr 09 17:38:27 pro02 systemd[1]: cpqScsi.service: Failed with result 'core-dump'.

Apr 09 17:38:27 pro02 systemd[1]: Failed to start cpqScsi MIB handler..


# systemctl start cpqScsi.service

Job for cpqScsi.service failed because a fatal signal was delivered causing the control process to dump core.

See "systemctl status cpqScsi.service" and "journalctl -xe" for details.

root@pro02:/new# systemctl status cpqScsi.service

● cpqScsi.service - cpqScsi MIB handler.

     Loaded: loaded (/lib/systemd/system/cpqScsi.service; enabled; vendor preset: enabled)

     Active: failed (Result: core-dump) since Sun 2022-04-10 16:08:43 MST; 1s ago

    Process: 1771143 ExecStart=/sbin/cpqScsi -f $OPTIONS (code=dumped, signal=SEGV)

   Main PID: 1771143 (code=dumped, signal=SEGV)

        CPU: 9ms


Apr 10 16:08:43 pro02 systemd[1]: cpqScsi.service: Scheduled restart job, restart counter is at 5.

Apr 10 16:08:43 pro02 systemd[1]: Stopped cpqScsi MIB handler..

Apr 10 16:08:43 pro02 systemd[1]: cpqScsi.service: Start request repeated too quickly.

Apr 10 16:08:43 pro02 systemd[1]: cpqScsi.service: Failed with result 'core-dump'.

Apr 10 16:08:43 pro02 systemd[1]: Failed to start cpqScsi MIB handler..


# journalctl -xe

Apr 10 16:08:55 pro02 smad[1772243]: [NOTICE]: Init iLO Socket in Regular mode with CCB

Apr 10 16:08:55 pro02 smad[1772243]: [INFO  ]: BMC device is /dev/hpilo/d0ccb0

Apr 10 16:08:55 pro02 smad[1772243]: [ERR   ]: ERR  : iLO4 is not supported

Apr 10 16:08:55 pro02 smad[3362244]: [INFO  ]: Got a new socket connection

Apr 10 16:08:55 pro02 smad[1772244]: [NOTICE]: Init iLO Socket in Regular mode with CCB

Apr 10 16:08:55 pro02 smad[1772244]: [INFO  ]: BMC device is /dev/hpilo/d0ccb0

Apr 10 16:08:55 pro02 smad[3362244]: [INFO  ]: Got a new socket connection
 
All of these things I just installed seem to be related only to disk information. I'm also looking for things like how much power the server is using, for example.


You installed amsd. That's only for Gen10. But looking at your previous posts you have G8s, right? So you should have skipped that step and installed hp-ams, which is for Gen9 and earlier.

But one quick and easy thing to do, since the hp-health package isn't well maintained, is to use IPMI if you want something CLI related. Otherwise I'd use something like SNMP, since the iLO4 supports that. You could, for instance, install the Prometheus snmp_exporter on a node, scrape all your servers, and then create awesome graphs in Grafana; once the data is in Prometheus you can also manage your alerts.

But as far as IPMI goes, here's an example (make sure IPMI is activated in the iLO access settings):

1. Create a user (for instance "ipmi", but this is optional; you could go with any user already present :) ). I've only gotten this to work if I select all privileges, so set a difficult password.
2. Install ipmitool on any node that has network access to your iLO:
apt-get install ipmitool
3. Run:
ipmitool -H address_to_ilo -I lanplus -U username_you_chose -P super_difficult_password -p 6623 sensor list
Note that I run IPMI on port 6623, which isn't the default, therefore I specify -p.
This will give you something like:
Code:
UID Light        | 0x0        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
Sys. Health LED  | na         | discrete   | na    | na        | na        | na        | na        | na        | na
01-Inlet Ambient | 20.000     | degrees C  | ok    | na        | na        | na        | 40.000    | 42.000    | 46.000
02-CPU 1         | 40.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | na
03-CPU 2         | 40.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | na
04-P1 DIMM 1-6   | na         |            | na    | na        | na        | na        | na        | 87.000    | na
05-P1 DIMM 7-12  | 32.000     | degrees C  | ok    | na        | na        | na        | na        | 87.000    | na
06-P2 DIMM 1-6   | na         |            | na    | na        | na        | na        | na        | 87.000    | na
07-P2 DIMM 7-12  | 32.000     | degrees C  | ok    | na        | na        | na        | na        | 87.000    | na
08-P1 Mem Zone   | 28.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
09-P1 Mem Zone   | 32.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
10-P2 Mem Zone   | 29.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
11-P2 Mem Zone   | 27.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
12-HD Max        | 35.000     | degrees C  | ok    | na        | na        | na        | na        | 60.000    | na
13-Chipset 1     | 44.000     | degrees C  | ok    | na        | na        | na        | na        | 105.000   | na
14-Chipset1 Zone | 33.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
15-P/S 1 Inlet   | 25.000     | degrees C  | ok    | na        | na        | na        | na        | na        | na
16-P/S 1 Zone    | 28.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
17-P/S 2 Inlet   | 30.000     | degrees C  | ok    | na        | na        | na        | na        | na        | na
18-P/S 2 Zone    | 29.000     | degrees C  | ok    | na        | na        | na        | na        | 65.000    | 70.000
19-PCI #1        | na         |            | na    | na        | na        | na        | na        | 100.000   | na
20-PCI #2        | na         |            | na    | na        | na        | na        | na        | 100.000   | na
21-VR P1         | 30.000     | degrees C  | ok    | na        | na        | na        | na        | 115.000   | 120.000
22-VR P2         | 33.000     | degrees C  | ok    | na        | na        | na        | na        | 115.000   | 120.000
23-VR P1 Mem     | 25.000     | degrees C  | ok    | na        | na        | na        | na        | 115.000   | 120.000
24-VR P1 Mem     | 25.000     | degrees C  | ok    | na        | na        | na        | na        | 115.000   | 120.000
25-VR P2 Mem     | 26.000     | degrees C  | ok    | na        | na        | na        | na        | 115.000   | 120.000
26-VR P2 Mem     | 26.000     | degrees C  | ok    | na        | na        | na        | na        | 115.000   | 120.000
27-VR P1Mem Zone | 25.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
28-VR P1Mem Zone | 24.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
29-VR P2Mem Zone | 26.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
30-VR P2Mem Zone | 26.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
31-HD Controller | 64.000     | degrees C  | ok    | na        | na        | na        | na        | 105.000   | na
32-HD Cntlr Zone | 39.000     | degrees C  | ok    | na        | na        | na        | na        | 65.000    | 70.000
33-PCI 1 Zone    | 33.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
34-PCI 1 Zone    | 33.000     | degrees C  | ok    | na        | na        | na        | na        | 66.000    | 71.000
35-LOM Card      | 48.000     | degrees C  | ok    | na        | na        | na        | na        | 100.000   | na
36-PCI 2 Zone    | 37.000     | degrees C  | ok    | na        | na        | na        | na        | 65.000    | 70.000
37-System Board  | 34.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
38-System Board  | 31.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
39-Sys Exhaust   | 33.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
40-Sys Exhaust   | 33.000     | degrees C  | ok    | na        | na        | na        | na        | 70.000    | 75.000
41-Sys Exhaust   | 32.000     | degrees C  | ok    | na        | na        | na        | na        | 64.000    | 69.000
42-SuperCAP Max  | 22.000     | degrees C  | ok    | na        | na        | na        | na        | 65.000    | na
Fan Block 1      | 20.384     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 2      | 20.384     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 3      | 20.384     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 4      | 20.384     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 5      | 19.992     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 6      | 19.992     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 7      | 19.992     | percent    | ok    | na        | na        | na        | na        | na        | na
Fan Block 8      | 19.992     | percent    | ok    | na        | na        | na        | na        | na        | na
Power Supply 1   | 35         | Watts      | ok    | na        | na        | na        | na        | na        | na
Power Supply 2   | 95         | Watts      | ok    | na        | na        | na        | na        | na        | na
Power Meter      | 106        | Watts      | ok    | na        | na        | na        | na        | na        | na
Power Supplies   | 0x0        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
Fans             | 0x0        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
Memory           | 0x0        | discrete   | 0x4080| na        | na        | na        | na        | na        | na
C1 P1I Bay 1     | 0x1        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
C1 P1I Bay 2     | 0x1        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
C1 P1I Bay 3     | 0x1        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
C1 P2I Bay 5     | 0x1        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
C1 P2I Bay 7     | 0x1        | discrete   | 0x0180| na        | na        | na        | na        | na        | na
C1 P2I Bay 8     | 0x1        | discrete   | 0x0180| na        | na        | na        | na        | na        | na

With that data you could write a bash or Python script to do whatever you like. IPMI isn't super fast to query, so maybe don't poll every second :)
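For example, a minimal bash sketch; the host, user, password, port and even the "Power Meter" sensor name are placeholders/assumptions to adapt to your own setup:
Code:
#!/bin/bash
# minimal sketch: read the "Power Meter" sensor over IPMI and print the wattage
ILO_HOST=10.0.0.10                 # your iLO address
ILO_USER=ipmi                      # the user created in step 1
ILO_PASS=super_difficult_password  # the password you set
ILO_PORT=6623                      # only needed if, as above, IPMI is not on the default port 623

watts=$(ipmitool -H "$ILO_HOST" -I lanplus -U "$ILO_USER" -P "$ILO_PASS" -p "$ILO_PORT" \
    sensor reading "Power Meter" | awk -F'|' '{gsub(/ /, "", $2); print $2}')

echo "Current power draw: ${watts} W"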

Best regards
Marcus
 
That is some amazing information Marcus. Thank you so much.

I'll file this away and give it a try as soon as I get some time.
One thing that might be useful is knowing how to uninstall all this as well.
I'm guessing others will find this thread, not read to the end, and install a bunch of things they may or may not have wanted.
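For reference, a rough sketch of how to back it all out again (only purge what you actually installed; the package names and paths are the ones used in the posts above):
Code:
# Gen10 route
apt-get purge amsd
# Gen8/Gen9 route
apt-get purge hp-ams
# optional extras from the HPE repo
apt-get purge ssa ssacli ssaducli storcli
# drop the repo and keyring again if you no longer want them
rm /etc/apt/sources.list.d/hpe.list /usr/share/keyrings/hpePublicKey2048-archive-keyring.gpg
apt-get update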

Yes, I thought I read that it works on Gen8's as well.
I guess I don't really need all that stuff since all I really wanted to monitor was fans/power supplies of the hardware.
Proxmox seems to do a good job of showing potential problems and iLO also sends notices.
 
I've done some research and have figured out a partial solution for getting apt-get install hp-ams to work (HPE ProLiant G8).

You need to use dist/project_ver = stretch/10.62.
I got this info right from the HPE repository site https://downloads.linux.hpe.com/SDR/project/mcp/
NOTICE: The health/snmp functionality was moved to the iLO card on HPE ProLiant Gen10 servers. The hp-health, hp-snmp-agents, hp-smh* and hp-ams debs are only to be installed on Gen9 servers and earlier. Gen10 users, please subscribe to "11.xx" or "current" repositories. Gen9 users, please use "10.xx" or earlier.

echo "deb [signed-by=/usr/share/keyrings/hpePublicKey2048-archive-keyring.gpg] http://downloads.linux.hpe.com/SDR/repo/mcp stretch/10.62 non-free" > /etc/apt/sources.list.d/hpe.list

To test, use:
apt-cache search hp-ams

The system should return:
hp-ams - Agentless Management Service for HP ProLiant servers with iLO4

Go ahead and install the package:
apt-get install hp-ams

Check the version with apt list hp-ams; it should return: hp-ams/stretch,now 2.6.2-2551.13 amd64 [installed]

Now, where I get hung up is trying to install hp-snmp-agents & hp-health.
The system returns: The following packages have unmet dependencies: hp-health : Depends: libc6-i686 but it is not installable or lib32gcc1 but it is not installable

I've tried installing the dependencies: libc6-i686 is not available, and for lib32gcc1 apt says "the following packages replace it: lib32gcc-s1",
so I installed the lib32gcc-s1 package instead.

I tried installing the hp-health package again, same result: The following packages have unmet dependencies: hp-health : Depends: libc6-i686 but it is not installable or lib32gcc1 but it is not installable

Any ideas would be helpful.
 
Seems like you got further along than I did. I didn't pursue this any further since these are production servers and I decided that the monitoring was not worth breaking a host. I'll try to get back to this next time I fire up a non production blade. Maybe I can add to it at that point.

Thank you very much for sharing your experience so far.
 
Nice! Once amsd is installed,
fans reduce from 43% to 11% on an HP ProLiant DL325 Gen10 (PVE 7.0, kernel 5.11)
and from 68% to 8% on an HP ProLiant DL20 Gen10 (PVE 7.2, kernel 5.15).
Create /etc/apt/sources.list.d/hp-mcp.list containing:
deb http://downloads.linux.hpe.com/SDR/repo/mcp bullseye/current non-free
Add keys:
curl https://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl https://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl https://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
apt update ; apt install amsd
Wait up to a few minutes.
 
Nice! Once amsd is installed,
fans reduce from 43% to 11% on an HP ProLiant DL325 Gen10 (PVE 7.0, kernel 5.11)
and from 68% to 8% on an HP ProLiant DL20 Gen10 (PVE 7.2, kernel 5.15).
Seems to no longer work with the latest kernel.
My fans sit around 40% on my DL20 Gen10 with amsd installed. Before, everything was fine.
 
Hello,

For the unmet dependency with hp-health on Gen8 I found a workaround:

https://unix.stackexchange.com/ques...unfulfilled-dependencies-of-installed-package

Code:
apt install equivs

equivs-control lib32gcc1.control
sed -i 's/<package name; defaults to equivs-dummy>/lib32gcc1/g' lib32gcc1.control
equivs-build lib32gcc1.control
dpkg -i lib32gcc1_1.0_all.deb

equivs-control libc6-i686.control
sed -i 's/<package name; defaults to equivs-dummy>/libc6-i686/g' libc6-i686.control
equivs-build libc6-i686.control
dpkg -i libc6-i686_1.0_all.deb

It will create/install dummy packages and the dependencies will be met.
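In case it's not obvious where the hp-health and hp-snmp-agents debs in /tmp below came from: with the stretch/10.62 HPE repo from the earlier post configured, they can simply be downloaded without installing, roughly like this:
Code:
cd /tmp
apt-get update
apt-get download hp-health hp-snmp-agents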

Code:
/tmp# dpkg -i hp-health_10.80-1874.10_amd64.deb
Selecting previously unselected package hp-health.
(Reading database ... 168570 files and directories currently installed.)
Preparing to unpack hp-health_10.80-1874.10_amd64.deb ...
Unpacking hp-health (10.80-1874.10) ...
Setting up hp-health (10.80-1874.10) ...
Processing triggers for libc-bin (2.31-13+deb11u3) ...
Processing triggers for man-db (2.9.4-2) ...
/tmp# dpkg -i hp-snmp-agents_10.60-2953.16_amd64.deb
Selecting previously unselected package hp-snmp-agents.
(Reading database ... 168611 files and directories currently installed.)
Preparing to unpack hp-snmp-agents_10.60-2953.16_amd64.deb ...
Unpacking hp-snmp-agents (10.60-2953.16) ...
Setting up hp-snmp-agents (10.60-2953.16) ...
Processing triggers for man-db (2.9.4-2) ...
/tmp#
 
I just tested on my Gen8s and I had forgotten that amsd is for Gen10 only. One needs to install hp-ams instead. I tested it now and it works nicely.


This worked perfectly for me.

The only question I have now is, how the hell do I use the AMS? :)
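(For what it's worth, AMS has no CLI of its own: it just feeds OS data to iLO, and everything is viewed there. About the only host-side thing to do is check that its services are running; the unit names below come from the amsd install log earlier in this thread.)
Code:
systemctl status amsd smad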
 
There are so many ways posted in this thread that it's really not clear what works, what doesn't, what is actually needed, what is not.
The problem is that those who find this will install based on one post, only to find that the next post says don't do that, or they'll have installed a bunch of stuff they didn't need. They will end up with a mess of potentially conflicting packages and information.

Lots of great input, but no real final post showing exactly how it's done for Gen8 or Gen10.
I wish someone from HPE would take the time to pull all this together and help those of us who are still running older hardware.
 
There are so many ways posted in this thread that it's really not clear what works, what doesn't, what is actually needed, what is not.
Personally, what's described in post #31 worked for my DL20 Gen10.
Fans are still loud when they ramp up, but most of the time they now idle at 6-7%.
 
Post #31 also works for a DL380 Gen10, Proxmox 7.2-11
I installed the RAID manager tools:
apt install ssa ssacli ssaducli storcli

Agentless Management Service is green in iLO and I can view/manage my RAID volumes.
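For anyone wondering what "view/manage" looks like in practice, a couple of read-only ssacli commands to start with (just examples, not an exhaustive list):
Code:
ssacli ctrl all show status
ssacli ctrl all show config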
 
Has anyone by any chance upgraded from 7.2 to 7.3 and can confirm that AMS still reports after the update and the fans are not spinning like mad?
 
