Unable to add storage, RAID 50 4.6TB, Dell R720

rcrosier

New Member
Jun 18, 2024
Hello,

I'm semi-retired and very new to Proxmox. I've been managing/running about 15 VMs on 2-3 VMware host servers at two locations for a company since about 2014 with no problems. We're trying to repurpose an old Dell R720 to try Proxmox out. I've installed Proxmox twice now, trying to get an array of six 1.2 TB drives configured as RAID 50 into Proxmox, but I'm stuck. I cannot find any documentation that helps yet.

On the first install of Proxmox, I did not have these 6 drives yet, only the 500GB SSD that Proxmox is installed on. That went fine and I was able to install a couple of VMs on it with no problem; they ran fine for a couple of weeks while I waited for the 1.2TB drives. I could not add any storage for the 1.2TB array though, and it showed as "GPT: No" even after I initialized it for GPT, so I decided to start over.

On the second install of Proxmox, the 4.6TB storage showed up as sda. I tried to create an LVM on it and it seemed to start out OK, but then it failed, and now the storage is no longer showing at all.

Is there documentation or anything I can read with steps to add a storage array to Proxmox?

I'd love to know what step(s) I'm missing. I'm not very well-versed in linux, but can kind of get around in it.

Thanks.
 
Hi @rcrosier, welcome to the forum.

The main thing to keep in mind is that Proxmox uses an Ubuntu-derived kernel with a Debian userland. That means the basic layer responsible for storage is the same as on most other Linux systems; there is nothing PVE-specific about it.

Unfortunately, your post does not contain much technical information, i.e. what the system saw, what errors you received, etc. It's possible you have a hardware problem and nothing you did in software was wrong.

I recommend going back to basics: check the connectivity, review the output of "dmesg" and "journalctl -b0", review and report the output of "lsblk", and install and review the output of "lsscsi". If all of the above checks out, move on to the LVM layer.
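
For reference, a first pass over those checks could look something like this (the grep patterns are just examples, adjust them to whatever your controller and disks are called):

dmesg | grep -iE 'megaraid|scsi|sd[a-z]'
journalctl -b0 --no-pager | grep -iE 'megaraid|sda'
lsblk
apt install lsscsi && lsscsi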

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

Thanks for the reply. I'm just getting back to this now... low on my list of priorities, lol.

I don't believe there's a hardware problem, because all the drives worked fine when I boot to the drives that have the old VMware install on them, and I was able to configure the entire 4.6TB for VMware. I was able to create two VMs on it, a Win2019 server and a Debian Linux server. I removed those and disconnected/removed the storage in VMware before booting into Proxmox and beginning to work on that. The RAID drives are all visible and operating properly when I view them from the RAID controller, and they WERE showing properly in Proxmox before I tried to initialize them... then they disappeared when that failed.

So for reference, this Dell server has two 146GB SAS drives (RAID 1) configured with VMware on them to boot from. It also (now) has a 512GB SSD boot drive in place of the CD-ROM that boots into Proxmox. When I boot that, I simply remove the two 146GB drives, so there is one SSD and six 1.2TB SAS drives. The server has 256GB RAM.

Before initialization, lsblk showed the drives. After the initialization failed, lsblk did not show them. After powering it down the other day, I just powered it back up and checked the RAID manager (photo attached), and they are still showing as healthy/online. I then restarted Proxmox and they now show under Disks, but I don't know enough from here to make use of them. I'm afraid that if I try to create anything, I'll trash them again or do the wrong thing, as I know next to nothing about Proxmox at this point... Where can I read/find information on what I need to do now?

Unfortunately, I don't know enough about Linux (any flavor) to manually initialize or partition the drives, so I'm hoping that I can do that with Proxmox.

This is a "homelab" type test server that I'm setting up, to see how Proxmox works and learn about it.

Thanks again!
 

Attachments

  • ProxmoxDisks-sm.jpg
  • RAID-sm.jpg

More information... For some reason, Linux is reporting the RAID array in some places but not in others... when I run 'smartctl -a /dev/sda' it says that it cannot find the device. It shows everything fine for /dev/sdb. But /dev/sda shows up in lsblk...??? See below:
root@dellProxmox:~# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0   4.4T  0 disk
sdb                    8:16   0 476.9G  0 disk
├─sdb1                 8:17   0  1007K  0 part
├─sdb2                 8:18   0     1G  0 part
└─sdb3                 8:19   0 475.9G  0 part
  ├─pve-swap         252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root         252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta   252:2    0   3.6G  0 lvm
  │ └─pve-data       252:4    0 348.8G  0 lvm
  └─pve-data_tdata   252:3    0 348.8G  0 lvm
    └─pve-data       252:4    0 348.8G  0 lvm

root@dellProxmox:~# smartctl -a /dev/sdb
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.8.4-2-pve] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model: TEAM T253512GB
Serial Number: TPBF2405100030103193
Firmware Version: SBFM61.5
User Capacity: 512,110,190,592 bytes [512 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available
Device is: Not in smartctl database 7.3/5319
ATA Version is: ACS-4 (minor revision not indicated)
SATA Version is: SATA 3.2, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Fri Aug 23 06:15:29 2024 HDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (65535) seconds.
Offline data collection
capabilities: (0x79) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 30) minutes.
Conveyance self-test routine
recommended polling time: ( 6) minutes.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
9 Power_On_Hours 0x0012 100 100 000 Old_age Always - 938
12 Power_Cycle_Count 0x0012 100 100 000 Old_age Always - 29
168 Unknown_Attribute 0x0012 100 100 000 Old_age Always - 0
170 Unknown_Attribute 0x0003 100 100 000 Pre-fail Always - 95
173 Unknown_Attribute 0x0012 100 100 000 Old_age Always - 1
192 Power-Off_Retract_Count 0x0012 100 100 000 Old_age Always - 26
194 Temperature_Celsius 0x0023 067 067 000 Pre-fail Always - 33 (Min/Max 33/33)
218 Unknown_Attribute 0x000b 100 100 050 Pre-fail Always - 0
231 Unknown_SSD_Attribute 0x0013 100 100 000 Pre-fail Always - 100
241 Total_LBAs_Written 0x0012 100 100 000 Old_age Always - 85

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

root@dellProxmox:~# smartctl -a /dev/sda
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.8.4-2-pve] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

Smartctl open device: /dev/sda failed: No such device or address
 
You won't get anything useful out of smartctl on a raid array that's being presented to the OS as a SCSI device.
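
If you do want SMART data for the individual member disks, smartctl can usually reach them through the controller with the megaraid device type, along these lines (the disk numbers come from the controller, so yours may differ):

smartctl -a -d megaraid,0 /dev/sda
smartctl -a -d megaraid,1 /dev/sda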
 
The SMART stats are per physical device, so it's impossible for the RAID controller to aggregate them and present them to you through the virtual disk. The error may not be very descriptive, but that tool was not made to be run against a RAID controller's virtual disk.

lsblk reports the sda device, so the system sees it. There are no partitions on it, so you should just follow these steps: https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)
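
Roughly, the steps on that wiki page boil down to something like this; the volume group and storage names below are just placeholders, and the wiki also covers an LVM-thin variant:

sgdisk -N 1 /dev/sda          # one partition spanning the disk
pvcreate /dev/sda1            # mark it as an LVM physical volume
vgcreate vmdata /dev/sda1     # create a volume group on it
pvesm add lvm vmstore --vgname vmdata --content images,rootdir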

If something is not working, feel free to post the exact command and its full output as text wrapped in CODE tags.

Good luck


 
First, your "H310 mini" is a RAID controller without cache, and RAID 5/RAID 50 on it is horrible; you will not be happy with PVE overall that way. If you want to use ZFS, you should look into flashing the H310 to IT mode first and probably use a ZFS mirror pool.
If you want to go without ZFS and keep the H310, only RAID 10 makes sense, or even better, first buy a used H710P for just a few bucks.
Further, you should decide whether you want to run block storage (LVM/LVM-thin) or file storage (ext4/xfs/btrfs/NFS).
Most people here use block storage (it is also the PVE default), but we (in our company) run file storage, as NFS and even locally, and test block storage again here and there ... but no, we are and remain file-storage lovers. Sorry to all, but that's no problem, as it all works, each with its pros and cons. Think it over a bit.
In the PVE web UI, select Datacenter at the top left, then Storage on the right, and there's an Add button ...
A virtual disk behind a RAID controller doesn't show SMART values; for that you need to download Dell's perccli tool and install it into PVE.
And don't be afraid of which storage route you take, as you can have all of them at the same time and can always switch later.
 
OK, so I did just try (before you posted this) to run the sgdisk command from that same article, which I had just found...
sgdisk -N 1 /dev/sda

It seemed to hang for a VERY long time, then I got a handful of error emails. See below:

(first email)
This message was generated by the smartd daemon running on:

host name: dellProxmox
DNS domain: local

The following warning/error was logged by the smartd daemon:

Device: /dev/bus/0 [megaraid_disk_04], Read SMART Self-Test Log Failed

Device info:
[SEAGATE ST1200MM0007 IS06], lu id: 0x5000c5007f09b817, S/N: S3L1607Q, 1.20 TB

For details see host's SYSLOG.

You can also use the smartctl utility for further investigation.
Another message will be sent in 24 hours if the problem persists.

(second email)
This message was generated by the smartd daemon running on:

host name: dellProxmox
DNS domain: local

The following warning/error was logged by the smartd daemon:

Device: /dev/bus/0 [megaraid_disk_02], failed to read SMART values

Device info:
[HGST HUC101212CSS600 U5E0], lu id: 0x5000cca0727c4b70, S/N: L0J6B2SK, 1.20 TB

For details see host's SYSLOG.

You can also use the smartctl utility for further investigation.
Another message will be sent in 24 hours if the problem persists.

So this tells me that sgdisk is trying to access the SMART values but cannot. Is there a way to disable SMART for sgdisk, so I can create the partition(s)?
 
OK, I understand that, but I don't (yet) want to convert to IT mode, because that would (or may) trash my VMware install, which is currently working fine, and if Proxmox doesn't work out I would like to still be able to load/run VMware on this server. I'm not sure what would happen, so I don't want to chance it if possible. I'm not worried about performance on this, as it's just a test/learning install of Proxmox for me at this point.

Beyond that, you kind of lost me on block vs. file storage, and I have not had luck finding a doc on how to add storage there, as in what values to enter when creating storage so that I don't trash what's already there, like the path/target (/var/lib/vz)... should that be different or the same? Is there a doc on that, too?

Also, I know so little that I don't yet know what zfs is...

Thanks.
 
You don't need a partition table or any partitions if you want to run xfs (or ext4, which has no reflink support, so I wouldn't), so don't bother with sgdisk either. Use perccli to list disk health behind a Dell RAID controller.
So keep the H310 for now as a fallback to get VMware back if you ever want it ... then I would remove the RAID 50 and build a RAID 10 from your disks; just define it in the controller menu before PVE boots and it will create the RAID set in the background. After PVE is up you will again see, e.g. with lsblk, both block devices: the OS (PVE) disk and the new virtual disk (sda/sdb may perhaps swap, but that doesn't matter).
If you don't know ZFS, I suspect you haven't worked with block storage either; or did you use block storage in VMware?
So to start with, take file storage, which you already know from prior use (Windows and Linux), right?
Show lsblk output when done.
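
For later, once the new virtual disk is visible again, the file-storage route would be roughly along these lines (the mount point /mnt/vmstore and the storage name are just example placeholders; using the UUID from blkid in fstab is safer since sda/sdb can swap):

mkfs.xfs /dev/sda                                    # add -f if an old signature is detected
mkdir -p /mnt/vmstore
echo '/dev/sda /mnt/vmstore xfs defaults 0 2' >> /etc/fstab
mount /mnt/vmstore
pvesm add dir vmstore --path /mnt/vmstore --content images,rootdir,iso,backup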
 
I don't know what VMware uses, as we never had to deal with that level in ESXi and vCenter. We just had to set up the storage arrays (either local disks or our Pure Storage arrays), connect them, add storage, create volumes, and go. We never needed to pay any attention to storage types.

If file storage requires IT mode, I'm not sure I can do that w/o ruining my VMWare, can I?

Meanwhile, I have changed from RAID 50 to RAID 10, as suggested, and will see what happens there. I'll post lsblk when it's booted back up.
 
OK, here's lsblk with the RAID 10 now:
root@dellProxmox:~# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                    8:0    0   3.3T  0 disk
sdb                    8:16   0 476.9G  0 disk
├─sdb1                 8:17   0  1007K  0 part
├─sdb2                 8:18   0     1G  0 part
└─sdb3                 8:19   0 475.9G  0 part
  ├─pve-swap         252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root         252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta   252:2    0   3.6G  0 lvm
  │ └─pve-data       252:4    0 348.8G  0 lvm
  └─pve-data_tdata   252:3    0 348.8G  0 lvm
    └─pve-data       252:4    0 348.8G  0 lvm
root@dellProxmox:~#

At this time, if I try to go to 'Disks', it hangs with this:
Could it currently be processing the disks or initializing something?
 
The GUI is just running CLI commands. It would be much easier if you performed your setup from the CLI; the errors will be much more evident.

Keep in mind that PVE as a product is based on an Ubuntu-derived kernel with a Debian userland. Proxmox as a company does not test or guarantee compatibility with particular hardware, whether it was just released or is 10 years old.

There is a reason that VMware keeps to a strict HCL. Proxmox is free for anyone to run on anything, but there is a learning curve, and there may be compatibility issues that may or may not be overcome.
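
One way to catch what is actually failing is to keep the journal following in a second shell while you click around in the GUI, e.g.:

journalctl -f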


 
If I could, I would, but so far nothing has worked in either place. Trying to partition the drive from the root console did not work either; I got errors there as well.

The last message I posted was just after booting up Proxmox after changing to RAID 10. I have done nothing in the GUI at that point.

I just tried to run this:
sgdisk -N 1 /dev/sda
from the root console, and get this error:
Problem opening /dev/sda for reading! Error is 6.
 
The "dd" cmd without count=... will run through whole 3.3TB while building the raid1 and slow down everything, kill that with Ctrl-C.

andDid you download perccli (rpm for rhel based distros) from dell and installed and if not do so now please to see what's with the raidset.
You need to install alien first to:
Download perccli rpm into root home, gunzip the gz file, tar xf "file.tar",
cd perccli_7.3-007.0318_linux/Linux (if that's release nr)
apt install alien
alien --to-deb perccli-007.0318.0000.0000-1.noarch.rpm
apt install perccli_007.0318.0000.0000-2_all.deb
cp /opt/MegaRAID/perccli/perccli /usr/local/bin/.
. ~/.bashrc
perccli /call show
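
Once it runs, a couple of views that are usually helpful (perccli follows the storcli command style, so treat these as a sketch):

perccli /c0/vall show         # virtual drives and their state
perccli /c0/eall/sall show    # physical drives behind the controller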
 
sda is the correct disk; note it's showing 3.3T, which is all six 1.2TB drives in RAID 10.

I get this when I run 'dd if=/dev/zero of=/dev/sda bs=1M':
dd: failed to open '/dev/sda': No such device or address

I get this when I run 'dd if=/dev/sda of=/dev/null bs=1M':
dd: failed to open '/dev/sda': No such device or address

Yet the disks showed up and worked fine in VMware (before, as RAID 50), and they appear fine in the Dell PERC controller.
And they 'appear' in Proxmox as /dev/sda...
 
Something is wrong, and my guess is it's not Ubuntu/Debian.
I'd follow @waltar's advice and debug the controller with the appropriate tools. But I don't see why you would install an RPM/RHEL binary when there is a Debian one in the package: https://www.dell.com/support/home/en-ee/drivers/driversdetails?driverid=36g6n

We also know it used to work in VMware; you've mentioned that a few times. However, you've since made changes that could have affected things. Until you are able to access the RAID LUN with basic commands, PVE is not going to work.
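
As a sketch, a few non-destructive basic checks along those lines (the count= keeps dd from reading the whole array):

lsblk
dd if=/dev/sda of=/dev/null bs=1M count=16
dmesg | tail -n 50            # look for megaraid/SCSI errors right after the dd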

@waltar yes, I realize it may be slow; the goal was to see if it starts at all.

@rcrosier good luck



 
See what perccli says ...
And a native Debian perccli is new to me ... Dell is supporting other Linux distros now, cool :)
 
The "dd" cmd without count=... will run through whole 3.3TB while building the raid1 and slow down everything, kill that with Ctrl-C.

andDid you download perccli (rpm for rhel based distros) from dell and installed and if not do so now please to see what's with the raidset.
You need to install alien first to:
Download perccli rpm into root home, gunzip the gz file, tar xf "file.tar",
cd perccli_7.3-007.0318_linux/Linux (if that's release nr)
apt install alien
alien --to-deb perccli-007.0318.0000.0000-1.noarch.rpm
apt install perccli_007.0318.0000.0000-2_all.deb
cp /opt/MegaRAID/perccli/perccli /usr/local/bin/.
. ~/.bashrc
perccli /call show

OK, I'm going through this and hit this on the 'alien --to-deb ...' command:
root@dellProxmox:/PERCCLI_7.1910.00_A12_Linux# alien --to-deb PERCCLI_7.1910.00_A12_Linux/perccli-007.1910.0000.0000-1.noarch.rpm
warning: PERCCLI_7.1910.00_A12_Linux/perccli-007.1910.0000.0000-1.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID cb529165: NOKEY
(the same NOKEY warning repeated another 15 times)
Warning: Skipping conversion of scripts in package perccli: postinst postrm prerm
Warning: Use the --scripts parameter to include the scripts.
warning: PERCCLI_7.1910.00_A12_Linux/perccli-007.1910.0000.0000-1.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID cb529165: NOKEY
perccli_007.1910.0000.0000-2_all.deb generated

On cp /opt... command, I got:
root@dellProxmox:/PERCCLI_7.1910.00_A12_Linux# cp /opt/MegaRAID/perccli/perccli /usr/local/bin/.
cp: cannot stat '/opt/MegaRAID/perccli/perccli': No such file or directory

So I've stopped there...
 
