I've got a couple of issues.
The Docs are saying SMART is required now.
Disk Health Monitoring
Although a robust and redundant storage is recommended, it can be very helpful to monitor the health of your local disks.
Starting with Proxmox VE 4.3, the package smartmontools [2] is installed and required. This is a set of tools to monitor and control the S.M.A.R.T. system for local hard disks.
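From what I understand, smartd (part of smartmontools) can be pointed at drives behind the controller with explicit device lines; a minimal /etc/smartd.conf sketch, assuming the first physical drive maps to cciss,0:

# /etc/smartd.conf - monitor the first physical drive behind the Smart Array
# (the cciss,0 index is an assumption; it depends on the bay/port mapping)
/dev/sda -d cciss,0 -a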
I can run the following:
smartctl -a -d cciss,1 /dev/sda
And everything appears to work okay.
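Presumably the other bays can be read the same way, e.g. (assuming the eight drives map to cciss,0 through cciss,7):

# report SMART health for each of the eight physical drives
# (indices 0-7 are an assumption based on the bay count)
for i in 0 1 2 3 4 5 6 7; do
    smartctl -H -d cciss,$i /dev/sda
done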
When I try to enable SMART with the following command, I get this error:
smartctl -s on -d cciss,0 /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.4.119-1-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF ENABLE/DISABLE COMMANDS SECTION ===
unable to enable Exception control and warning [Operation not supported]
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
1.) So SMART being enabled is actually required?
2.) Based on that second code snippet, I'm not sure what to do to enable SMART on the 8 SAS drives behind the RAID controller. I've tried all of the tolerance variations "normal", "conservative", "permissive", and "verypermissive", and every one of them fails with the same message (the exact loop I used is shown after the output below):
# smartctl -s on --tolerance=permissive -d cciss,1 /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.4.119-1-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF ENABLE/DISABLE COMMANDS SECTION ===
unable to enable Exception control and warning [Operation not supported]
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
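For completeness, this is how I cycled through the four tolerance levels; every run ends with the same error as above:

# try each tolerance level in turn - all fail identically
for t in normal conservative permissive verypermissive; do
    smartctl -s on -T $t -d cciss,1 /dev/sda
done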
Please advise, since as far as I can glean from the docs, SMART is now "Required".
The second item is that I can apparently use only LVM and not ZFS, since ZFS is prone to errors and data loss when used with the stock hardware-based RAID controllers shipped in HP ProLiants. Is this so? Should I definitely avoid ZFS when running RAID 10 at the hardware level on these machines?
Here's my setup:
A single RAID 10 array totalling about 10TB, split into two logical drives; the first is 180GB for the OS/Proxmox install:
/dev/sda = 180GB, including sda1 (BIOS boot), sda2 (EFI boot), and sda3 (local and local-lvm)
Bootloader
We install two boot loaders by default. The first partition contains the standard GRUB boot loader. The second partition is an EFI System Partition (ESP), which makes it possible to boot on EFI systems.
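For reference, the layout can be confirmed with something like:

# show the partition layout of the two logical drives
lsblk -o NAME,SIZE,TYPE /dev/sda /dev/sdb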
/dev/sdb = approx. 10TB, unused as yet - I have not created any volumes, since I would prefer ZFS if that's okay. However, the docs indicate that this would lead to data loss, so I'm led to believe that only LVM is acceptable?
Do not use ZFS on top of a hardware RAID controller which has its own cache management. ZFS needs to communicate directly with the disks. An HBA adapter or something like an LSI controller flashed in “IT” mode is more appropriate.
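I gather that on some ProLiant controllers HBA mode can be enabled via ssacli, though as I understand it that removes the existing logical drives, so it's not something I'd do lightly. A sketch, assuming the controller is in slot 0:

# check what the controller reports (slot 0 is an assumption)
ssacli ctrl slot=0 show
# switching would look like the following - WARNING: this destroys the logical drives
# ssacli ctrl slot=0 modify hbamode=on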
Your advice, suggestions, and recommendations are much appreciated.