RAID problem.

Anandir

New Member
Aug 8, 2023
Hi everyone.
I'm installing Proxmox on my production server (I'm looking for a replacement for RHEL + oVirt), but I've run into a problem.
I have a ThinkSystem ST250 with a ThinkSystem RAID 530-8i PCIe 12Gb Adapter and two 1TB disks in RAID 1.
The system installs fine on the server's main SSD (a 240GB disk) and runs OK, but it doesn't "see" the RAID.
Well, it sees the /dev/sdb device, but I can't create a filesystem on it (I've tried mkfs.ext4), nor can I alter the partition table with cfdisk.
Basically, Proxmox uses only the internal SSD for everything.
Reading here and there on the forum, it seems the solution is to break the hardware RAID and use the software RAID 1 (with ZFS or Btrfs) provided by Proxmox.
But I'm not very happy with that solution; I'd prefer to stick with the hardware RAID as it is now.

Do you have any ideas?

Thanks a lot for your time.

Best regards
Giacomo
 
It should be possible to install Proxmox using ext4 or XFS on a hardware RAID controller.
Can I see the output of pveversion -v and lsblk, and also the error you are getting?
 
It should be possible to install Proxmox using ext4 or XFS on a hardware RAID controller.
Can I see the output of pveversion -v and lsblk, and also the error you are getting?

First of all, thanks for your kind reply.
I've formatted all the disks (SSD + RAID) and I've reinstalled Proxmox and now it seems to work:

Code:
root@proxima:~# pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.3
pve-docs: 8.0.3
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
root@proxima:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0 223.6G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0     1G  0 part /boot/efi
└─sda3               8:3    0 222.6G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  65.6G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.3G  0 lvm
  │ └─pve-data     253:4    0 130.3G  0 lvm
  └─pve-data_tdata 253:3    0 130.3G  0 lvm
    └─pve-data     253:4    0 130.3G  0 lvm
sdb                  8:16   0 837.3G  0 disk
└─sdb1               8:17   0 837.3G  0 part

I'm a bit confused... but if it's working now, I won't complain.
Now I only have to attach the RAID to the pool and I'm done.
Still, I'm a bit confused.
Before, I could only see the /dev/sdb device without a partition, and I wasn't able to create one...
 
Still, I'm a bit confused.
Before, I could only see the /dev/sdb device without a partition, and I wasn't able to create one...
Hardware RAID controllers are basically a black box and also sometimes a bit buggy. And if they fail, it might be hard to recover. That's why we recommend ZFS.

EDIT: I want to clarify: with software RAID I meant specifically the ZFS RAID feature for building a RAID.
 
You don't need a partition; if you want a filesystem, you can create it directly on the device. LVM would also be OK.
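For instance (a sketch only, assuming the array still shows up as /dev/sdb; both variants are destructive to whatever is on that device):

```shell
# Create an ext4 filesystem directly on the whole RAID device,
# no partition table involved:
mkfs.ext4 /dev/sdb
mkdir -p /mnt/raid
mount /dev/sdb /mnt/raid

# Or hand the whole device to LVM instead:
pvcreate /dev/sdb
```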

Basically, I just want to "move" pve/data off the SSD onto the RAID (/dev/sdb) and reclaim the space for pve/root.
I need to learn a bit about LVM (something I've never really loved), but it should be pretty easy.
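A rough sketch of such a move, assuming the installer's default "pve" volume group and the RAID at /dev/sdb. The names raid1 and raid-data and the size +100G are made up for illustration, and step 1 destroys any volumes still on the thin pool, so move or back them up first:

```shell
# 1. Remove the "local-lvm" storage entry in the PVE GUI, then drop
#    the thin pool from the SSD (destroys any volumes still on it!):
lvremove pve/data

# 2. Grow the root LV into the freed space; -r also grows the filesystem:
lvextend -L +100G -r pve/root

# 3. Build a new volume group and thin pool on the RAID:
pvcreate /dev/sdb
vgcreate raid1 /dev/sdb
lvcreate --type thin-pool -l 95%FREE -n data raid1   # leave room for pool metadata

# 4. Register it with Proxmox as an LVM-thin storage:
pvesm add lvmthin raid-data --vgname raid1 --thinpool data
```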

Hardware RAID controllers are basically a black box and also sometimes a bit buggy. And if they fail, it might be hard to recover. That's why we recommend software RAID.

I know what you mean, but I have it and I would like to use it after several years of software RAID. I do daily backups of the VMs, so I should basically be "safe" in that respect.

But this should be another option for sure.
 
Just out of curiosity, is your HW controller that much more powerful than a SW raid to want to use it?
 
Just out of curiosity, is your HW controller that much more powerful than a SW raid to want to use it?

Well, in theory, you just offload work from the CPU and the OS to a dedicated device.
So the CPU has a little bit less work.

That's the basic idea.
 
I do daily backups of the VMs so, basically, I should be "safe" over this aspect.
But you will never know whether your backups contain corrupted data, since you are probably missing the block-level checksumming of a software RAID like ZFS. So yes, you are safe in the sense that you know you have a copy of your data; you just won't know anything about the integrity of that data. ;)
 
In my experience, a HW RAID controller does not bring any measurable performance benefit. Modern CPUs handle the possible load in passing. Since I don't deal with environments where a HW controller might have advantages, I'm glad to have disposed of them. I haven't found any disadvantages in 10 years; rather advantages, because there are fewer spare parts on the shelf.
 
It could be useful for sync writes if you have cache + BBU. But yes, without BBU and cache it's not that great.
 
Of course, the additional battery of a RAID controller is anything but stupid. But with a decent backup and today's common speeds (<1s), the risk window is negligible in my opinion. And when does that actually happen? I maintain that a UPS in front of important systems is enough. I even activate the cache.
 
I even activate the cache.
But does the disk also cache sync writes? Usually the disk's firmware will refuse to cache sync writes and will only cache async writes if the disk doesn't have its own backup battery (which all enterprise SSDs have for exactly that reason, but no consumer SSD does). That's why any decade-old SATA enterprise SSD will outperform a modern PCIe 4.0 NVMe consumer SSD when it comes to sync writes. The SSD's firmware doesn't know about your UPS, so it won't cache data that must not be lost under any circumstances. Here a HW RAID with BBU + cache comes in handy, as it can cache those sync writes before they hit the disks, so sync writes to HDDs or consumer-grade SSDs should be magnitudes faster.
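You can see this effect yourself with a quick fio comparison (an illustrative sketch; it needs the fio package installed, and the file path, size, and runtime are arbitrary):

```shell
# Async 4k random writes -- the drive is free to cache these in DRAM:
fio --name=async --filename=/tmp/fio.test --size=1G --rw=randwrite \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting

# Sync 4k random writes -- fsync after every write, like a database
# or a VM's journal; consumer SSDs drop sharply here:
fio --name=sync --filename=/tmp/fio.test --size=1G --rw=randwrite \
    --bs=4k --fsync=1 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```

Compare the IOPS lines of the two runs; on hardware without power-loss protection the gap is typically orders of magnitude.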
 
Well, suppose your system has no BBU and you still activate the appropriate caching mechanisms in the SW RAID: what do you gain, ideally? 10% or 100%?
I claim the cache mechanism brings a maximum of 5%.
 
For sync writes it's a difference between some hundreds vs. some thousands or even tens of thousands of IOPS.
Use a consumer NVMe with async writes that it can cache and you can get 1 million IOPS. Do sync writes and it drops to something like 400 IOPS.
The drop wouldn't be as big with a HW RAID with BBU, or with an enterprise SSD that has PLP, as sync writes would then be cached in DRAM too.
You could of course use something like cache=unsafe in PVE, or sync=disabled with ZFS, which handles all slow-but-secure sync writes as fast-but-unsafe async writes. But then, even if your UPS keeps powering the system, a kernel crash could cause those important writes to be lost.
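For reference, those two "fast but unsafe" switches look roughly like this (the dataset name and VM ID are made up; both settings risk losing in-flight writes on a crash, UPS or not):

```shell
# ZFS: treat all sync writes as async for one dataset:
zfs set sync=disabled rpool/data

# PVE: switch a VM disk's cache mode to unsafe:
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=unsafe
```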
 
