MDADM create with NVMe failed - Floating point exception

sdssdice

Good day everybody,

We would like to retroactively set up a software RAID 1 on our existing Proxmox system with mdadm.
Proxmox itself is currently installed on an NVMe with ext4, which is now to be mirrored onto a second NVMe.

However, the following error occurs:
"Floating point exception"

Code:
root@pr-001:~# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device. If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
Floating point exception

I have already wiped the second NVMe and copied the partition table over again:
Code:
sfdisk --dump /dev/nvme0n1 | sfdisk /dev/nvme1n1
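For reference, the wipe itself was roughly along these lines (a sketch, not necessarily the exact commands used here; both are destructive to anything on the second disk):
Code:
# remove old filesystem/RAID signatures from the whole second NVMe (destructive!)
wipefs -a /dev/nvme1n1
# after copying the partition table back, a stale md superblock can also be cleared:
mdadm --zero-superblock /dev/nvme1n1p1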

Versions:
Proxmox: 7.1-7
mdadm: 4.1


Unfortunately I couldn't find anything concrete about this. Can someone help with this problem?
 
Are you currently booting from /dev/nvme0n1p1? If so, you cannot do that with 1.2 metadata, as it is described in the output.

Normally you install with a single-disk mdadm RAID and expand it afterwards. Metadata 1.2 stores additional data "in between", so a simple after-install conversion to mdadm is no longer possible.
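If you do go the metadata 0.90 route suggested in the mdadm output, that would look roughly like this: create a degraded 0.90 array on the second NVMe only, migrate the data, and add the first disk afterwards (just a sketch with your device names):
Code:
# degraded RAID 1 with 0.90 metadata on the second NVMe only;
# the first disk is added once the data has been migrated
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 missing /dev/nvme1n1p1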

Why do you use mdadm over ZFS?
 
Thanks @LnxBil for your answer.

Yes, we use /dev/nvme0n1p1 as the boot device. For the RAID 1 setup, would metadata 0.90, as suggested in the mdadm output, be a big disadvantage for us in comparison?

ext4 is required for our use case; unfortunately it is not possible to create a RAID with ext4 during the installation.
 
What do you mean by disadvantage? You cannot use anything other than 0.90.
I meant generally, comparing 0.90 to 1.2.
Yes, this is an unsupported setup. What do you need ext4 for?
We would like to run a Ceph monitor on the Proxmox system. As far as we know, ext4 is best suited for this. ZFS also needs more RAM in comparison. The important thing here is that the Ceph monitor is also on the Proxmox partition. Do you have a good idea for this use-case?
 
You have at least three nodes with this setup? Do you want to use the NVMe also as an OSD? Could you please describe what you plan to deploy (nodes, number of OSDs, etc.)?
 

Yes, more than 3. These two NVMe drives are only for the OS and the Ceph MON; we have enough other OSDs separately.
 
If you go with PVE and want to have support, I suggest using ZFS, which is the only viable and supported option. ZFS itself is much more versatile than ext4, and you can put your monitor on its own dataset. How much is cached in RAM depends on the size of the data and how often it is read. In general, you can set an upper limit on the memory ZFS uses, which is not possible with ext4, for example. You can even configure the ZFS dataset holding the monitor to cache only metadata and not the data itself, making it use even less RAM. But caching is generally a good thing, so that is not advised.
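A minimal sketch of both knobs (the dataset name is just a placeholder for wherever the monitor data lives, and the ARC limit is an example value):
Code:
# cap the ZFS ARC at 2 GiB (value in bytes); takes effect after a reboot
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u
# cache only metadata (not data) for the dataset holding the Ceph monitor
zfs set primarycache=metadata rpool/ceph-mon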
 
ZFS is killing my SSD disks. I only want to run Proxmox on my RAID 1 (mirror), storing some templates and ISOs, nothing more!
I use other disks for my VMs, so I want to use mdraid on the Proxmox disk.

How do I fix the "Floating point exception" on the BIOS boot partition (containing the core.img)?

EDIT: Putting a software RAID on both boot partitions is, I think, not really supported. Hence the error.
Regarding the "BIOS boot" partition, I notice they also don't set up a RAID for this partition: https://debian-handbook.info/browse/da-DK/stable/advanced-administration.html#sect.raid-or-lvm

Thus I will continue by only creating an mdraid array on sdb for the 3rd partition, which actually stores the data:

Bash:
mdadm --create -l 1 -n 2 /dev/md0 missing /dev/sdb3

I used "missing" for now during the creation, later I will add sda to the array. Anyway, the other two partitions can be easily converted to a RAID array.

Loosely following this guide: https://www.dlford.io/install-proxmox-on-raid-array/ And this:
https://www.petercarrero.com/content/2012/04/22/adding-software-raid-proxmox-ve-20-install

P.s. Regarding the 2nd partition (the EFI system partition), you could try to put it in the RAID, but it's not straightforward: https://askubuntu.com/questions/66637/can-the-efi-system-partition-be-raided

So for the first two partitions I will only do a clone for now:
Bash:
dd if=/dev/sda1 of=/dev/sdb1
dd if=/dev/sda2 of=/dev/sdb2
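To make the second disk actually bootable via legacy BIOS (instead of re-cloning sdb1 after every GRUB update), GRUB can also be installed onto it directly; a sketch:
Bash:
# writes GRUB's core.img into sdb's BIOS boot partition (GPT + legacy BIOS case)
grub-install /dev/sdb

On Debian-based systems you can also register sdb via dpkg-reconfigure grub-pc, so future GRUB updates keep both disks in sync.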

If needed, you can revert back to the original partition types (BIOS boot and EFI System), in case you already tried to make a RAID array out of those first two partitions:
Bash:
sfdisk --part-type /dev/sdb 1 21686148-6449-6E6F-744E-656564454649
sfdisk --part-type /dev/sdb 2 C12A7328-F81F-11D2-BA4B-00A0C93EC93B
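You can check the resulting partition types with, for example:
Bash:
lsblk -o NAME,SIZE,PARTTYPENAME /dev/sdb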

More info:
Current layout of Proxmox disk (sda):
Bash:
Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   2099199   2097152     1G EFI System
/dev/sda3  2099200 488397134 486297935 231.9G Linux LVM (will soon be part of the Linux RAID as well..)

Preparing for MDRaid setup on another disk (sdb):

P.s. I also gave this partition (the cloned EFI partition on sdb2) a new UUID/serial number: install the "mtools" and "uuid-runtime" packages, then execute:
Code:
mlabel -N `uuidgen | head -c8` -i  /dev/sdb2 ::
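Afterwards you can confirm the new serial with:
Code:
blkid /dev/sdb2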



Bash:
Device       Start       End   Sectors   Size Type
/dev/sdb1       34      2047      2014  1007K BIOS boot (one time cloned)
/dev/sdb2     2048   2099199   2097152     1G EFI System (one time cloned)
/dev/sdb3  2099200 488397134 486297935 231.9G Linux RAID

And finally the END result:

Bash:
md2 : active raid1 sda3[2] sdb3[1]
      243016832 blocks super 1.2 [2/2] [UU]
      bitmap: 1/2 pages [4KB], 65536KB chunk


Bash:
Disk /dev/sda: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   2099199   2097152     1G EFI System
/dev/sda3  2099200 488397134 486297935 231.9G Linux RAID

Disk /dev/sdb: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Disk model: blabalbla..

Device       Start       End   Sectors   Size Type
/dev/sdb1       34      2047      2014  1007K BIOS boot
/dev/sdb2     2048   2099199   2097152     1G EFI System
/dev/sdb3  2099200 488397134 486297935 231.9G Linux RAID
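One step that's easy to forget: persisting the array config so it gets assembled at boot. On Debian/Proxmox that is roughly (a sketch; check /etc/mdadm/mdadm.conf afterwards):
Bash:
# record the array in mdadm.conf and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u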

I mainly followed the following guide: https://www.dlford.io/install-proxmox-on-raid-array/
 
Buy enterprise SSDs ...

Yeah, I know. But I don't need a ZFS cluster just for running the Proxmox server without VM storage (I removed the LVM partition, local-lvm, completely after the Proxmox installation). Again, ZFS would really be overkill.
I only run Proxmox and store ISO images and templates on this disk.

For VM storage (and snapshots) I use other disks with thin LVM. Works great for my home setup!
 
