[SOLVED] Install on RAID-10 (6 nvme SSDs)

Vasilij Lebedinskij

Active Member
Jan 30, 2016
Hello! I'm trying to install Proxmox on a RAID-10 of 6 PCIe NVMe SSDs. The Proxmox GUI installer can't create the pool during installation. If I install Proxmox on a separate data SSD and then create ZFS storage from my NVMe SSDs, everything works fine.

I tried to install Debian Jessie with a ZFS RAID-10 root (6 NVMe SSDs). I followed the guide, but the first boot always fails with:

```
error: unknown device 2
```
GRUB installed successfully on all drives, but it doesn't boot from my ZFS pool. I think solving the GRUB problem in the Debian installation will be easier than waiting for the Proxmox team to look at their installer. Can anyone help me with GRUB?
 

first you would need to check whether your BIOS supports booting from such devices (and if it does, whether it supports booting from that many devices - you need at least an importable set of vdevs available!). you can verify this with the "ls" command in the GRUB shell or GRUB rescue shell.
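
for example, from the GRUB rescue prompt this could look roughly like the following (the device list below is purely illustrative, yours will differ):

```
grub rescue> ls
(hd0) (hd0,gpt1) (hd0,gpt2) (hd1) (hd1,gpt1) (hd1,gpt2) (hd2) (hd2,gpt1) (hd2,gpt2)
```

if fewer disks show up there than are in your pool, the firmware is not handing all of them to GRUB.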
 

I've successfully installed proxmox on one drive and it works.

```
# cat /boot/grub/device.map
(hd0)    /dev/nvme0n1
(hd1)    /dev/nvme1n1
(hd2)    /dev/nvme2n1
(hd3)    /dev/nvme3n1
(hd4)    /dev/nvme4n1
(hd5)    /dev/nvme5n1
```
 

Attachment: iKVM_capture.jpg (iKVM capture of the GRUB boot device list)
your screenshot would indicate that GRUB only sees three boot devices. while this means that you can probably boot from your six-disk pool by selecting the "right" three disks (one disk from each mirrored pair), that will not be a very stable setup. if you cannot get your BIOS to pass all six disks to GRUB, I recommend installing PVE to a small non-NVMe pool (or a non-ZFS disk), and using the big NVMe pool just for guest disks.
 

Thank you! I've recreated my RAID-10 so that GRUB can see the three necessary drives and access the pool.
 
There is still a small problem left after I installed Proxmox over Debian: I created ZFS storage in the GUI, but I can't create a container.

```
mounting container failed
TASK ERROR: cannot open directory //rpool: No such file or directory
```

Something is wrong with the mount points, but I can't figure out how to fix them...

```
zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
rpool              10.1G   333G    96K  /
rpool/ROOT         1.18G   333G    96K  none
rpool/ROOT/debian  1.18G   333G  1.18G  /
rpool/home          236K   333G    96K  /home
rpool/home/root     140K   333G   140K  /root
rpool/swap         8.50G   341G    64K  -
rpool/var           457M   333G    96K  /var
rpool/var/cache     454M   333G   454M  /var/cache
rpool/var/log      1.60M   333G  1.60M  /var/log
rpool/var/mail       96K   333G    96K  /var/mail
rpool/var/spool     680K   333G   680K  /var/spool
rpool/var/tmp       128K   333G   128K  /var/tmp
rpool/vmdata         96K   333G    96K  /vmdata
```
 
PVE requires the mount point of the dataset to be left at its default value. If your storage in PVE uses the dataset "mypool/something/mydataset", PVE expects the dataset to be mounted at "/mypool/something/mydataset".
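
you can check the current value with "zfs get" - for example for the "rpool/vmdata" dataset from your output above, the result should look roughly like this:

```
# zfs get mountpoint rpool/vmdata
NAME          PROPERTY    VALUE    SOURCE
rpool/vmdata  mountpoint  /vmdata  inherited from rpool
```

the mountpoint "/vmdata" does not match the dataset name "rpool/vmdata", which is why PVE cannot find the directory it expects.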
 

And how should I remount it? I tried deleting the dataset and the storage and creating the storage without a dataset, but got the same error...
 
just set the "mountpoint" property in ZFS (MOUNTPOINT and DATASET need to be replaced accordingly):

```
zfs set mountpoint=MOUNTPOINT DATASET
```

for example, you could use your "rpool/vmdata" dataset for storing VM and container images, managed by PVE:

```
zfs set mountpoint=/rpool/vmdata rpool/vmdata
```

and then point PVE to this dataset (either via the web interface, or directly in /etc/pve/storage.cfg)
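
for reference, a matching storage definition in /etc/pve/storage.cfg could look roughly like this (the storage ID "vmdata" is just an example name):

```
zfspool: vmdata
        pool rpool/vmdata
        content images,rootdir
```

with "content images,rootdir" the storage can hold both VM disk images and container root filesystems.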
 
