New install 5.4-15 running without rpool / local-zfs question mark

shanonew

Member
Jul 14, 2020
I have a new install that is running without rpool.
I was able to join it to our cluster.
I did not notice the missing pool at first and migrated a few VMs onto the new node.

I only noticed the problem when I tried to migrate VMs away from it.

In the GUI, the local-zfs storage for this node shows the gray question mark.
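For anyone checking the same thing from the shell: the storage status the GUI reflects can be queried with a generic command (nothing specific to our setup, output omitted):
Code:
# per-storage status on this node; a healthy local-zfs shows as "active"
pvesm status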

Another observation: when I SSH into the other nodes, there is a colorful user/host/time header:
Code:
- user- proxmox1.somename.net - ~ - 12:15:22 EDT
>>>>

But on this node, the prompt is plain text ending in #.
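(As an aside, I assume the colorful header on the other nodes comes from a custom root prompt, e.g. in /root/.bashrc, that was set up on those nodes rather than from the installer itself. Something roughly like this would produce that kind of header; the exact format is a guess:)
Code:
# guess at a multi-line prompt printing user/host/dir/time, then ">>>>"
PS1='- \u - \H - \w - \D{%T %Z}\n>>>> '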

Did I miss a step in the installer to choose ZFS? This is my 3rd or 4th install, and I do not recall having to do so in the past.

Code:
proxmox16:~# zpool list
no pools available
proxmox16:~# zpool status
no pools available
proxmox16:~# zpool import
no pools available to import
proxmox16:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 223.5G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part /boot/efi
└─sda3               8:3    0   223G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  55.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.4G  0 lvm
  │ └─pve-data     253:4    0 140.4G  0 lvm
  └─pve-data_tdata 253:3    0 140.4G  0 lvm
    └─pve-data     253:4    0 140.4G  0 lvm
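For context, the question mark seems to come from local-zfs being defined cluster-wide in /etc/pve/storage.cfg while the pool does not exist on this node, since this install ended up on ext4/LVM. Roughly what I would check (the entry shown in the comments is the Proxmox default layout, given as an example, not copied from our config):
Code:
# root filesystem type on this node (ext4 on LVM here, not ZFS)
findmnt -no FSTYPE /

# cluster-wide storage definitions; a default local-zfs entry looks roughly like:
#   zfspool: local-zfs
#           pool rpool/data
#           content images,rootdir
cat /etc/pve/storage.cfg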

Thanks
 
Yes, I found the magic "Options" button on the Target Harddisk screen during install to change from the default ext4 to ZFS.
(It has been 1.5+ years since I last did a raw/fresh install.)

It still took multiple iterations to get it to boot without the Proxmox ISO's rescue GRUB.

This current install is on a:
Dell R840
2 SATA SSDs attached to the BOSS-S1
no hardware RAID config
the Proxmox installer sees 2 disks

Re: why 5.4? Because the 12 other nodes are on 5.4 and I want to join the cluster, migrate VMs, and retire 3-4 low-RAM (9+ year old) nodes before upgrading to v6+. My plan is to take those 3-4 nodes and create a test cluster. We have 2 other standalone Proxmox servers and perform upgrades on those first, but we do not currently have a way to see how an upgrade affects a cluster.

Proxmox-in-production plug: we have been running Proxmox since 2011 and have 12 nodes in the cluster.

Here are most of the iterations I went through:

1) Proxmox 5.4 ISO install (ext4)
SUCCESS (partial) - no local-zfs, cluster migration problems

* this was the point at which I opened this thread *

2) Proxmox 6.2 ISO install (ZFS)
In Dell LifeCycle - OS Deployment (Step 2 of 5), set available operating system = ANY
Boot UEFI mode w/ SATA AHCI
FAIL - Proxmox would only boot after rescue GRUB

2a) Tried the STH rootdelay GRUB fix, but it had no effect (servethehome.com fixing-proxmox-ve-cannot-import-rpool-zfs-boot-issue); see the sketch after this list for roughly what that fix involves.

3) Proxmox 6.2 ISO install (ZFS)
In Dell LifeCycle - OS Deployment (Step 2 of 5), set available operating system = RHEL 7.7*
Boot UEFI mode w/ SATA AHCI
SUCCESS - the first boot message indicated a Proxmox boot failure, but then it loaded

4) Proxmox 5.4 ISO install (ZFS) w/ settings from #3
FAIL - would only boot after rescue GRUB

5) Retry of #4, Proxmox 5.4 (ZFS), after a BIOS change - no reinstall
Changed boot mode to BIOS & SATA-RAID
SUCCESS
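For reference, the STH rootdelay GRUB fix mentioned in 2a boils down to roughly the following; the 10-second value is just an example:
Code:
# in /etc/default/grub, add a root delay so the disks are up before rpool is imported
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# then regenerate the GRUB config and reboot
update-grub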

Here are the final results:
Code:
proxmox17:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdc3    ONLINE       0     0     0

errors: No known data errors

proxmox17:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   222G   813M   221G         -     0%     0%  1.00x  ONLINE  -


proxmox17:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 223.6G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0   512M  0 part
└─sda3   8:3    0 223.1G  0 part
sdc      8:32   0 223.6G  0 disk
├─sdc1   8:33   0  1007K  0 part
├─sdc2   8:34   0   512M  0 part
└─sdc3   8:35   0 223.1G  0 part
sr0     11:0    1  1024M  0 rom
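And to confirm the node is now usable for ZFS-backed storage, these are the generic checks I would run (output omitted):
Code:
# datasets created by the installer (rpool/ROOT/..., rpool/data, ...)
zfs list

# local-zfs should now report as active for this node
pvesm status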