I have a new install that is running without an rpool.
I was able to join it to our cluster.
I did not notice this at first, and I migrated a few VMs onto the new node.
I only noticed when I tried to migrate away from it.
In the GUI, the local-zfs storage for this node shows a gray question mark.
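If it helps, I can also pull the storage details from the shell; as far as I know these are the stock Proxmox tools for it:

Code:

# status of all configured storages as this node sees them (stock Proxmox CLI)
pvesm status
# the cluster-wide storage definitions, including the local-zfs entry
cat /etc/pve/storage.cfg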
Another observation: when I SSH into the other nodes, there is a colorful user/host/time header:
Code:

- user - proxmox1.somename.net - ~ - 12:15:22 EDT
But on this node, it is just plain text with a # prompt.
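Side note, in case it matters: my guess is the colorful header on the other nodes comes from a customized root shell prompt rather than anything ZFS-related. A rough sketch of a ~/.bashrc line that would print a similar header (illustrative only, I do not know what those nodes actually use):

Code:

# hypothetical ~/.bashrc snippet for root; only approximates the header above
PS1='\[\e[1;32m\]- \u - \H - \w - \D{%H:%M:%S %Z}\[\e[0m\]\n# '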
Did I miss a step in the installer to choose ZFS? This is my 3rd or 4th install, and I do not recall having to do so in the past.
Code:

	proxmox16:~# zpool list
no pools available
proxmox16:~# zpool status
no pools available
proxmox16:~# zpool import
no pools available to import
proxmox16:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 223.5G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part /boot/efi
└─sda3               8:3    0   223G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  55.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0   1.4G  0 lvm
  │ └─pve-data     253:4    0 140.4G  0 lvm
  └─pve-data_tdata 253:3    0 140.4G  0 lvm
    └─pve-data     253:4    0 140.4G  0 lvm
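For what it is worth, the lsblk output above looks like the installer's default LVM layout rather than ZFS. If it is useful, a quick way to double-check what the root filesystem actually is (plain util-linux, nothing Proxmox-specific):

Code:

# prints the mountpoint, backing device, and filesystem type of /
findmnt -o TARGET,SOURCE,FSTYPE /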
Thanks