Boot problem on ZFS 3 way mirror

John Morrison
Feb 26, 2018
After setting `zfs set xattr=sa dnodesize=auto rpool/data` (I read that you shouldn't set this on the rpool itself, so I'm not sure whether this problem is caused by that), I can no longer boot. I get this:

Attempting Boot From Hard Drive (C:)
error: no such device "SOME HEX NUMBER"
error: unknown filesystem.
grub:#_

So I went into a live USB environment (the Proxmox 5 installer's abort-to-command-line), chrooted in, and imported the rpool.
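Roughly what I did in the live environment (a sketch; the bind mounts are needed so the grub tools work inside the chroot):

```shell
# import the pool with an alternate root so it mounts under /mnt
zpool import -f -R /mnt rpool
# make the kernel interfaces available inside the chroot
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash
```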

When I run update-grub, I get a similar "unknown filesystem" error there too.

The system is a single rpool with /boot inside it.

Question is, how do I repair this? Or should I just send the pool to an external USB disk (I also have the VMs backed up on an SSD) and put the OS or /boot on an SSD instead?

I would like to keep my Proxmox configuration, as I have some custom network config with vswitch etc. I don't want to go through that again, installing vswitch from .debs offline.
 
hm - grub has its own implementation of ZFS, which lacks certain features - it seems dnodesize=auto is among them: https://github.com/zfsonlinux/zfs/issues/8538
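To confirm, you can check the pool's feature state (a sketch; 'rpool' is the default pool name on a PVE ZFS install):

```shell
# "active" here means some dataset has already written large dnodes,
# which grub's own ZFS reader cannot parse
zpool get feature@large_dnode rpool
# the property that caused it; "legacy" is the grub-safe value
zfs get -r dnodesize rpool
```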

depending on when you set up the system, you could use systemd-boot instead of grub, if your system boots with UEFI: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysboot

you need to have a large enough ESP partition available (at least around 300M) - then you should be able to use systemd-boot instead of grub.
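You can check for an ESP with lsblk - an EFI System Partition has the well-known GPT type GUID c12a7328-f81f-11d2-ba4b-00a0c93ec93b and is normally FAT-formatted:

```shell
# look for a vfat partition of ~300M or more with the ESP type GUID
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE
```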

else I would suggest setting up the system on a new disk - and then send/recv the initial installation

I hope this helps!
 

Thanks for replying. I only set dnodesize on the dataset rpool/data, not on the rpool itself. I even went back into the live chroot environment and changed it back, but to no avail.

I am using legacy boot, not UEFI.

Thanks, John
 
As said - the issue is due to the activation of the large_dnode feature (you can see that 'feature@large_dnode' is active on your pool - see `man zpool-features` for an explanation).

Sadly your installation does not have an ESP on the first 2 disks - thus you cannot simply switch to UEFI+systemd-boot.

You can either:
* remove all filesystems which contained large dnodes, by copying them to a new dataset which does not use the feature (see the 'large_dnode' section of zpool-features(8))
* or reinstall the system anew (maybe with UEFI and systemd-boot) - and send/recv the current installation over
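A sketch of the first option (dataset names are assumptions; note that a plain zfs send stream can carry the large dnodes along with it, so a file-level copy into a dnodesize=legacy dataset is the safer route - verify the copy before destroying anything):

```shell
# create a replacement dataset that only writes legacy 512-byte dnodes
zfs create -o dnodesize=legacy rpool/data_new
# file-level copy, preserving hardlinks, ACLs and xattrs
rsync -aHAX /rpool/data/ /rpool/data_new/
# after verifying the copy, destroy the old dataset and swap names
zfs destroy -r rpool/data
zfs rename rpool/data_new rpool/data
```

Per zpool-features(8), feature@large_dnode should drop back from "active" to "enabled" once every filesystem that ever contained a large dnode is destroyed, at which point grub can read the pool again.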

I hope this helps!
 
Thank you
 
I have reinstalled on an SSD with XFS and EFI boot via a USB disk, as the SSD couldn't boot on its own! The SSD is an M.2 on a 4-way PCIe card.

I have imported the ZFS datasets and removed the LVM data pool from this new install.

Can I just add the ZFS dataset rpool/data to storage?

What's the best way to get back to almost the same config as before? rsync some parts from rpool/ROOT/pve-1?
 
I have imported the ZFS datasets. removed the LVM data pool from this new install.
How did you import them?
please post some output that explains the current state of your system:
* `lsblk`
* `vgs -a ; pvs -a; lvs -a`
* `zpool status; zpool list; zfs list`

since otherwise I can't really guess what your system currently looks like.

Whats the best way to get back with almost the same config as before?
PVE's config is for the greatest part the files in /etc/pve/ (you need a running pmxcfs to have access to them - see https://pve.proxmox.com/pve-docs/chapter-pmxcfs.html - you could also copy the sqlite db and start from that). Apart from that, it's the same config files you'd move on any regular Linux system (network configuration, /etc/hosts, custom cronjobs, services, etc.).
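Copying the sqlite db over from the old root could look like this (a sketch; the /mnt path assumes that's where the old pool is mounted, and this replaces the new install's config, so it is backed up first):

```shell
# pmxcfs must not be running while its backing store is swapped
systemctl stop pve-cluster
cp /var/lib/pve-cluster/config.db /var/lib/pve-cluster/config.db.bak
cp /mnt/rpool/ROOT/pve-1/var/lib/pve-cluster/config.db /var/lib/pve-cluster/
systemctl start pve-cluster
```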

I hope that helps!
 

I have uploaded three files.



Thanks for replying Stoiko.
 

Attachments

It's funny - when I mount rpool/ROOT/pve-1:

ls
SSD TEST_VOL_DOCKER bin boot core dev etc home lib lib64 media mnt opt proc root rpool run sbin srv sys tmp usr var

root@proxmox001-man:/mnt/rpool/ROOT/pve-1# cd etc/pve/
root@proxmox001-man:/mnt/rpool/ROOT/pve-1/etc/pve# ls
root@proxmox001-man:/mnt/rpool/ROOT/pve-1/etc/pve#

There is nothing in there!

root@proxmox001-man:/mnt/rpool/ROOT/pve-1/etc# ls
PackageKit bash.bashrc ceph dbus-1 ethertypes gshadow hosts.deny iscsi letsencrypt logrotate.d manpath.config netconfig passwd pulse rc3.d resolvconf sensors.d ssl sysstat vim
X11 bash_completion cifs-utils debconf.conf fdmount.conf gshadow- idmapd.conf issue libaudit.conf lvm mediaprm network passwd- pve rc4.d rmt sensors3.conf staff-group-for-usr-local systemd vzdump.conf
adduser.conf bash_completion.d console-setup debian_version fonts gss init issue.net libnl-3 lxc mime.types networks perl python rc5.d rpc services subgid terminfo wgetrc
aliases bindresvport.blacklist corosync default fstab gssapi_mech.conf init.d kernel local lynx mke2fs.conf newt pm python2.7 rc6.d rsyslog.conf shadow subgid- timezone xdg
aliases.db binfmt.d cron.d deluser.conf fuse.conf hdparm.conf initramfs-tools ksmtuned.conf locale.alias machine-id modprobe.d nsswitch.conf polkit-1 python3 rcS.d rsyslog.d shadow- subuid tmpfiles.d zfs
alternatives byobu cron.daily dhcp gai.conf host.conf inputrc kvm locale.gen magic modules openvswitch postfix python3.5 redhat-release samba shells subuid- ucf.conf
apm ca-certificates cron.hourly dkms groff hostid insserv ld.so.cache localtime magic.mime modules-load.d opt ppp rc.d reportbug.conf screenrc skel sudoers udev
apparmor ca-certificates.conf cron.monthly docker group hostname insserv.conf ld.so.conf logcheck mail.rc motd os-release profile rc0.d request-key.conf securetty smartd.conf sudoers.d ufw
apparmor.d ca-certificates.conf.dpkg-old cron.weekly dpkg group- hosts insserv.conf.d ld.so.conf.d login.defs mailcap mtab pam.conf profile.d rc1.d request-key.d security smartmontools sysctl.conf update-motd.d
apt calendar crontab environment grub.d hosts.allow iproute2 ldap logrotate.conf mailcap.order nanorc pam.d protocols rc2.d resolv.conf selinux ssh sysctl.d updatedb.conf

But etc itself looks fine - there's just nothing in pve!
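An empty /etc/pve on a mounted offline copy is expected: /etc/pve is a pmxcfs FUSE mount, only populated while the pve-cluster service is running. On disk, its contents live in the sqlite database, which you can peek at directly (a sketch; the table name is an assumption based on pmxcfs' backing-store layout, the path assumes a default install):

```shell
# list the file entries stored in pmxcfs' sqlite backing store
sqlite3 /mnt/rpool/ROOT/pve-1/var/lib/pve-cluster/config.db \
  "SELECT name FROM tree ORDER BY name;"
```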
 
