- Feb 16, 2018
Long story short: I decided to upgrade from version 6 to version 7, and only then found out that having the root filesystem on a ZFS mirror is a problem with the latest ZFS version. I ran proxmox-boot-tool on the available 512M partitions and then discovered that one of the SSDs in the mirror has probably died, despite being less than a few months old (yes, it's time to set up some monitoring on that after fixing this), so I managed to initialize the sdh2 partition but not sdi2. My question now is: isn't that enough for it to boot, or should I roll back, fix the mirror (I have no idea which of the two SSDs is the faulty one, so I'll have to experiment), and then re-run the upgrade?
root@pve4:~# findmnt /
TARGET SOURCE           FSTYPE OPTIONS
/      rpool/ROOT/pve-1 zfs    rw,relatime,xattr,noacl
root@pve4:~# ls /sys/firmware/efi
ls: cannot access '/sys/firmware/efi': No such file or directory
root@pve4:~# lsblk -o +FSTYPE
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sdh       8:112  0 119.2G  0 disk
|-sdh1    8:113  0  1007K  0 part
|-sdh2    8:114  0   512M  0 part            vfat
`-sdh3    8:115  0  31.5G  0 part            zfs_member
sdi       8:128  0 119.2G  0 disk
|-sdi1    8:129  0  1007K  0 part
|-sdi2    8:130  0   512M  0 part            vfat
`-sdi3    8:131  0  31.5G  0 part            zfs_member
root@pve4:~# pveversion
pve-manager/7.0-13/7aa7e488 (running kernel: 5.4.124-1-pve)
root@pve4:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
F9AB-34BA is configured with: uefi (versions: 5.4.103-1-pve, 5.4.98-1-pve), grub (versions: 5.11.22-5-pve, 5.4.124-1-pve, 5.4.140-1-pve)
mount: /var/tmp/espmounts/F9AB-7A19: can't read superblock on /dev/sdi2.
mount of /dev/disk/by-uuid/F9AB-7A19 failed - skipping
root@pve4:~#
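In case it helps anyone reading along, here is a hedged sketch of what I plan to try next to identify the bad SSD and repair the boot setup. The device names sdh/sdi are taken from the lsblk output above; smartctl assumes the smartmontools package is installed, and the replacement procedure assumes the new disk can use the same partition layout as the surviving one.

```shell
# See whether ZFS already flags one side of the root mirror as degraded
zpool status -v rpool

# Read model/serial info and SMART health so the failing SSD can be
# matched to a physical drive (smartmontools assumed installed)
smartctl -i /dev/sdh
smartctl -a /dev/sdi    # full SMART attributes and error log for the suspect disk

# After physically replacing the dead disk (assumed here to be sdi):
sgdisk /dev/sdh -R /dev/sdi      # replicate the partition table from the good disk
sgdisk -G /dev/sdi               # randomize GUIDs on the copy
proxmox-boot-tool format /dev/sdi2   # re-create the ESP
proxmox-boot-tool init /dev/sdi2     # register it for kernel/bootloader sync
zpool replace rpool /dev/sdi3        # resilver the ZFS member partition
```

I haven't run the replacement steps yet, so treat them as a plan rather than a verified fix; `proxmox-boot-tool status` afterwards should show both ESPs being synced.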