Upgrade to 6.0 went wrong.

billbillw

Member
Hello,
After a long time waiting, I finally tried to do an in-place upgrade from Proxmox 5.4 to 6.0. Why not? I have some time now that we are social distancing and under stay-at-home orders.

I followed the guide here: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0.
I did all my updates, rebooted, checked all the repositories, turned off my containers and proceeded.

Things were going along until the upgrade stopped and asked whether I wanted to remove proxmox-ve. I followed some commands that were on the PuTTY screen, and it then ran for some time (about 20 minutes) before finishing. I rebooted, and now I have no web interface and my Linux containers have not started.
When I check the status of pveproxy.service, it says it is masked.
I can see my ZFS pool, and it is healthy.
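(For reference: pveproxy is shipped by the pve-manager package, so a masked or missing service after a partial upgrade usually just means that package was removed rather than upgraded. A rough way to check, as a sketch only, since the exact package states may differ:

# "rc" means removed with config files left behind; "ii" means installed
dpkg -l pve-manager proxmox-ve pve-cluster

# once proxmox-ve is reinstalled, reload systemd and bring the services back up
systemctl daemon-reload
systemctl enable --now pveproxy.service pvedaemon.service)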

I have no subscription and used these repositories:
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

# security updates
deb http://security.debian.org buster/updates main contrib

I'm not exactly a Linux expert. I followed some tutorials to set this up about two years ago. It is just a single server with a four-disk ZFS pool and an SSD running Proxmox with two Linux containers: one is Ubuntu, which mainly runs my Plex server, and the other is a TurnKey file server.

Any help would be appreciated.
Thanks,
Bill
 
My install looks like this:
root@billx2:~# dpkg -l | grep pve
ii libcfg6:amd64 2.4.4-pve1 amd64 cluster engine CFG library
ii liblvm2app2.2:amd64 2.02.168-pve6 amd64 LVM2 application library
ii liblvm2cmd2.02:amd64 2.02.168-pve6 amd64 LVM2 command library
ii libnvpair1linux 0.7.13-pve1~bpo2 amd64 Solaris name-value library for Linux
ii libpve-common-perl 5.0-56 all Proxmox VE base library
ii libpve-http-server-perl 2.0-14 all Proxmox Asynchrounous HTTP Server Implementation
ii libtotem-pg5:amd64 2.4.4-pve1 amd64 cluster engine Totem library
ii libuutil1linux 0.7.13-pve1~bpo2 amd64 Solaris userland utility library for Linux
ii libzfs2linux 0.7.13-pve1~bpo2 amd64 OpenZFS filesystem library for Linux
ii libzpool2linux 0.7.13-pve1~bpo2 amd64 OpenZFS pool library for Linux
ii lxc-pve 3.1.0-7 amd64 Linux containers userspace tools
ii lxcfs 3.0.3-pve1 amd64 LXC userspace filesystem
ii novnc-pve 1.0.0-3 amd64 HTML5 VNC client
rc pve-cluster 5.0-38 amd64 Cluster Infrastructure for Proxmox Virtual Environment
rc pve-container 2.0-41 all Proxmox VE Container management tool
ii pve-docs 5.4-2 all Proxmox VE Documentation
rc pve-firewall 3.0-22 amd64 Proxmox VE Firewall
ii pve-firmware 2.0-7 all Binary firmware code for the pve-kernel
rc pve-ha-manager 2.0-9 amd64 Proxmox VE HA Manager
ii pve-kernel-4.13 5.2-2 all Latest Proxmox VE Kernel Image
ii pve-kernel-4.13.13-2-pve 4.13.13-33 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.13.13-5-pve 4.13.13-38 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.13.16-1-pve 4.13.16-46 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.13.16-2-pve 4.13.16-48 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.13.16-3-pve 4.13.16-50 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.13.16-4-pve 4.13.16-51 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15 5.4-16 all Latest Proxmox VE Kernel Image
ii pve-kernel-4.15.15-1-pve 4.15.15-6 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.17-1-pve 4.15.17-9 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.17-2-pve 4.15.17-10 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-1-pve 4.15.18-19 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-11-pve 4.15.18-34 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-12-pve 4.15.18-36 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-13-pve 4.15.18-37 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-14-pve 4.15.18-39 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-15-pve 4.15.18-40 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-17-pve 4.15.18-43 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-18-pve 4.15.18-44 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-20-pve 4.15.18-46 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-21-pve 4.15.18-48 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-24-pve 4.15.18-52 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-26-pve 4.15.18-54 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-27-pve 4.15.18-55 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-7-pve 4.15.18-27 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-9-pve 4.15.18-30 amd64 The Proxmox PVE Kernel Image
ii pve-libspice-server1 0.14.1-2 amd64 SPICE remote display system server library
rc pve-manager 5.4-13 amd64 Proxmox Virtual Environment Management Tools
ii pve-qemu-kvm 3.0.1-4 amd64 Full virtualization on x86 hardware
ii pve-xtermjs 3.12.0-1 amd64 HTML/JS Shell client
ii spl 0.7.13-pve1~bpo2 amd64 Solaris Porting Layer user-space utilities for Linux
ii zfs-initramfs 0.7.13-pve1~bpo2 all OpenZFS root filesystem capabilities for Linux - initramfs
ii zfs-zed 0.7.13-pve1~bpo2 amd64 OpenZFS Event Daemon
ii zfsutils-linux 0.7.13-pve1~bpo2 amd64 command-line tools to manage OpenZFS filesystems
 
root@billx2:~# apt install proxmox-ve
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package proxmox-ve
 
Seems you do not have access to the repositories?

Please run:

> apt update
> apt install proxmox-ve
 
Trying the process on the 'see also' link you posted. It seems to be working furiously after I did the apt full-upgrade.

But then I get this again:

GRUB failed to install to the following devices:

/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde3 /dev/dm-0

Do you want to continue anyway? If you do, your computer may not start up properly.

Writing GRUB to boot device failed - continue?

<Yes> <No>


I had that previously. I selected all disks to be safe, but it keeps looping back to this prompt. The only way to get out is to answer <Yes> and continue.
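(For reference, the root filesystem here is on the SSD's LVM volume (pve-root) and the four 3.7T drives are whole-disk ZFS members, so it should be enough to write GRUB only to the SSD rather than to every disk. A sketch of what that might look like, assuming /dev/sde really is the boot SSD; confirm with lsblk first:

# re-run the GRUB install-device selection and tick only the SSD
dpkg-reconfigure grub-pc

# or install directly and regenerate the config
grub-install /dev/sde
update-grub)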
 
Seems you do not have access to the repositories?

Please run:

> apt update
> apt install proxmox-ve

I think maybe it was because I didn't have the repository key. That step is not shown on this page: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

It might help if that page were updated to include it. The page you linked is definitely more helpful to a no-subscription user like me. Waiting for a reboot to see if things are fixed.
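(For completeness, the key step in the PVE 6 on Buster install notes looks roughly like this; shown as a sketch, with the filename being the one used for the Proxmox 6.x repository:

wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update)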
 
Well, it seems to be somewhat fixed. I can get to the GUI and my containers are running, but for some reason the files on my ZFS pool are not showing up. The pool shows about 4.5TB allocated, but the files are not visible. I'll do some more digging and see if I can figure out what's going on there.
 
root@billx2:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
storage 3.31T 7.19T 34.4K /storage
storage/share 3.31T 7.19T 41.9K /storage/share
storage/share/Files 9.01G 7.19T 9.01G /storage/share/Files
storage/share/Music 481G 7.19T 481G /storage/share/Music
storage/share/Photos 231G 7.19T 231G /storage/share/Photos
storage/share/Video 2.61T 7.19T 2.61T /storage/share/Video
storage/share/downloads 32.9K 1000G 32.9K /storage/share/downloads
storage/share/iso 1.67G 7.19T 1.67G /storage/share/iso
storage/vmstorage 65.8K 7.19T 32.9K /storage/vmstorage
storage/vmstorage/limited 32.9K 1000G 32.9K /storage/vmstorage/limited

Clearly I have data on my storage, but it is not being seen by my containers. Is there a good source for troubleshooting ZFS issues? Again, I followed a tutorial when I set this up and I am struggling.
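(One quick thing to compare, as a sketch only, with 100 and 101 being the container IDs visible in the LVM volume names above, is each container's mount-point configuration against what is actually on the host:

# show the container config, including any mp0/mp1 mount-point entries
pct config 100
pct config 101

# check on the host that the paths those mount points reference really contain the data
ls -la /storage/share)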
 
I can't seem to figure this out. Everything I know how to check seems to be set up properly, but neither my TurnKey file server nor my Ubuntu container can see any of the files. When I check the containers in the Proxmox GUI, the mount points show up in the Resources tab. Some output from PuTTY below:

root@billx2:~# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
storage 14.5T 4.43T 10.1T - - 0% 30% 1.00x ONLINE -
raidz1 14.5T 4.43T 10.1T - - 0% 30.5% - ONLINE
scsi-35000cca03b5b6754 - - - - - - - - ONLINE
scsi-35000cca03bccc54c - - - - - - - - ONLINE
scsi-35000cca05c07ee5c - - - - - - - - ONLINE
scsi-35000cca05c0983b0 - - - - - - - - ONLINE
root@billx2:~# zpool iostat
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
storage 4.43T 10.1T 0 0 564 707
root@billx2:~# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
-------------------------- ----- ----- ----- ----- ----- -----
storage 4.43T 10.1T 0 0 564 706
raidz1 4.43T 10.1T 0 0 564 706
scsi-35000cca03b5b6754 - - 0 0 152 180
scsi-35000cca03bccc54c - - 0 0 129 175
scsi-35000cca05c07ee5c - - 0 0 145 178
scsi-35000cca05c0983b0 - - 0 0 137 172
-------------------------- ----- ----- ----- ----- ----- -----
root@billx2:~# pvesm zfsscan
storage
storage/share
storage/share/Files
storage/share/Music
storage/share/Photos
storage/share/Video
storage/share/downloads
storage/share/iso
storage/vmstorage
storage/vmstorage/limited
root@billx2:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.7T 0 disk
├─sda1 8:1 0 3.7T 0 part
└─sda9 8:9 0 8M 0 part
sdb 8:16 0 3.7T 0 disk
├─sdb1 8:17 0 3.7T 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 3.7T 0 disk
├─sdc1 8:33 0 3.7T 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 3.7T 0 disk
├─sdd1 8:49 0 3.7T 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 119.2G 0 disk
├─sde1 8:65 0 1M 0 part
├─sde2 8:66 0 256M 0 part
└─sde3 8:67 0 119G 0 part
├─pve-root 253:0 0 29.5G 0 lvm /
├─pve-swap 253:1 0 8G 0 lvm [SWAP]
├─pve-data_tmeta 253:2 0 68M 0 lvm
│ └─pve-data-tpool 253:4 0 66.8G 0 lvm
│ ├─pve-data 253:5 0 66.8G 0 lvm
│ ├─pve-vm--101--disk--1 253:6 0 8G 0 lvm
│ └─pve-vm--100--disk--1 253:7 0 8G 0 lvm
└─pve-data_tdata 253:3 0 66.8G 0 lvm
└─pve-data-tpool 253:4 0 66.8G 0 lvm
├─pve-data 253:5 0 66.8G 0 lvm
├─pve-vm--101--disk--1 253:6 0 8G 0 lvm
└─pve-vm--100--disk--1 253:7 0 8G 0 lvm
sr0 11:0 1 1024M 0 rom
root@billx2:~# zpool status
pool: storage
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0B in 0 days 02:57:30 with 0 errors on Sun Mar 8 04:21:32 2020
config:

NAME STATE READ WRITE CKSUM
storage ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
scsi-35000cca03b5b6754 ONLINE 0 0 0
scsi-35000cca03bccc54c ONLINE 0 0 0
scsi-35000cca05c07ee5c ONLINE 0 0 0
scsi-35000cca05c0983b0 ONLINE 0 0 0

errors: No known data errors
root@billx2:~#
 
After running a few more commands, it appears that the ZFS Pool is not actually mounting.

root@billx2:~# zfs get mounted
NAME PROPERTY VALUE SOURCE
storage mounted no -
storage/share mounted no -
storage/share/Files mounted no -
storage/share/Music mounted no -
storage/share/Photos mounted no -
storage/share/Video mounted no -
storage/share/downloads mounted no -
storage/share/iso mounted no -
storage/vmstorage mounted no -
storage/vmstorage/limited mounted no -
root@billx2:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2020-04-01 19:02:24 EDT; 4h 11min ago
Docs: man:zfs(8)
Process: 2045 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
Main PID: 2045 (code=exited, status=1/FAILURE)

Apr 01 19:02:24 billx2 systemd[1]: Starting Mount ZFS filesystems...
Apr 01 19:02:24 billx2 zfs[2045]: cannot mount '/storage': directory is not empty
Apr 01 19:02:24 billx2 systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 19:02:24 billx2 systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Apr 01 19:02:24 billx2 systemd[1]: Failed to start Mount ZFS filesystems.


I think it is a problem similar to the one reported here: https://www.reddit.com/r/Proxmox/comments/cj9wp8/zfs_pool_no_longer_mounts_after_upgrade_54_to_60/

So I made the change discussed in the Reddit thread above, editing zfs-mount.service to use ExecStart=/sbin/zfs mount -O -a.
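(For anyone repeating this, the edit is to the ExecStart line in the [Service] section of /lib/systemd/system/zfs-mount.service, followed by a reload; this is a sketch of the steps, not an official fix:

# in /lib/systemd/system/zfs-mount.service, [Service] section:
ExecStart=/sbin/zfs mount -O -a

# then pick up the change and retry the mount
systemctl daemon-reload
systemctl restart zfs-mount.service)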

After a reboot I get this instead:

root@billx2:~# zfs get mounted
NAME PROPERTY VALUE SOURCE
storage mounted yes -
storage/share mounted yes -
storage/share/Files mounted yes -
storage/share/Music mounted yes -
storage/share/Photos mounted yes -
storage/share/Video mounted yes -
storage/share/downloads mounted yes -
storage/share/iso mounted yes -
storage/vmstorage mounted yes -
storage/vmstorage/limited mounted yes -

Everything seems to be working now, but I'm afraid it will be a problem again with updates/reboots. The Reddit poster seemed to have continuing problems.

I'm not sure what changed or why this is a problem now; the configuration above was working through several versions of Proxmox 5. I don't know what, if anything, is getting written into the storage root directory. (Edit: Just checked, and I can't see any files in /storage.)
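(One way to see what is actually blocking the mount is to look at the mountpoint directories, hidden entries included, while the datasets are unmounted; a sketch:

zfs get -r mounted storage
ls -lA /storage
ls -lA /storage/share)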
 
Well, it happened again after doing some updates and rebooting today: my ZFS pool would not mount. I sure would like a solution other than having to edit zfs-mount.service every time it happens.
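(Two hedged alternatives to re-editing the packaged unit file after every update, both assuming the pool is still named storage as above: a systemd drop-in override, which package upgrades leave alone, or setting the ZFS overlay property so a non-empty mountpoint no longer blocks mounting.

# Option 1: drop-in override (creates /etc/systemd/system/zfs-mount.service.d/override.conf)
systemctl edit zfs-mount.service
# in the editor, add:
#   [Service]
#   ExecStart=
#   ExecStart=/sbin/zfs mount -O -a

# Option 2: let ZFS mount over non-empty directories for this pool's datasets
zfs set overlay=on storage

Neither removes whatever is writing into /storage before the pool mounts, so it is still worth finding that root cause.)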
 
