Old setup will no longer boot

Nov 4, 2014
I've had a Proxmox server running for a while. Now it won't start up properly, for some reason.

When booting it shows "Failed to start import ZFS pools by device scanning."

Then it waits for "A start job is running for dev-disk-by\x2duuid-....." and then "Welcome to emergency mode"

By doing journalctl -xb I get the following errors:
Ignoring creation of an alias umountiscsi.service for itself
Cannot import 'rpool': pool already exists
Failed to start import ZFS pools by device scanning
Timed out waiting for device dev-disk-by\x2duui...
Dependency failed for /media/raidhd
Dependency failed for Local File Systems
Dependency failed for File System Check on /dev/disk...
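
In case it helps, from the emergency shell I can dig into the failing unit with the standard systemd commands, something like the following (assuming the "import ZFS pools by device scanning" message comes from zfs-import-scan.service):

Code:
# status of the ZFS import service that is failing
systemctl status zfs-import-scan.service

# full log for that unit from the current boot
journalctl -b -u zfs-import-scan.service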

"zpool list" shows my rpool.

The pool was created by selecting ZFS RAID 10 during installation.

I did try apt dist-upgrade, but it got stuck on "systemd-sysv-generator[..]: Ignoring creation of an alias umountiscsi.service for itself" before failing with "Job for pvedaemon.service canceled. Welcome to emergency mode!" and "error processing package pve-manager (--configure): subprocess installed post-installation script returned error exit status 1".

At that point it starts asking for a password for maintenance again, but won't accept the password.

Does anyone know what's going on and how to fix it? At least temporarily, before I upgrade.

proxmox-ve: not correctly installed (running kernel: 4.4.134-1-pve)
pve-manager: not correctly installed (running version: 4.4-24/08ba4d2d)
pve-kernel-4.4.98-2-pve: 4.4.98-101
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.98-3-pve: 4.4.98-103
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.117-2-pve: 4.4.117-110
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.98-5-pve: 4.4.98-105
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.134-1-pve: 4.4.134-112
pve-kernel-4.4.98-6-pve: 4.4.98-107
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.114-1-pve: 4.4.114-108
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+2
libqb0: 1.0.1-1
pve-cluster: 4.0-55
qemu-server: 4.0-115
pve-firmware: 1.1-12
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-9~pve4
pve-container: 1.0-106
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.8-2~pve4
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
 
The problem is probably something simple, as it's a basic setup that used the Proxmox GUI for setup and installation. I've been searching all day for a solution, but I still have no idea what's going on. I know that the system has received updates and worked well until it was rebooted, and it has been rebooted before without issues. I had a similar issue once before where updating through the web GUI broke it, but I think that was fixed; this time I used "apt update" and "apt dist-upgrade" to update.

I'd really appreciate if someone could point me in the right direction.

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/zvol/rpool/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=76383413-0bdd-412d-a780-2bcfff5a87b7 /media/raidhd ext4 defaults 0 2

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9535EBA8-1011-4363-AE3E-6CD3B1FBDAAF

Device          Start        End    Sectors  Size Type
/dev/sda1          34       2047       2014 1007K BIOS boot
/dev/sda2        2048 5860516749 5860514702  2.7T Solaris /usr & Apple ZFS
/dev/sda9  5860516750 5860533134      16385    8M Solaris reserved 1


Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 81547B63-F1C5-4DBA-B268-3AF2EFFAED25

Device          Start        End    Sectors  Size Type
/dev/sdb1          34       2047       2014 1007K BIOS boot
/dev/sdb2        2048 5860516749 5860514702  2.7T Solaris /usr & Apple ZFS
/dev/sdb9  5860516750 5860533134      16385    8M Solaris reserved 1


Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4400ACE5-1E98-D043-9BC9-8B4A7B34F321

Device          Start        End    Sectors Size Type
/dev/sdc1        2048 5860515839 5860513792 2.7T Solaris /usr & Apple ZFS
/dev/sdc9  5860515840 5860532223      16384   8M Solaris reserved 1

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 6D5A4478-C5B5-6247-BC2A-EFCAA8726CE4

Device          Start        End    Sectors Size Type
/dev/sdd1        2048 5860515839 5860513792 2.7T Solaris /usr & Apple ZFS
/dev/sdd9  5860515840 5860532223      16384   8M Solaris reserved 1

Disk /dev/zd0: 31 GiB, 33285996544 bytes, 65011712 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zd16: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x0f861020

Device      Boot   Start       End   Sectors  Size Id Type
/dev/zd16p1 *       2048    999423    997376  487M 83 Linux
/dev/zd16p2      1001470 104855551 103854082 49.5G  5 Extended
/dev/zd16p5      1001472 104855551 103854080 49.5G 8e Linux LVM
 
Hi,

are you able to install the proxmox-ve package?

Code:
apt install proxmox-ve
You should also increase the root delay: edit /etc/default/grub and add "rootdelay=10" to GRUB_CMDLINE_LINUX_DEFAULT (i.e. GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"), then run update-grub.
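
For example, the relevant line in /etc/default/grub would end up looking something like this (keep whatever other options you already have there):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10 quiet"

# apply the change
update-grub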
More tips are here:

https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks
 
I've tried installing both proxmox-ve and pve-manager, but dpkg fails. I've also tried the rootdelay setting, with no success.

E: dpkg was interrupted, you must manually run 'dpkg --configure -a' to correct the problem
ok..
[...] systemd-sysv-generator[7366]: Ignoring creation of an alias umountiscsi.service for itself
Hangup
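
For reference, the standard recovery sequence after that dpkg error would be something like the following, but in my case the configure step just hangs at the umountiscsi line above:

Code:
# finish configuring any half-installed packages
dpkg --configure -a

# then try to repair remaining broken dependencies
apt -f install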

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 5h30m with 0 errors on Sun Jul 8 05:54:38 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

Error from zfs-import-scan.service: cannot import 'rpool': pool already exists

Shouldn't I be able to find the UUID from /etc/fstab in the "fdisk -l" output? I have no idea what's going on :/
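
If "fdisk -l" only shows partition tables and not filesystem UUIDs (which I suspect is the case), then the place to look would be something like:

Code:
# filesystem UUIDs as the kernel sees them
blkid

# or the by-uuid symlinks that systemd waits on at boot
ls -l /dev/disk/by-uuid/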
 
Ok. So I removed /media/raidhd from fstab and disabled zfs-import-scan, and now it works. I have no idea what raidhd is, where it came from, or why the pool is already imported when zfs-import-scan tries to import it, but at least it works again now, finally! :)
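
For anyone finding this later, the changes boiled down to something like this (the unit name is taken from the error above, and the fstab line is the one from my config earlier in the thread):

Code:
# stop the import service from racing the already-imported rpool
systemctl disable zfs-import-scan.service

# and comment out the stale mount in /etc/fstab:
# UUID=76383413-0bdd-412d-a780-2bcfff5a87b7 /media/raidhd ext4 defaults 0 2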
 
