I've had a Proxmox server running for a while, and now it won't start up properly.
When booting it shows "Failed to start import ZFS pools by device scanning."
Then it waits for "A start job is running for dev-disk-by\x2duuid-....." and then "Welcome to emergency mode"
By doing journalctl -xb I get the following errors:
Ignoring creation of an alias umountiscsi.service for itself
Cannot import 'rpool': pool already exists
Failed to start import ZFS pools by device scanning
Timed out waiting for device dev-disk-by\x2duui...
Dependency failed for /media/raidhd
Dependency failed for Local File Systems
Dependency failed for File System Check on /dev/disk...
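As far as I can tell, the /media/raidhd mount (the one whose dependency fails) comes from /etc/fstab and is referenced by UUID, which would be the dev-disk-by\x2duuid-... device systemd is waiting on. A sketch of what such an entry looks like (the UUID and filesystem type here are placeholders, not my actual values):

# Illustrative /etc/fstab entry; UUID and fs type are placeholders
UUID=1234abcd-...  /media/raidhd  ext4  defaults  0  2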
"zpool list" shows my rpool.
The pool was created by selecting ZFS RAID 10 during installation.
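Since rpool is clearly already imported (the system gets far enough to drop into emergency mode), my guess is that the "pool already exists" message is the device-scan import unit tripping over a pool the initramfs imported first. These are the checks I ran from the emergency shell (the unit name is my best guess at the stock ZoL unit matching the "import ZFS pools by device scanning" description):

zpool list                                 # rpool shows up, so the root pool did import
zpool status rpool                         # state of the RAID 10 vdevs
systemctl status zfs-import-scan.service   # the unit that seems to report "pool already exists"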
I tried apt dist-upgrade, but it got stuck on "systemd-sysv-generator[..]: Ignoring creation of an alias umountiscsi.service for itself" before failing with "Job for pvedaemon.service canceled. Welcome to emergency mode!" and "error processing package pve-manager (--configure): subprocess installed post-installation script returned error exit status 1".
At that point it starts asking for the maintenance password again, but won't accept it.
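Once I can get a root shell again, I was planning to try something along these lines to finish the half-configured packages before anything else (a sketch; it assumes the root filesystem is mounted but read-only in emergency mode):

mount -o remount,rw /     # emergency mode may leave / read-only
dpkg --configure -a       # finish the interrupted pve-manager post-install step
apt-get install -f        # fix anything the failed dist-upgrade left behind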
Does anyone know what's going on and how to fix it, at least temporarily, until I can do a proper upgrade?
Here's my pveversion -v output:
proxmox-ve: not correctly installed (running kernel: 4.4.134-1-pve)
pve-manager: not correctly installed (running version: 4.4-24/08ba4d2d)
pve-kernel-4.4.98-2-pve: 4.4.98-101
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.4.98-3-pve: 4.4.98-103
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.2.8-1-pve: 4.2.8-41
pve-kernel-4.4.117-2-pve: 4.4.117-110
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.98-5-pve: 4.4.98-105
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.134-1-pve: 4.4.134-112
pve-kernel-4.4.98-6-pve: 4.4.98-107
pve-kernel-4.4.19-1-pve: 4.4.19-66
pve-kernel-4.4.114-1-pve: 4.4.114-108
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.62-1-pve: 4.4.62-88
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+2
libqb0: 1.0.1-1
pve-cluster: 4.0-55
qemu-server: 4.0-115
pve-firmware: 1.1-12
libpve-common-perl: 4.0-96
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.9.1-9~pve4
pve-container: 1.0-106
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.8-2~pve4
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80