Problem with upgrade, udev-finish needs zfs-mount.service

Hfuhruhurr
May 16, 2017
Hi all,

I have a problem running apt update / apt dist-upgrade.

The error message (I ran only "dpkg --configure udev" to isolate it; the same happens on dist-upgrade):


root@f:~# dpkg --configure udev
Setting up udev (215-17+deb8u7) ...
addgroup: The group `input' already exists as a system group. Exiting.
update-initramfs: deferring update (trigger activated)
insserv: Service zfs-mount has to be enabled to start service udev-finish
insserv: exiting now!
update-rc.d: error: insserv rejected the script header
dpkg: error processing package udev (--configure):
subprocess installed post-installation script returned error exit status 1
Processing triggers for initramfs-tools (0.120+deb8u2) ...
update-initramfs: Generating /boot/initrd.img-4.4.6-1-pve
Errors were encountered while processing:
udev

The status of zfs-mount.service:

root@f:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/lib/systemd/system/zfs-mount.service; static)
Active: active (exited) since Mon 2016-11-14 19:39:41 CET; 6 months 0 days ago
Main PID: 2647 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/zfs-mount.service

Information on the system:

root@f:~# pveversion --verbose
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie


Any idea what's wrong, anybody?

Thanks in advance,

Rainer
 
Did you try enabling the zfs-mount service and retrying?
 
Thanks Fabian, the result is:

root@f:~# systemctl enable zfs-mount.service
The unit files have no [Install] section. They are not meant to be enabled
using systemctl.
Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).

Regards,

Rainer
 
what does "systemctl cat zfs-mount" and "systemctl show zfs-mount" output?
 
root@f:/var/log# systemctl cat zfs-mount
# /lib/systemd/system/zfs-mount.service
[Unit]
Description=Mount ZFS filesystems
DefaultDependencies=no
Wants=zfs-import-cache.service
Wants=zfs-import-scan.service
Requires=systemd-udev-settle.service
After=systemd-udev-settle.service
After=zfs-import-cache.service
After=zfs-import-scan.service
Before=local-fs.target
Before=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zfs mount -a


root@f:/var/log# systemctl show zfs-mount
Type=oneshot
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=0
TimeoutStopUSec=1min 30s
WatchdogUSec=0
WatchdogTimestampMonotonic=0
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=yes
GuessMainPID=yes
MainPID=0
ControlPID=0
Result=success
ExecMainStartTimestamp=Mon 2016-11-14 19:39:41 CET
ExecMainStartTimestampMonotonic=7552183
ExecMainExitTimestamp=Mon 2016-11-14 19:39:41 CET
ExecMainExitTimestampMonotonic=7984225
ExecMainPID=2647
ExecMainCode=1
ExecMainStatus=0
ExecStart={ path=/sbin/zfs ; argv[]=/sbin/zfs mount -a ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null)
Slice=system.slice
ControlGroup=/system.slice/zfs-mount.service
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=(null)
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
UMask=0022
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=4096
LimitAS=18446744073709551615
LimitNPROC=127907
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=127907
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SecureBits=0
CapabilityBoundingSet=18446744073709551615
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=zfs-mount.service
Names=zfs-mount.service
Requires=systemd-udev-settle.service
Wants=zfs-import-cache.service zfs-import-scan.service system.slice
RequiredBy=zfs.target zfs-share.service
Before=zfs-share.service local-fs.target systemd-remount-fs.service
After=systemd-udev-settle.service zfs-import-cache.service zfs-import-scan.service systemd-journald.socket system.slice
Description=Mount ZFS filesystems
LoadState=loaded
ActiveState=active
SubState=exited
FragmentPath=/lib/systemd/system/zfs-mount.service
UnitFileState=static
InactiveExitTimestamp=Mon 2016-11-14 19:39:41 CET
InactiveExitTimestampMonotonic=7552212
ActiveEnterTimestamp=Mon 2016-11-14 19:39:41 CET
ActiveEnterTimestampMonotonic=7984306
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=no
OnFailureJobMode=replace
IgnoreOnIsolate=no
IgnoreOnSnapshot=no
NeedDaemonReload=no
JobTimeoutUSec=0
ConditionResult=yes
ConditionTimestamp=Mon 2016-11-14 19:39:41 CET
ConditionTimestampMonotonic=7540218
Transient=no
 
If you are not using ZFS, you can uninstall the "zfsutils" package, upgrade, and then reinstall it. Otherwise you will need to find a workaround, e.g. the following:

Code:
systemctl cat zfs-mount > /etc/systemd/system/zfs-mount.service
echo "" >> /etc/systemd/system/zfs-mount.service
echo "[Install]" >> /etc/systemd/system/zfs-mount.service
echo "WantedBy=zfs.target" >> /etc/systemd/system/zfs-mount.service
systemctl daemon-reload

Then try the upgrade, and afterwards remove the workaround again with "rm /etc/systemd/system/zfs-mount.service && systemctl daemon-reload".
 
Thanks Fabian,

I am using ZFS.
After implementing the workaround, I get the same behaviour. Details below.

root@f:/var/log# systemctl enable zfs-mount.service
Created symlink from /etc/systemd/system/zfs.target.wants/zfs-mount.service to /etc/systemd/system/zfs-mount.service.


root@f:/var/log# more /etc/systemd/system/zfs-mount.service
# /lib/systemd/system/zfs-mount.service
[Unit]
Description=Mount ZFS filesystems
DefaultDependencies=no
Wants=zfs-import-cache.service
Wants=zfs-import-scan.service
Requires=systemd-udev-settle.service
After=systemd-udev-settle.service
After=zfs-import-cache.service
After=zfs-import-scan.service
Before=local-fs.target
Before=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zfs mount -a


[Install]
WantedBy=zfs.target


root@f:/var/log# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/etc/systemd/system/zfs-mount.service; enabled)
Active: active (exited) since Mon 2016-11-14 19:39:41 CET; 6 months 2 days ago
Main PID: 2647 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/zfs-mount.service


Error message:
Preconfiguring packages ...
Setting up udev (215-17+deb8u7) ...
addgroup: The group `input' already exists as a system group. Exiting.
update-initramfs: deferring update (trigger activated)
insserv: Service zfs-mount has to be enabled to start service udev-finish
insserv: exiting now!
update-rc.d: error: insserv rejected the script header
dpkg: error processing package udev (--configure):
subprocess installed post-installation script returned error exit status 1
Processing triggers for initramfs-tools (0.120+deb8u2) ...
update-initramfs: Generating /boot/initrd.img-4.4.6-1-pve
Errors were encountered while processing:
udev
E: Sub-process /usr/bin/dpkg returned an error code (1)
 
That seems rather strange. Two more things you could try:
  • temporarily remove "Before=local-fs.target" from your copy of zfs-mount.service, run "systemctl daemon-reload", try to finish the upgrade, then remove the copy and reload again
  • temporarily "chmod a-x /etc/init.d/udev-finish", finish the upgrade, then "chmod a+x /etc/init.d/udev-finish", followed by "systemctl daemon-reload" and "update-rc.d udev-finish defaults" (see the sketch below)
 
Great! One step ahead.
The second one (the chmod) did it; the update went through.
I got the same error four times, e.g.

insserv: Service zfs-mount has to be enabled to start service rpcbind
insserv: exiting now!

for the following packages:

rpcbind
dbus
open-iscsi
postfix

So I tried the same workaround (chmod ...) on these and succeeded for two of them, roughly the loop sketched below. What's the English word for "hemdsärmelig"?
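Roughly, what I did (a sketch; assumes all four init scripts live under /etc/init.d and the upgrade is resumed with apt-get):

Code:
for svc in rpcbind dbus open-iscsi postfix; do
    chmod a-x /etc/init.d/$svc      # hide the init script from insserv
done
apt-get dist-upgrade                # retry the upgrade
for svc in rpcbind dbus open-iscsi postfix; do
    chmod a+x /etc/init.d/$svc      # restore the script
    update-rc.d $svc defaults       # re-register with insserv
done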

Still, open-iscsi and postfix do not upgrade.


Setting up dbus (1.8.22-0+deb8u1) ...
Setting up rpcbind (0.2.1-6+deb8u2) ...
Setting up open-iscsi (2.0.873+git0.3b4b4500-8+deb8u2) ...
dpkg: error processing package open-iscsi (--configure):
subprocess installed post-installation script returned error exit status 102
Setting up postfix (2.11.3-1+deb8u2) ...
insserv: script postfix is not an executable regular file, skipped!

Postfix configuration was untouched. If you need to make changes, edit
/etc/postfix/main.cf (and others) as needed. To view Postfix configuration
values, see postconf(1).

After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.

Running newaliases
dpkg: error processing package postfix (--configure):
subprocess installed post-installation script returned error exit status 102
Processing triggers for libc-bin (2.19-18+deb8u9) ...
Errors were encountered while processing:
open-iscsi
postfix
E: Sub-process /usr/bin/dpkg returned an error code (1)
 
Okay, since both of those packages only restart their services at the end of their postinst scripts, you should be able to get away with (temporarily) adding "exit 0" as the second line of their respective postinst files in /var/lib/dpkg/info. Then finish the configure step and remove that line again (and revert the other workarounds ;)).
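A rough sketch of that (the sed invocations assume GNU sed and the standard postinst file names under /var/lib/dpkg/info):

Code:
# insert "exit 0" as the second line, so each postinst returns success immediately
sed -i '2i exit 0' /var/lib/dpkg/info/open-iscsi.postinst
sed -i '2i exit 0' /var/lib/dpkg/info/postfix.postinst
dpkg --configure -a                 # finish the configure step
# afterwards, delete the inserted line again
sed -i '2{/^exit 0$/d}' /var/lib/dpkg/info/open-iscsi.postinst
sed -i '2{/^exit 0$/d}' /var/lib/dpkg/info/postfix.postinst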

I do wonder why your PVE packages are so outdated if you just attempted an upgrade - possibly your PVE repository configuration is wrong?
 
Great! Everything works, all workarounds removed.

About the repo: you're right, the pve-enterprise repo was installed but we're using the community version.
I will switch to the no-subscription repo and update to 4.4 (unless somebody tells me not to since it might be dangerous).
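For reference, the switch amounts to something like this (a sketch for jessie / PVE 4.x; check the Proxmox wiki for the current repository line):

Code:
# disable the enterprise repo
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
# enable the no-subscription repo
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update && apt-get dist-upgrade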

Many thanks for your support. Really helpful and really quick!
 
