Systemctl status 'degraded' in unprivileged container

Nic M

New Member
Mar 21, 2019
I need help getting containers working correctly in my installation.

I installed PVE recently and am now trying to create LXC containers. On logging in and checking systemd status, I get the following:
Code:
root@plxc-base:~# systemctl status
* plxc-base
    State: degraded
     Jobs: 0 queued
   Failed: 3 units
    Since: Fri 2019-03-22 01:29:23 UTC; 4min 41s ago
Here are the failed units:
Code:
root@plxc-base:~# systemctl --failed
  UNIT                          LOAD   ACTIVE SUB    DESCRIPTION
* sys-kernel-config.mount       loaded failed failed Configuration File System
* sys-kernel-debug.mount        loaded failed failed Debug File System
* systemd-journald-audit.socket loaded failed failed Journal Audit Socket

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

3 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
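
In case it helps, the per-unit failure reasons can usually be pulled from the journal. A small loop I would try (assuming journalctl is available and readable inside the container; it falls back to a placeholder message otherwise):

```shell
# Show recent journal entries for each failed unit (no pager, last 20 lines).
for u in sys-kernel-config.mount sys-kernel-debug.mount \
         systemd-journald-audit.socket; do
    echo "== $u =="
    journalctl -u "$u" --no-pager -n 20 2>/dev/null || echo "(no journal output for $u)"
done
```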

Another thing I noticed is this:
Code:
root@plxc-base:~# systemctl --user status
Failed to connect to bus: No such file or directory
Some searching around suggests this happens because the $XDG_RUNTIME_DIR variable is unset. What I don't know is whether that is also related to the units that failed to load.
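
For reference, a quick check of that theory (the /run/user/<uid> path is what a normal login session would provide via pam_systemd; this is just a diagnostic sketch):

```shell
# systemctl --user connects to a per-user D-Bus socket at $XDG_RUNTIME_DIR/bus;
# when the variable is unset there is nothing to connect to, hence the
# "Failed to connect to bus" error.
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-<unset>}"
# On a normal login session this would be /run/user/<uid>:
expected="/run/user/$(id -u)"
if [ -S "$expected/bus" ]; then
    echo "user bus socket exists at $expected/bus"
else
    echo "no user bus socket at $expected/bus"
fi
```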

I need help diagnosing what could cause these units to fail to activate. I am using ZFS as my file system, and the containers are created from the Debian 9.7 template in pveam.

Here are my installation details:
Code:
root@proxmox1:~# pveversion --verbose
proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.3-11 (running version: 5.3-11/d4907f84)
pve-kernel-4.15: 5.3-2
pve-kernel-4.15.18-11-pve: 4.15.18-34
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: not correctly installed
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-3
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-47
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-12
libpve-storage-perl: 5.0-39
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-23
pve-cluster: 5.0-33
pve-container: 2.0-35
pve-docs: 5.3-3
pve-edk2-firmware: 1.20181023-1
pve-firewall: 3.0-18
pve-firmware: 2.0-6
pve-ha-manager: 2.0-8
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 2.12.1-2
pve-xtermjs: 3.10.1-2
qemu-server: 5.0-47
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1

I could not use the PVE installer because I am using UEFI with ZFS on the boot drive, so I installed Debian first and then PVE on top of it.
 
How did you install PVE? From the Proxmox ISO or via the Debian method?

What does
Code:
systemctl status *.mount
print for you?
I installed via the Debian method. I am using UEFI with my Debian root on ZFS, and the recommended way to install PVE in that setup is the Debian method.
What does
Code:
systemctl status *.mount
print for you?
Here is the output on the PVE host:
Code:
root@proxmox1:~# systemctl status *.mount
Warning: pxmxrpool-vmdata-subvol\x2d101\x2ddisk\x2d0.mount changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: var-lib-nfs.mount changed on disk. Run 'systemctl daemon-reload' to reload units.
● sys-kernel-debug.mount - Debug File System
   Loaded: loaded (/lib/systemd/system/sys-kernel-debug.mount; static; vendor preset: enabled)
   Active: active (mounted) since Tue 2019-03-19 21:05:20 EDT; 2 days ago
    Where: /sys/kernel/debug
     What: debugfs
     Docs: 
  Process: 761 ExecMount=/bin/mount debugfs /sys/kernel/debug -t debugfs (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   Memory: 112.0K
      CPU: 2ms
   CGroup: /system.slice/sys-kernel-debug.mount

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

● pxmxrpool-vmdata-subvol\x2d101\x2ddisk\x2d0.mount - /pxmxrpool/vmdata/subvol-101-disk-0
   Loaded: loaded (/proc/self/mountinfo)
   Active: active (mounted) since Wed 2019-03-20 21:33:55 EDT; 1 day 10h ago
    Where: /pxmxrpool/vmdata/subvol-101-disk-0
     What: pxmxrpool/vmdata/subvol-101-disk-0
    Tasks: 0 (limit: 4915)
   Memory: 0B
      CPU: 0
   CGroup: /system.slice/pxmxrpool-vmdata-subvol\x2d101\x2ddisk\x2d0.mount

● var-lib-nfs.mount - /var/lib/nfs
   Loaded: loaded (/proc/self/mountinfo)
   Active: active (mounted) since Tue 2019-03-19 21:05:24 EDT; 2 days ago
    Where: /var/lib/nfs
     What: pxmxrpool/var/nfs
    Tasks: 0 (limit: 4915)
   Memory: 0B
      CPU: 0
   CGroup: /system.slice/var-lib-nfs.mount

● boot-efi1.mount - /boot/efi1
   Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
   Active: active (mounted) since Tue 2019-03-19 21:05:23 EDT; 2 days ago
    Where: /boot/efi1
     What: /dev/nvme1n1p3
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
  Process: 1438 ExecMount=/bin/mount /dev/disk/by-partuuid/1a171738-6fbd-4ba4-84e5-93ed3bf2309e /boot/efi1 -t vfat -o noatime (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   Memory: 100.0K
      CPU: 1ms
   CGroup: /system.slice/boot-efi1.mount

Mar 19 21:05:23 proxmox1 systemd[1]: Mounting /boot/efi1...
Mar 19 21:05:23 proxmox1 systemd[1]: Mounted /boot/efi1.
Warning: srv.mount changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: pxmxrpool-vmdata.mount changed on disk. Run 'systemctl daemon-reload' to reload units.
Warning: opt.mount changed on disk. Run 'systemctl daemon-reload' to reload units.

● dev-hugepages.mount - Huge Pages File System
   Loaded: loaded (/lib/systemd/system/dev-hugepages.mount; static; vendor preset: enabled)
   Active: active (mounted) since Tue 2019-03-19 21:05:20 EDT; 2 days ago
    Where: /dev/hugepages
     What: hugetlbfs
     Docs:
 
... I so wish systemd weren't this intrusive; clients complain about this exact issue.
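
If the goal is simply to get rid of the "degraded" state, one workaround I have seen suggested for unprivileged containers (an assumption on my part, not verified on this PVE version) is to mask the offending units inside the container and clear the failed state. A dry-run sketch:

```shell
# Units that typically cannot start in an unprivileged container.
UNITS="sys-kernel-config.mount sys-kernel-debug.mount systemd-journald-audit.socket"
# Dry run by default; set APPLY=1 inside the container to actually mask them.
if [ "${APPLY:-0}" = "1" ]; then
    systemctl mask $UNITS     # symlink the units to /dev/null so they never start
    systemctl reset-failed    # clear the failed state; status should read 'running'
else
    for u in $UNITS; do
        echo "would run: systemctl mask $u"
    done
    echo "would run: systemctl reset-failed"
fi
```

Masking (rather than disabling) is needed because these are static units with no [Install] section.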
 
