Hi,
Just now I found that my server could not reach the network, so I did a hard restart. After the restart, one of the LXC containers on the server no longer starts; the error is:
------------
Job for pve-container@103.service failed because the control process exited with error code.
See "systemctl status pve-container@103.service" and "journalctl -xe" for details.
TASK ERROR: command 'systemctl start pve-container@103' failed: exit code 1
------------
I looked through threads on similar issues in the forum and made the following attempts.
------------
root@node1:~# lxc-start -lDEBUG -o YOURLOGFILE.log -F -n 103
lxc-start: 103: conf.c: run_buffer: 335 Script exited with status 25
lxc-start: 103: start.c: lxc_init: 861 Failed to run lxc.hook.pre-start for container "103"
lxc-start: 103: start.c: __lxc_start: 1944 Failed to initialize container "103"
lxc-start: 103: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: 103: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
------------
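From similar threads, the pre-start hook (the step that exits with status 25) is where PVE prepares the container's storage, so I also wanted to check whether the ZFS dataset behind the rootfs exists and is mounted. The pool/dataset path below is my assumption based on the default local-zfs layout, so adjust it to your setup:

```shell
# Check that the dataset backing CT 103 exists and is mounted
# (pool/dataset name is an assumption from the default local-zfs layout)
zfs list -o name,mountpoint,mounted rpool/data/subvol-103-disk-0
zfs mount rpool/data/subvol-103-disk-0   # mount it if it is not mounted yet
```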
------------
root@node1:~# pct mount 103
mounted CT 103 in '/var/lib/lxc/103/rootfs'
root@node1:~# pct unmount 103
root@node1:~# pct fsck 103 --force
unable to run fsck for 'local-zfs:subvol-103-disk-0' (format == subvol)
------------
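The fsck refusal makes sense to me in hindsight: a ZFS subvol is a dataset, not a disk image with its own inner filesystem, so there is nothing for fsck to check; ZFS verifies integrity at the pool level instead. If corruption is the suspicion, a scrub would be the closest equivalent (the pool name "rpool" here is my assumption):

```shell
# A ZFS subvol has no inner filesystem to fsck; integrity is checked
# pool-wide instead ("rpool" is an assumed pool name, adjust as needed)
zpool scrub rpool     # start a scrub of the whole pool
zpool status rpool    # shows scrub progress and any detected errors
```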
------------
root@node1:~# systemctl status -l zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Condition: start condition failed at Mon 2019-12-09 21:00:50 HST; 41min ago
└─ ConditionPathExists=/etc/zfs/zpool.cache was not met
Docs: man:zpool(8)
Dec 09 21:00:50 node1 systemd[1]: Condition check resulted in Import ZFS pools by cache file being skipped.
root@node1:~# systemctl status -l zfs-import-scan.service
● zfs-import-scan.service - Import ZFS pools by device scanning
Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:zpool(8)
------------
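Since zfs-import-cache was skipped (its ConditionPathExists=/etc/zfs/zpool.cache failed) and zfs-import-scan is disabled, I suspect the pool may not be imported automatically at boot. The condition itself is just a file test, which can be reproduced directly:

```shell
# Reproduce the unit's ConditionPathExists check: is the cache file there?
if [ -e /etc/zfs/zpool.cache ]; then
    cache_state="present"
else
    cache_state="missing"
fi
echo "zpool.cache is $cache_state"
```

If the pool is actually imported but the cache file is gone, `zpool set cachefile=/etc/zfs/zpool.cache <pool>` should regenerate it so zfs-import-cache works on the next boot (the pool name depends on the setup).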
------------
root@node1:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
------------
In addition, I tried cloning this container; after cloning, the new container still fails to start.
Can someone give me some advice?