Hello,
Code:
# pveversion -v
proxmox-ve: 5.1-31 (running kernel: 4.13.13-1-pve)
pve-manager: 5.1-40 (running version: 5.1-40/ea05b379)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.13-1-pve: 4.13.13-31
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
I want to remove an old container, but:
Code:
# pct start 514
Job for pve-container@514.service failed because the control process exited with error code.
See "systemctl status pve-container@514.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@514' failed: exit code 1
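(If it helps, I suppose the systemd side could be inspected with the commands the error suggests, e.g.:)
Code:
# systemctl status pve-container@514.service
# journalctl -xe -u pve-container@514.service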
Code:
# lxc-start -n 514 -F
lxc-start: 514: conf.c: run_buffer: 438 Script exited with status 32.
lxc-start: 514: start.c: lxc_init: 651 Failed to run lxc.hook.pre-start for container "514".
lxc-start: 514: start.c: __lxc_start: 1444 Failed to initialize container "514".
lxc-start: 514: tools/lxc_start.c: main: 371 The container failed to start.
lxc-start: 514: tools/lxc_start.c: main: 375 Additional information can be obtained by setting the --logfile and --logpriority options.
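(Presumably a more detailed log could be captured with the options mentioned in that last line, something like:)
Code:
# lxc-start -n 514 -F --logfile /tmp/lxc-514.log --logpriority DEBUG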
Code:
# pct mount 514
mount: special device /dev/pve/vm-514-disk-1 does not exist
Code:
# lvscan
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
...
inactive '/dev/pve/vm-514-disk-1' [30.00 GiB] inherit
...
Then, I activated the VG:
Code:
# vgchange -a y
14 logical volume(s) in volume group "pve" now active
root@alfirin:~# lvscan
ACTIVE '/dev/pve/swap' [8.00 GiB] inherit
ACTIVE '/dev/pve/vm-514-disk-1' [30.00 GiB] inherit
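(I assume activating just that one LV instead of the whole VG would also work, something along the lines of:)
Code:
# lvchange -ay pve/vm-514-disk-1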
After that, the container can be started or stopped, but not destroyed:
Code:
# pct destroy 514
device-mapper: message ioctl on (253:4) failed: Operation not supported
Failed to process thin pool message "delete 7".
Failed to suspend pve/data with queued messages.
lvremove 'pve/vm-514-disk-1' error: Failed to update pool pve/data.
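(In case it is useful, I guess the state of the thin pool and the device-mapper device behind 253:4 could be checked with something like:)
Code:
# lvs -a -o name,lv_attr,pool_lv,data_percent,metadata_percent pve
# dmsetup info -c | grep pve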
And the LV becomes inactive again...
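(Could something in /etc/lvm/lvm.conf be deactivating it again? I suppose the activation settings could be checked with:)
Code:
# grep -n 'volume_list' /etc/lvm/lvm.conf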
Any idea?