I do.

If you run the following command, can you see the container in the list?

pct list
Output of pveversion -v:

proxmox-ve: 6.4-1 (running kernel: 5.4.151-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-10
pve-kernel-helper: 6.4-10
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.151-1-pve: 5.4.151-1
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve1~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1
Output of pct config 108:

arch: amd64
cores: 1
hostname: device-network-scanner
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,gw=<ip>,hwaddr=<mac>,ip=<ip>,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-108-disk-0,size=8G
swap: 512
unprivileged: 1
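(Side note: local-lvm:vm-108-disk-0 is the LV vm-108-disk-0 in the pve volume group, which is what shows up below as /dev/mapper/pve-vm--108--disk--0 and /dev/dm-8. Assuming the default thin-LVM setup, something like

lvs pve/vm-108-disk-0

should list it as an 8 GiB thin volume.)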
Trying to start the container fails with:

run_buffer: 314 Script exited with status 32
lxc_init: 798 Failed to run lxc.hook.pre-start for container "108"
__lxc_start: 1945 Failed to initialize container "108"
type g nsid 0 hostid 100000 range 65536
INFO lsm - lsm/lsm.c:lsm_init:40 - Initialized LSM security driver AppArmor
INFO conf - conf.c:run_script_argv:331 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "108", config section "lxc"
DEBUG conf - conf.c:run_buffer:303 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 108 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--108--disk--0, missing codepage or helper program, or other error.
DEBUG conf - conf.c:run_buffer:303 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 108 lxc pre-start produced output: command 'mount /dev/dm-8 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32
ERROR conf - conf.c:run_buffer:314 - Script exited with status 32
ERROR start - start.c:lxc_init:798 - Failed to run lxc.hook.pre-start for container "108"
ERROR start - start.c:__lxc_start:1945 - Failed to initialize container "108"
INFO conf - conf.c:run_script_argv:331 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "108", config section "lxc"
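(For reference, a debug log like the one above can usually be captured with something along these lines; the log path is just an example:

lxc-start -n 108 -F -l DEBUG -o /tmp/lxc-108.log

-F keeps the container in the foreground, and -l DEBUG -o writes the debug output to the given file.)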
Does

pct mount 108

also fail with the same error? Seems like the logical volume, or the filesystem on it, might have been corrupted. What is the output of

lsblk -o NAME,FSTYPE /dev/mapper/pve-vm--108--disk--0

? It should show ext4. If it does, you can try running

fsck.ext4 /dev/mapper/pve-vm--108--disk--0

(Make a copy of the volume first if you want to be extra careful.)

pct mount 108
mount: /var/lib/lxc/108/rootfs: wrong fs type, bad option, bad superblock on /dev/mapper/pve-vm--108--disk--0, missing codepage or helper program, or other error.
mounting container failed
command 'mount /dev/dm-8 /var/lib/lxc/108/rootfs//' failed: exit code 32
lsblk -o NAME,FSTYPE /dev/mapper/pve-vm--108--disk--0
NAME FSTYPE
pve-vm--108--disk--0
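(An empty FSTYPE column means lsblk finds no recognizable filesystem signature on the volume, which fits a damaged superblock. For a second opinion, something like

file -s /dev/mapper/pve-vm--108--disk--0

would most likely just report "data" here instead of an ext4 signature.)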
fsck.ext4 /dev/mapper/pve-vm--108--disk--0
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
Superblock has invalid MMP magic. Fix<y>?
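(If you'd rather see what e2fsck would do before letting it write anything, it has a read-only mode that answers "no" to every prompt:

fsck.ext4 -n /dev/mapper/pve-vm--108--disk--0

The -n flag opens the filesystem read-only, so nothing is modified.)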
Should I do this?

Yes, certainly worth a try. I'd first make a copy of the volume to be sure, e.g.

dd if=/dev/mapper/pve-vm--108--disk--0 of=</dir/with/enough/space>vm-108-disk-0.backup
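(With GNU dd, adding a block size and progress output makes this a lot more pleasant, e.g.:

dd if=/dev/mapper/pve-vm--108--disk--0 of=</dir/with/enough/space>vm-108-disk-0.backup bs=4M status=progress

The target directory needs at least 8 GiB free, since the whole volume is copied regardless of how full the filesystem was.)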
fsck.ext4 /dev/mapper/pve-vm--108--disk--0
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
Superblock has invalid MMP magic. Fix<y>? yes
Superblock has an invalid journal (inode 8).
Clear<y>? yes
*** journal has been deleted ***
The filesystem size (according to the superblock) is 13107200 blocks
The physical size of the device is 2097152 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
/dev/mapper/pve-vm--108--disk--0: ***** FILE SYSTEM WAS MODIFIED *****
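(A quick sanity check on those numbers, assuming the usual 4 KiB ext4 block size: 2097152 blocks × 4 KiB = 8 GiB, which matches the configured rootfs size=8G, while the 13107200 blocks claimed by the superblock would be 50 GiB. So the backup superblock e2fsck found doesn't appear to describe this 8 GiB volume at all.)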