[SOLVED] Container does not start after upgrade

iruindegi

Well-Known Member
Aug 26, 2016
52
0
46
Zarautz
Hi,
After upgrading to Proxmox 6.2-10 I created a new CT with the Debian 10 template, but it is not working. This is the log output when I launch it in debug mode with:

Code:
lxc-start -n 101 -F -l DEBUG -o /tmp/101.log

Code:
lxc-start 101 20200801083912.726 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type u nsid 0 hostid 100000 range 65536
lxc-start 101 20200801083912.727 INFO     confile - confile.c:set_config_idmaps:2051 - Read uid map: type g nsid 0 hostid 100000 range 65536
lxc-start 101 20200801083912.728 INFO     lsm - lsm/lsm.c:lsm_init:29 - LSM security driver AppArmor
lxc-start 101 20200801083912.728 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "101", config section "lxc"
lxc-start 101 20200801083914.147 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: mount:
lxc-start 101 20200801083914.147 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: /var/lib/lxc/.pve-staged-mounts/rootfs: can't read superblock on /dev/mapper/pve-vm--101--disk--0.
lxc-start 101 20200801083914.147 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output:

lxc-start 101 20200801083914.155 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 101 lxc pre-start produced output: command 'mount /dev/dm-8 /var/lib/lxc/.pve-staged-mounts/rootfs' failed: exit code 32

lxc-start 101 20200801083914.188 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 32
lxc-start 101 20200801083914.188 ERROR    start - start.c:lxc_init:804 - Failed to run lxc.hook.pre-start for container "101"
lxc-start 101 20200801083914.188 ERROR    start - start.c:__lxc_start:1903 - Failed to initialize container "101"
lxc-start 101 20200801083914.189 INFO     conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "101", config section "lxc"
lxc-start 101 20200801083915.620 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: umount:
lxc-start 101 20200801083915.621 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: /var/lib/lxc/101/rootfs: not mounted
lxc-start 101 20200801083915.622 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output:

lxc-start 101 20200801083915.626 DEBUG    conf - conf.c:run_buffer:312 - Script exec /usr/share/lxc/hooks/lxc-pve-poststop-hook 101 lxc post-stop produced output: command 'umount --recursive -- /var/lib/lxc/101/rootfs' failed: exit code 1

lxc-start 101 20200801083915.771 ERROR    conf - conf.c:run_buffer:323 - Script exited with status 1
lxc-start 101 20200801083915.772 ERROR    start - start.c:lxc_end:971 - Failed to run lxc.hook.post-stop for container "101"
lxc-start 101 20200801083915.772 ERROR    lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 101 20200801083915.772 ERROR    lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options

and this is the output of pct mount 101:

Code:
mount: /var/lib/lxc/101/rootfs: can't read superblock on /dev/mapper/pve-vm--101--disk--0.
mounting container failed
command 'mount /dev/dm-8 /var/lib/lxc/101/rootfs//' failed: exit code 32

Code:
➜  ~ lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotzD-   10.00g             100.00 50.65
  root          pve -wi-ao---- <214.61g
  swap          pve -wi-ao----    8.00g
  vm-100-disk-0 pve Vwi-a-tz--   80.00g data        12.32
  vm-101-disk-0 pve Vwi-a-tz--   16.00g data        0.90
  vm-222-disk-0 pve Vwi-aotz--    8.00g data        0.00

Code:
➜  ~ pct config 101
arch: amd64
cores: 1
hostname: pihole
lock: mounted
memory: 512
nameserver: 192.198.2.1
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=6E:46:9C:5E:09:1F,ip=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-101-disk-0,size=16G
swap: 512
unprivileged: 1

Code:
➜  ~ pct fsck 101
fsck from util-linux 2.33.1
/dev/mapper/pve-vm--101--disk--0: recovering journal
fsck.ext4: Input/output error while recovering journal of /dev/mapper/pve-vm--101--disk--0
/dev/mapper/pve-vm--101--disk--0 contains a file system with errors, check forced.
/dev/mapper/pve-vm--101--disk--0: Entry 'etc' in / (2) has deleted/unused inode 786433.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'run' in / (2) has deleted/unused inode 262145.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'proc' in / (2) has deleted/unused inode 393217.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'sys' in / (2) has deleted/unused inode 917505.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'home' in / (2) has deleted/unused inode 655361.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'bin' in / (2) has deleted/unused inode 131073.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'tmp' in / (2) has deleted/unused inode 393218.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'mnt' in / (2) has deleted/unused inode 786435.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'opt' in / (2) has deleted/unused inode 655362.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'usr' in / (2) has deleted/unused inode 917506.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'media' in / (2) has deleted/unused inode 786436.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'dev' in / (2) has deleted/unused inode 262148.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'sbin' in / (2) has deleted/unused inode 393219.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'lib64' in / (2) has deleted/unused inode 655363.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'srv' in / (2) has deleted/unused inode 655365.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'root' in / (2) has deleted/unused inode 786437.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'boot' in / (2) has deleted/unused inode 262149.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'lib' in / (2) has deleted/unused inode 393324.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry 'var' in / (2) has deleted/unused inode 262150.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry '..' in <927846>/<12> (12) has deleted/unused inode 927846.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry '..' in <927846>/<14> (14) has deleted/unused inode 927846.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry '..' in <927846>/<16> (16) has deleted/unused inode 927846.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry '..' in <927846>/<18> (18) has deleted/unused inode 927846.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry '..' in <927846>/<20> (20) has deleted/unused inode 927846.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Entry '..' in <927846>/<22> (22) has deleted/unused inode 927846.  CLEARED.
/dev/mapper/pve-vm--101--disk--0: Directory inode 24, block #0, offset 0: directory has no checksum.
FIXED.
/dev/mapper/pve-vm--101--disk--0: Directory inode 24, block #0, offset 0: directory corrupted


/dev/mapper/pve-vm--101--disk--0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
command 'fsck -a -l /dev/pve/vm-101-disk-0' failed: exit code 4
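
As a hedged aside, the manual run the output asks for would look like the sketch below; run it with the container stopped and the volume unmounted, and note it may keep failing while the underlying storage is returning I/O errors:
Code:
# run fsck manually, i.e. without -a or -p, as the fsck output suggests
fsck -f /dev/pve/vm-101-disk-0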

Any help or clue?
 
Hi,

It looks like there's a problem with the disk.

Can you post the output of:
Code:
lsblk -f
lvs -a
vgs -a
 
Hi, here you go:
Code:
➜  ~ lsblk -f
NAME                         FSTYPE            LABEL    UUID                                   FSAVAIL FSUSE% MOUNTPOINT
sda                          zfs_member
├─sda1                       linux_raid_member          879004fa-7005-3ac9-3017-a5a8c86610be
├─sda2                       linux_raid_member          e47655d9-730f-9317-2449-9218d5782fff
└─sda3                       linux_raid_member Obelix:2 a9f61a05-69c3-41bf-51b9-dcf08b1c714a
sdb
├─sdb1                       linux_raid_member          879004fa-7005-3ac9-3017-a5a8c86610be
├─sdb2                       linux_raid_member          e47655d9-730f-9317-2449-9218d5782fff
└─sdb3                       linux_raid_member Obelix:3 d7596c93-6ff7-8106-b7a7-8d01ae232c02
sdc
├─sdc1                       linux_raid_member          879004fa-7005-3ac9-3017-a5a8c86610be
├─sdc2                       linux_raid_member          e47655d9-730f-9317-2449-9218d5782fff
└─sdc3                       linux_raid_member Obelix:4 5ab5eeff-84ec-ad37-c201-fb5c020c43bf
sdd
├─sdd1                       linux_raid_member          879004fa-7005-3ac9-3017-a5a8c86610be
├─sdd2                       linux_raid_member          e47655d9-730f-9317-2449-9218d5782fff
└─sdd3                       linux_raid_member Obelix:5 ba068069-9943-3011-6003-4a86b38b25bb
sde
└─sde1                       ext2                       fa8f3721-cd77-4ee4-a2fa-7e6960d9ff51
sdf
├─sdf1
├─sdf2                       vfat                       4A75-7625
└─sdf3                       LVM2_member                LS4Sy3-mzfT-6IHA-7xPM-YsZ0-MK9U-HqosNU
  ├─pve-root                 ext4                       71d02c95-043f-456b-b4f5-000b51c8a364    182.1G    10% /
  ├─pve-swap                 swap                       51051cab-cde7-4c8b-a277-21b3a82bc3a6                  [SWAP]
  ├─pve-data_tmeta
  │ └─pve-data-tpool
  │   ├─pve-data
  │   ├─pve-vm--100--disk--0
  │   ├─pve-vm--222--disk--0
  │   └─pve-vm--101--disk--0 ext4                       6c64572c-7ce7-4855-9067-b5a0ae462fde
  └─pve-data_tdata
    └─pve-data-tpool
      ├─pve-data
      ├─pve-vm--100--disk--0
      ├─pve-vm--222--disk--0
      └─pve-vm--101--disk--0 ext4                       6c64572c-7ce7-4855-9067-b5a0ae462fde

Code:
➜  ~ lvs -a
  LV              VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz--   10.00g             100.00 50.65
  [data_tdata]    pve Twi-ao----   10.00g
  [data_tmeta]    pve ewi-ao----   12.00m
  [lvol0_pmspare] pve ewi-------   12.00m
  root            pve -wi-ao---- <214.61g
  swap            pve -wi-ao----    8.00g
  vm-100-disk-0   pve Vwi-a-tz--   80.00g data        12.32
  vm-101-disk-0   pve Vwi-a-tz--   16.00g data        0.90
  vm-222-disk-0   pve Vwi-aotz--    8.00g data        0.00


Code:
➜  ~ vgs -a
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   6   0 wz--n- 232.63g    0
 
Are you having problems with the other containers, or is it just this one?

For example, if you run pct mount 222, does it work?

If none of them are working, it could be an issue with the physical disk, in which case I suggest you take a look at the smartctl -a output.
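
For reference, a minimal sketch of that check, assuming /dev/sdf is the disk backing the pve volume group (as the lsblk output above suggests):
Code:
# print SMART identity, health attributes and error logs for the disk holding the LVM PV
smartctl -a /dev/sdf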
 
Hi,
222 is not a container, it's a VM...
I spent some time with my coworker and he told me that he resized the pve-root partition, and the errors have appeared since then. I checked the history file and saw:

Code:
  475  pve5to6
  476  fdisk -l
  477  e2fsck -f /dev/mapper/pve-data
  478  ls /sys/block/
  479  lvdisplay
  480  ls /dev/mapper/pve-data
  481  ls /dev/mapper/pve-data
  482  mount | grep pve-data
  483  lvremove /dev/pve/data -y
  484  lvcreate -L 10G - data pve -T
  485  lvcreate -L 10G -n data pve -T
  486  lvcreate -L 10G -n data pve
  487  lvcreate -L 10G -n data pve -Tlvresize -l +100%FREE /dev/pve/root
  488  lvcreate -L 10G -n data pve -Tlvresize --l +100%FREE /dev/pve/root
  489  resize2fs /dev/mapper/pve-root
  490  lvresize -l +100%FREE /dev/pve/root
  491  resize2fs /dev/mapper/pve-root

The problem appeared when he tried to upgrade Proxmox from 5 to 6: the pve5to6 script failed because there was only 1% free space.

I think this is where the problem is, but how do I solve it?
 
Could you try creating another container with the same settings and see if it works?
 
I created a new CT (8 GB) and in the creation log I see:
Code:
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "vm-101-disk-0" created.
WARNING: Sum of all thin volume sizes (96.00 GiB) exceeds the size of thin pool pve/data (no free space in volume group).
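
As a hedged aside, the autoextend behaviour these warnings refer to is configured in /etc/lvm/lvm.conf (the values below are illustrative, not taken from this thread). Note that autoextension only helps while the volume group still has free extents, and the vgs output above shows VFree 0 here.
Code:
# /etc/lvm/lvm.conf, "activation" section -- illustrative values:
# grow a thin pool by 20% of its size once it reaches 80% usage
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 20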

The CT is created correctly, but it does not start:
Code:
➜  ~ pct start 101
➜  ~ pct mount 101
mount: /var/lib/lxc/101/rootfs: can't read superblock on /dev/mapper/pve-vm--101--disk--0.
mounting container failed
command 'mount /dev/dm-8 /var/lib/lxc/101/rootfs//' failed: exit code 32
 
Okay, so it seems your data volume is full, and that's why it's failing.

Your colleague resized the data volume to 10G, which is clearly not enough for your use case; the freed space went to root instead.

This will not fix your shortage of space, but you can try creating a directory storage somewhere on your filesystem and moving your container disks there (since root has free space, a directory storage can be used). Moving the container disks can be done in the GUI.
Your containers should work afterwards.
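
For reference, a rough CLI equivalent (a sketch only: the storage name and path are illustrative assumptions, and pct move_volume is the PVE 6.x spelling):
Code:
# create a directory on the root filesystem and register it as a storage for container disks
mkdir -p /srv/ct-disks
pvesm add dir ct-disks --path /srv/ct-disks --content rootdir
# move the container's root disk onto the new storage
pct move_volume 101 rootfs ct-disks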
 
Great. You can mark the thread as [SOLVED] so others know what to expect :)
 
