[SOLVED] Failed to start LXC and VE, I/O error.

linaste

New Member
Aug 20, 2023
When I tried to start an LXC, the syslog showed:
Bash:
kernel: loop0: detected capacity change from 0 to 20971520
kernel: EXT4-fs warning (device loop0): ext4_multi_mount_protect:328: MMP interval 42 higher than expected, please wait.

CRON[21319]: pam_unix(cron:session): session closed for user root
kernel: loop: Write error at byte offset 37916672, length 4096.
kernel: I/O error, dev loop0, sector 74056 op 0x1:(WRITE) flags 0x3800 phys_seg 1 prio class 2
kernel: Buffer I/O error on dev loop0, logical block 9257, lost sync page write
pvestatd[1069]: unable to get PID for CT 104 (not running?)
pvedaemon[18764]: unable to get PID for CT 104 (not running?)
pvestatd[1069]: status update time (15.457 seconds)

Then I tried to start a VM, and a warning icon replaced the green check icon, showing "io-error".
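
For reference, the "io-error" state can also be read from the CLI, and the container start can be reproduced with full debug logging. A rough sketch, where VMID 100 is a placeholder (the failing VM's ID isn't shown) and 104 is the container from the log:
Bash:
# qmpstatus reflects QEMU's run state and should read "io-error" when the
# backing storage is returning errors; VMID 100 is a placeholder
qm status 100 --verbose | grep -E '^(status|qmpstatus)'

# Start the container in the foreground with verbose LXC debug output
lxc-start -n 104 -F -l DEBUG -o /tmp/lxc-104.log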

Manager Version: pve-manager/8.1.3/b46aac3b42da5d15
Kernel Version: Linux 6.5.11-4-pve (2023-11-20T10:19Z)
 
Hi,

Does only container 104 have this issue? Could you please post the config of container 104? pct config 104

Have you checked the health of your disks?
 
Hi,

Does only container 104 have this issue?
No, none of my guests (VMs and LXCs) can start properly. The LXCs show the same errors as above, and the VMs show an "io-error" status after launch (see the attached io.png).

Could you please post the config of container 104?
Yes. The full config is below:
Bash:
arch: amd64
cores: 4
hostname: 86-pt
memory: 2048
mp0: /mnt/Share,mp=/home/linaste/Share
nameserver: 192.168.1.1 2402:4e00::
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=DA:DE:12:D9:AB:CD,ip=192.168.1.2/24,ip6=auto,type=veth
onboot: 1
ostype: debian
rootfs: local-nvme:104/vm-104-disk-0.raw,size=10G
swap: 0
lxc.apparmor.profile: unconfined
lxc.mount.entry: /mnt/Share /home/linaste/Share none,bind,optional,shared,create=dir
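
Since the rootfs is a raw image on the local-nvme storage, the loop-device write errors point at whatever drive backs that storage. A rough way to test it; the image path is only a guess at where a directory storage named local-nvme would keep it:
Bash:
# Actual location is defined in /etc/pve/storage.cfg; this path is an assumption
ls -lh /mnt/local-nvme/images/104/vm-104-disk-0.raw

# Read the whole image once to surface I/O errors from the underlying drive
dd if=/mnt/local-nvme/images/104/vm-104-disk-0.raw of=/dev/null bs=1M status=progress

# Check the ext4 filesystem inside the container volume (container must be stopped)
pct fsck 104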

Have you checked the health of your disks?
I ran btrfs scrub status /dev/mmcblk0p2
Bash:
btrfs scrub status /dev/mmcblk0p2
Status:           running
Duration:         0:02:15
Time left:        0:01:57
Total to scrub:   26.65GiB
Bytes scrubbed:   14.25GiB  (53.46%)
Rate:             108.06MiB/s
Error summary:    no errors found
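
(Note: the scrub above was still running at ~53%, so the error summary was not yet final, and it only covers the filesystem on mmcblk0p2, not any separately mounted image store. A sketch for confirming the final result, assuming that filesystem is mounted at /:)
Bash:
# The error summary is only conclusive once Status shows "finished"
btrfs scrub status /

# Cumulative read/write/flush/corruption/generation error counters per device
btrfs device stats /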
 
I ran btrfs scrub status /dev/mmcblk0p2
I would check the disk health using smartctl instead.

Also, if you could provide us with the full syslog, that would help us identify what might be causing the issue.

Lastly, please provide us with the output of pveversion -v as well.
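
For reference, that information could be gathered roughly like this, assuming the system disk is /dev/mmcblk0 (device name and output file are placeholders):
Bash:
# SMART data for the system disk (eMMC devices expose little or nothing here)
smartctl -a /dev/mmcblk0

# Full journal for the current boot, saved to a file for attaching
journalctl -b > syslog-current-boot.txt

# Installed Proxmox package versions
pveversion -v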
 
Sorry for the delay.

I would check the disk health using smartctl instead.
Unfortunately, this system runs on eMMC, which doesn't support SMART self-test logging.

Instead, I checked the /sys/block/mmcblk0/device/life_time file, and it returned 0x01 0x02, so the device still seems to be in a healthy condition.
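
For context, eMMC wear indicators can be read from sysfs and, optionally, mmc-utils. A sketch, assuming the device is /dev/mmcblk0 as above:
Bash:
# JEDEC life-time estimates in 10% steps (0x01 = 0-10% used, 0x02 = 10-20%)
cat /sys/block/mmcblk0/device/life_time

# Pre-EOL indicator: 0x01 = normal, 0x02 = warning, 0x03 = urgent
cat /sys/block/mmcblk0/device/pre_eol_info

# More detail via mmc-utils, if installed
mmc extcsd read /dev/mmcblk0 | grep -iE 'life|eol'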

Also, if you could provide us with the full syslog that would help us to identify what might cause the issue.
The complete syslog since power-on is attached.

Lastly, please provide us with the output of pveversion -v as well.
Bash:
> pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2.16-14-pve: 6.2.16-14
proxmox-kernel-6.2.16-10-pve: 6.2.16-10
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx7
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.3
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.5
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1
 
It turns out that the drive containing one of my VM images had gone bad somehow; I had focused on the system drive and ignored it. I ran btrfs scrub start /mnt/local-nvme (/mnt/local-nvme is my image drive) and it reported an error. I removed the affected file and the problem was solved.
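
For anyone hitting the same symptoms, a sketch of that check, assuming the image store is a separate btrfs filesystem mounted at /mnt/local-nvme (the dmesg filter is only illustrative):
Bash:
# Scrub the btrfs filesystem that holds the guest images
btrfs scrub start /mnt/local-nvme
btrfs scrub status /mnt/local-nvme     # repeat until Status shows "finished"

# Per-device error counters accumulated by btrfs
btrfs device stats /mnt/local-nvme

# On checksum errors the kernel log often names the affected file
dmesg | grep -i btrfs | grep -iE 'checksum|corrupt'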
 
