About 2 years ago one of my friends helped me set up a server to store some files and keep a backup just in case.
I haven't used it in quite some time, and a few days ago, when I wanted to upload some files to it, the CT wouldn't start for some reason.
I tried to fix it but realized I have absolutely no idea what I'm doing. I watched some videos and searched the forum for solutions, but I think I just made the problem worse...
I'm at my limit and on the verge of a stroke. If someone would be so kind as to help me just this once, I promise I will go back to TrueNAS and swear I will never dare touch Proxmox again.
Current disk layout and LVM state on the host:
Code:
NAME                             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                8:0    0 931.5G  0 disk
├─sda1                             8:1    0 931.5G  0 part
└─sda9                             8:9    0     8M  0 part
sdb                                8:16   0 931.5G  0 disk
└─sdb1                             8:17   0 931.5G  0 part /mnt/pve/Backup
sdc                                8:32   0 447.1G  0 disk
├─sdc1                             8:33   0  1007K  0 part
├─sdc2                             8:34   0     1G  0 part
└─sdc3                             8:35   0 446.1G  0 part
  ├─pve-swap                     253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                     253:1    0    98G  0 lvm  /
  ├─pve-data_tmeta               253:2    0   3.3G  0 lvm
  │ └─pve-data-tpool             253:4    0 319.6G  0 lvm
  │   ├─pve-data                 253:5    0 319.6G  1 lvm
  │   └─pve-base--100--disk--1   253:6    0   965G  1 lvm
  └─pve-data_tdata               253:3    0 319.6G  0 lvm
    └─pve-data-tpool             253:4    0 319.6G  0 lvm
      ├─pve-data                 253:5    0 319.6G  1 lvm
      └─pve-base--100--disk--1   253:6    0   965G  1 lvm
Code:
LV              VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
base-100-disk-0 pve Vri---tz-k   10.00g data
base-100-disk-1 pve Vri-a-tz-k  965.00g data        33.01
data            pve twi-aotzD- <319.61g             100.00 3.90
root            pve -wi-ao----   98.00g
swap            pve -wi-ao----    8.00g
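If I'm reading the lvs output right, the thin pool pve/data shows 100% data usage and a 'D' in its attributes, so I guess the pool itself is completely full. This is how I've been checking it, assuming the volume group is really called pve as shown above:
Code:
# show free space left in the volume group and usage of the thin pool
vgs pve
lvs -a pve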
Original error:
Code:
run_buffer: 322 Script exited with status 32
lxc_init: 844 Failed to run lxc.hook.pre-start for container "100"
__lxc_start: 2027 Failed to initialize container "100"
TASK ERROR: startup for container '100' failed
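That's all the task log gives me. If more detail is needed I can try to grab a foreground debug log; from what I've read on the forum, something like this should show why the pre-start hook fails (assuming container ID 100):
Code:
# run the container in the foreground with full debug logging written to a file
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log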
I tried to give it more space:
Code:
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Size of logical volume pve/vm-100-disk-0 changed from 8.00 GiB (2048 extents) to 10.00 GiB (2560 extents).
Logical volume pve/vm-100-disk-0 successfully resized.
WARNING: Sum of all thin volume sizes (975.00 GiB) exceeds the size of thin pool pve/data and the size of whole volume group (<446.13 GiB).
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/pve/vm-100-disk-0: 28037/524288 files (0.1% non-contiguous), 301725/2097152 blocks
resize2fs 1.46.5 (30-Dec-2021)
resize2fs: Input/output error while trying to resize /dev/pve/vm-100-disk-0
Please run 'e2fsck -fy /dev/pve/vm-100-disk-0' to fix the filesystem
after the aborted resize operation.
Resizing the filesystem on /dev/pve/vm-100-disk-0 to 2621440 (4k) blocks.
Failed to update the container's filesystem: command 'resize2fs /dev/pve/vm-100-disk-0' failed: exit code 1
TASK OK
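For reference, I believe the resize itself boils down to something like this (I may have used the equivalent button in the web UI instead; the 8 GiB to 10 GiB change in the log above matches a +2G resize):
Code:
# grow the root disk of CT 100 by 2 GiB
pct resize 100 rootfs +2G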
Then I converted the CT into a template, so now I can't even try to start it and can't convert it back. I have a few backups, but I don't know which is which, and there is data on the container from after the last backup, so restoring one would mean losing some data.
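From what I could piece together, undoing the template conversion by hand would look roughly like the sketch below, but I haven't dared to run any of it and would really appreciate confirmation first. The container ID (100), volume group (pve) and LV names are taken from the output above; everything else is my guess:
Code:
# remove the template flag from the container config
sed -i '/^template:/d' /etc/pve/lxc/100.conf
# rename the read-only base volumes back to normal vm volumes
lvrename pve/base-100-disk-0 pve/vm-100-disk-0
lvrename pve/base-100-disk-1 pve/vm-100-disk-1
# make them writable again and clear the activation-skip flag
# (the 'r' and 'k' in the Attr column above suggest both are set)
lvchange --permission rw --setactivationskip n pve/vm-100-disk-0
lvchange --permission rw --setactivationskip n pve/vm-100-disk-1
# point the config at the renamed volumes
sed -i 's/base-100-disk-/vm-100-disk-/g' /etc/pve/lxc/100.conf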