I have solved the problem. I'll leave the solution here in case it helps someone in the future:
In a terminal on the Proxmox node, enter the command:
Code:
root@proxmox:~# lsblk
NAME                                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                            8:0    0 465.8G  0 disk
└─sda1                                         8:1    0 465.8G  0 part /mnt/pve/Hdd500
sdb                                            8:16   0 465.8G  0 disk
├─sdb1                                         8:17   0   500M  0 part
└─sdb2                                         8:18   0 465.3G  0 part
sdc                                            8:32   0 238.5G  0 disk
└─sdc1                                         8:33   0 238.5G  0 part /mnt/pve/SSD250
sdd                                            8:48   0 465.8G  0 disk
├─sdd1                                         8:49   0  1007K  0 part
├─sdd2                                         8:50   0   512M  0 part /boot/efi
└─sdd3                                         8:51   0 465.3G  0 part
  ├─pve-swap                                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                           253:2    0   3.5G  0 lvm
  │ └─pve-data-tpool                         253:4    0 338.4G  0 lvm
  │   ├─pve-data                             253:5    0 338.4G  1 lvm
  │   ├─pve-vm--200--disk--0                 253:6    0    20G  0 lvm
  │   ├─pve-vm--100--disk--0                 253:7    0    42G  0 lvm
  │   ├─pve-vm--110--disk--0                 253:8    0     8G  0 lvm
  │   ├─pve-vm--201--disk--0                 253:9    0    32G  0 lvm
  │   ├─pve-vm--201--state--VoiceRecognition 253:10   0   4.5G  0 lvm
  │   └─pve-vm--201--state--Zybi             253:11   0   4.5G  0 lvm
  └─pve-data_tdata                           253:3    0 338.4G  0 lvm
    └─pve-data-tpool                         253:4    0 338.4G  0 lvm
      ├─pve-data                             253:5    0 338.4G  1 lvm
      ├─pve-vm--200--disk--0                 253:6    0    20G  0 lvm
      ├─pve-vm--100--disk--0                 253:7    0    42G  0 lvm
      ├─pve-vm--110--disk--0                 253:8    0     8G  0 lvm
      ├─pve-vm--201--disk--0                 253:9    0    32G  0 lvm
      ├─pve-vm--201--state--VoiceRecognition 253:10   0   4.5G  0 lvm
      └─pve-vm--201--state--Zybi             253:11   0   4.5G  0 lvm
sde                                            8:64   0   3.6T  0 disk
└─sde1                                         8:65   0   3.6T  0 part /mnt/mydisk
The problem is with partition sde1: unfortunately, it got mounted at an old mount point (/mnt/mydisk) instead of the expected /mnt/pve/Hdd4TB.
Fix it by remounting the partition at the correct path:
Code:
root@proxmox:~# umount /mnt/mydisk
root@proxmox:~# mount /dev/sde1 /mnt/pve/Hdd4TB
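Note that a manual mount like this does not survive a reboot. To make it persistent, an /etc/fstab entry can help — a sketch, where the UUID placeholder and the ext4 type are assumptions to be checked against what blkid actually reports for sde1:
Code:
root@proxmox:~# blkid /dev/sde1
# add a matching line to /etc/fstab, using the UUID and TYPE that blkid printed:
# UUID=<uuid-of-sde1> /mnt/pve/Hdd4TB ext4 defaults 0 2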
Check that everything is correct:
Code:
root@proxmox:~# lsblk
sdd                                            8:48   0 465.8G  0 disk
├─sdd1                                         8:49   0  1007K  0 part
├─sdd2                                         8:50   0   512M  0 part /boot/efi
└─sdd3                                         8:51   0 465.3G  0 part
  ├─pve-swap                                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta                           253:2    0   3.5G  0 lvm
  │ └─pve-data-tpool                         253:4    0 338.4G  0 lvm
  │   ├─pve-data                             253:5    0 338.4G  1 lvm
  │   ├─pve-vm--200--disk--0                 253:6    0    20G  0 lvm
  │   ├─pve-vm--100--disk--0                 253:7    0    42G  0 lvm
  │   ├─pve-vm--110--disk--0                 253:8    0     8G  0 lvm
  │   ├─pve-vm--201--disk--0                 253:9    0    32G  0 lvm
  │   ├─pve-vm--201--state--VoiceRecognition 253:10   0   4.5G  0 lvm
  │   └─pve-vm--201--state--Zybi             253:11   0   4.5G  0 lvm
  └─pve-data_tdata                           253:3    0 338.4G  0 lvm
    └─pve-data-tpool                         253:4    0 338.4G  0 lvm
      ├─pve-data                             253:5    0 338.4G  1 lvm
      ├─pve-vm--200--disk--0                 253:6    0    20G  0 lvm
      ├─pve-vm--100--disk--0                 253:7    0    42G  0 lvm
      ├─pve-vm--110--disk--0                 253:8    0     8G  0 lvm
      ├─pve-vm--201--disk--0                 253:9    0    32G  0 lvm
      ├─pve-vm--201--state--VoiceRecognition 253:10   0   4.5G  0 lvm
      └─pve-vm--201--state--Zybi             253:11   0   4.5G  0 lvm
sde                                            8:64   0   3.6T  0 disk
└─sde1                                         8:65   0   3.6T  0 part /mnt/pve/Hdd4TB
Now check the container's config file; on my system it is at
/etc/pve/nodes/proxmox/lxc/110.conf
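The same file can also be printed with the standard pct tool, if you prefer the CLI:
Code:
root@proxmox:~# pct config 110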
The container's data disk (the one on sde1) physically lives at:
/mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw
I'll include the container configuration here, just in case:
Code:
cores: 2
features: nesting=1
hostname: FileServerTurnKey
memory: 512
mp0: Hdd4TB:110/vm-110-disk-0.raw,mp=/mnt/mydata,backup=1,size=3726G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.1.1,hwaddr=0E:A0:DD:26:8E:E5,ip=192.168.1.115/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-110-disk-0,size=8G
swap: 512
unprivileged: 1
If you delete the mount point line
mp0: Hdd4TB:110/vm-110-disk-0.raw,mp=/mnt/mydata,backup=1,size=3726G
then the container starts and everything runs, but the files are of course not visible. So the disk must have picked up filesystem errors during the power outage.
Check the filesystem type (the disk was partitioned a long time ago and I don't remember exactly what is on it):
Code:
root@proxmox:~# file /mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw
/mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw: Linux rev 1.0 ext4 filesystem data, UUID=d7583d69-1d3a-4a65-97fc-dadca04788d0 (needs journal recovery) (extents) (64bit) (large files) (huge files)
The filesystem is ext4, great!
Run the check and repair on the image (with the container powered off, so the image is not in use), answering y at the prompts when asked:
Code:
root@proxmox:~# e2fsck -f /mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw
e2fsck 1.46.5 (30-Dec-2021)
Superblock MMP block checksum does not match. Fix<y>? yes
/mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>? yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (484326956, counted=484313326).
Fix<y>? yes
Free inodes count wrong (244039808, counted=244039795).
Fix<y>? yes
/mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw: ***** FILE SYSTEM WAS MODIFIED *****
/mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw: 147341/244187136 files (3.0% non-contiguous), 492435218/976748544 blocks
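Optionally, before going back to the container, the repaired image can be loop-mounted on the host to confirm the files are readable again (/mnt/test here is just a hypothetical scratch directory):
Code:
root@proxmox:~# mkdir -p /mnt/test
root@proxmox:~# mount -o loop /mnt/pve/Hdd4TB/images/110/vm-110-disk-0.raw /mnt/test
root@proxmox:~# ls /mnt/test
root@proxmox:~# umount /mnt/test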
With the container still powered off, add the mount point back to the configuration if you removed it earlier:
mp0: Hdd4TB:110/vm-110-disk-0.raw,mp=/mnt/mydata,backup=1,size=3726G
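Instead of editing the file by hand, the same mount point can be added back with pct set:
Code:
root@proxmox:~# pct set 110 -mp0 Hdd4TB:110/vm-110-disk-0.raw,mp=/mnt/mydata,backup=1,size=3726G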
Start the container, and be happy that everything works and all the files are intact.