[SOLVED] Unable to mount zfs VM disk

Code:
rootfs: nvme_cluster:nvme_cluster/subvol-122-disk-0,size=32G
It should be
nvme_cluster:subvol-122-disk-0

The storage configuration says that the storage with ID nvme_cluster is mounted at /nvme_cluster. The volume name then indicates how the volume is named on the storage; it is not a path.
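If in doubt, pvesm can resolve a volume ID to the path it maps to on disk (the output below assumes the pool is mounted at /nvme_cluster as described):

Code:
root@pve:~# pvesm path nvme_cluster:subvol-122-disk-0
/nvme_cluster/subvol-122-disk-0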
 
Now the CT restarts in a loop at startup, and when I try to stop it with pct stop 122:

Code:
root@pve:~# pct stop 122
trying to acquire lock...
can't lock file '/run/lock/lxc/pve-config-122.lock' - got timeout
 
You can check what is currently holding the lock with fuser -vau /run/lock/lxc/pve-config-122.lock. What exactly do you see in the loop? If you do manage to stop it, you can try again with pct start 122 --debug to get more information.
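In command form (a sketch; only remove the lock file if fuser shows nothing holding it):

Code:
# show which process, if any, holds the container config lock
fuser -vau /run/lock/lxc/pve-config-122.lock
# if the lock turns out to be stale (no holder), it can be removed before retrying
# rm /run/lock/lxc/pve-config-122.lock
pct start 122 --debug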
 

Code:
root@pve:~# pct start 122 --debug
run_buffer: 322 Script exited with status 110
lxc_init: 844 Failed to run lxc.hook.pre-start for container "122"
__lxc_start: 2027 Failed to initialize container "122"
0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "122", config section "lxc"
DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 122 lxc pre-start produced output: rbd error: rbd: couldn't connect to the cluster!

ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 110
ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "122"
ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "122"
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "122", config section "lxc"
startup for container '122' failed
root@pve:~#
 
Code:
DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 122 lxc pre-start produced output: rbd error: rbd: couldn't connect to the cluster!
Please check the RBD storage configuration and whether the Ceph cluster is up and running and you can access it.
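For example (assuming the Ceph client tools are installed on this node; the commands are standard, the output will vary):

Code:
# overall cluster health as seen from this node
ceph -s
# Proxmox's own view of the Ceph services
pveceph status
# the rbd entry in the storage config should point at a reachable pool
cat /etc/pve/storage.cfg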
 
Code:
nano /etc/pve/storage.cfg

dir: pve_nvme
        path /mnt/pve/pve_nvme
        content backup,images,snippets,iso,rootdir,vztmpl
        is_mountpoint 1
        nodes pve

zfspool: nvme_cluster
        pool nvme_cluster
        content rootdir,images
        mountpoint /nvme_cluster
        nodes pve,pvedist,pve1
        sparse 1
Looking at your storage config, your ZFS nvme_cluster pool must have its mountpoint set to /nvme_cluster.
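That can be verified directly on the node (zfs get reads the dataset properties):

Code:
# should report /nvme_cluster
zfs get mountpoint nvme_cluster
# and confirm the dataset is actually mounted there
zfs get mounted nvme_cluster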

Code:
# /etc/pve/lxc/122.conf
arch: amd64
cores: 4
features: fuse=1,nesting=1
hostname: NextcloudLXC
memory: 4096
mp0: CephBank:vm-122-disk-0,mp=/media/mp0/,backup=1,size=2000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.x,hwaddr=x:x:x:x:x:x,ip=x.x.x.x/24,type=veth
onboot: 1
ostype: debian
rootfs: nvme_cluster:nvme_cluster/subvol-122-disk-0,size=32G
swap: 2048
unprivileged: 1

rootfs should be: nvme_cluster:subvol-122-disk-0,size=32G
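One way to correct that entry without hand-editing the file (a sketch; pct set rewrites the option in the container config):

Code:
pct set 122 --rootfs nvme_cluster:subvol-122-disk-0,size=32G
# confirm the change
pct config 122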
 
Looking at your storage config, your ZFS nvme_cluster pool must have its mountpoint set to /nvme_cluster.
Yes
Code:
zfspool: nvme_cluster
        pool nvme_cluster
        content rootdir,images
        mountpoint /nvme_cluster
        nodes pve,pvedist,pve1
        sparse 1

rootfs should be: nvme_cluster:subvol-122-disk-0,size=32G
Yes
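After correcting the rootfs line, the container should start normally; worth verifying (pct status reports the current state):

Code:
pct start 122
pct status 122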
As for my last problem: I had also migrated that storage away from Ceph, because I only have 4 nodes (ultimately 3). Given the sensitivity of the stored data, I would rather leave it on a ZFS pool with replication every 3/6 hours.
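For reference, such a replication job can be configured with pvesr (a sketch; the job ID, target node, and schedule below are examples, using the node names from the storage config):

Code:
# replicate CT 122 to node pvedist every 3 hours (calendar-event syntax)
pvesr create-local-job 122-0 pvedist --schedule "*/3:00"
# list the configured replication jobs
pvesr list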

Thank you very much for your explanations; they helped me better understand this storage management part. :)
 
