[SOLVED] Unable to mount zfs VM disk

blastmun

New Member
Dec 27, 2023
After a mistake on my part, I had to restore a backup of my Proxmox host. In the meantime, however, I had created a ZFS storage and migrated 4 VMs to it (110, 113, 123 and 130). After the restart my ZFS storage was no longer visible, so I remounted it, but the disks of those VMs are no longer mounted and I don't see how to get them back.
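(For context, bringing the pool back was just a re-import along these lines; the exact commands may have differed:)

Bash:
# re-import the pool that disappeared after the restore and mount its filesystems
zpool import nvme_cluster
zfs mount -a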
Obviously they still occupy space on the ZFS storage, but as you can see in the zfs list output there is nothing under MOUNTPOINT.


Bash:
root@pve:~# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
nvme_cluster                     264G   114G   120K  /nvme_cluster
nvme_cluster/subvol-120-disk-0   490M  7.52G   490M  /nvme_cluster/subvol-120-disk-0
nvme_cluster/subvol-122-disk-0  1.90G  30.1G  1.90G  /nvme_cluster/subvol-122-disk-0
nvme_cluster/subvol-140-disk-0   467M  7.54G   467M  /nvme_cluster/subvol-140-disk-0
nvme_cluster/vm-110-disk-0      50.0G   146G  17.5G  -
nvme_cluster/vm-110-disk-1      3.05M   114G    56K  -
nvme_cluster/vm-110-disk-2      32.5G   146G    56K  -
nvme_cluster/vm-113-disk-0      22.0G   114G  22.0G  -
nvme_cluster/vm-123-disk-0      86.8G   179G  21.7G  -
nvme_cluster/vm-130-disk-0      35.0G   146G  2.52G  -
nvme_cluster/vm-130-disk-1      35.0G   146G  2.54G  -
 
Hi,
these are zvols which are virtual block devices. They cannot be mounted, because they are not filesystems (they might contain filesystems). But they can be used by VMs as virtual disks (and filesystems on them can be mounted within the VM). What is the exact issue/error you are seeing or what do you want to achieve?
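A quick way to see which datasets are filesystems and which are zvols, just as a sketch with standard ZFS commands:

Bash:
# filesystems have a mountpoint, volumes (zvols) do not
zfs list -r -o name,type,mountpoint nvme_cluster
# the zvols are exposed as block devices under /dev/zvol/<pool>/
ls -l /dev/zvol/nvme_cluster/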
 
Hello,
thank you for your explanation!
After restoring my backup, the VMs in question pointed back to my old LVM storage (pve_nvme), while in the meantime I had migrated them to the ZFS storage (nvme_cluster).
For the VMs, I had made a mistake when editing the config files: I forgot to remove the .qcow2 suffix. That part is fixed now.
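For illustration (the disk names and sizes here are just examples, not my exact config lines), the change was along these lines:

Code:
# before - still referencing the qcow2 image on the old directory storage
scsi0: pve_nvme:110/vm-110-disk-0.qcow2,size=50G
# after - referencing the zvol on the ZFS pool
scsi0: nvme_cluster:vm-110-disk-0,size=50G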
On the other hand, I'm still stuck on the CT side, where I can't remount the root storage.


In the CT config:
[screenshot]

Yet I can see my images just fine:
[screenshot]

[screenshot]


I just noticed that I can't even restore a CT onto the ZFS pool "nvme_cluster":

[screenshot]
 

Hello blastmun

Bash:
root@pve:~# zfs list

NAME USED AVAIL REFER MOUNTPOINT
.....
nvme_cluster/subvol-122-disk-0 1.90G 30.1G 1.90G /nvme_cluster/subvol-122-disk-0
.....

As I can see, this one is a ZFS filesystem, not a block volume.

ostype: debian
rootfs: nvme_cluster:vm-122-disk-0.raw,size=32G
swap: 2048
unprivileged: 1

This config points to a file, not to a dataset.

try to remove .raw

And make sure you can access the files under /nvme_cluster/subvol-122-disk-0 from the host.
 
BTW, nvme_cluster/vm-123-disk-0 can be accessed from the host as /dev/zvol/nvme_cluster/vm-123-disk-0 or /dev/zvol/nvme_cluster/pve/vm-123-disk-0 (depending on the storage config).

From there you can mount/format the virtual disk/block volume. Just make sure the VM using this storage is off when you do it.
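For example, something along these lines (just a sketch; the partition suffix depends on what the guest actually created inside the zvol):

Bash:
# list the block devices ZFS exposes for the zvol and its partitions
ls -l /dev/zvol/nvme_cluster/
# mount one partition read-only to inspect it (only while the VM is off)
mkdir -p /mnt/vm123
mount -o ro /dev/zvol/nvme_cluster/vm-123-disk-0-part1 /mnt/vm123
ls /mnt/vm123
umount /mnt/vm123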
 
try to remove .raw
and it's called subvol-..., not vm-....
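So the line should end up looking roughly like this (a sketch combining both corrections):

Code:
rootfs: nvme_cluster:subvol-122-disk-0,size=32G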

@blastmun please also share the storage configuration /etc/pve/storage.cfg if that doesn't help, the configuration in the backup you want to restore and the output of pveversion -v.
 
Hello blastmun
As I can see, this one is a ZFS filesystem, not a block volume.
This config points to a file, not to a dataset.
try to remove .raw
And make sure you can access the files under /nvme_cluster/subvol-122-disk-0 from the host.

Thank you for your response, I hadn't seen the notifications.

I do have access to /subvol-122-disk-0.

I modified the 122.conf file as follows, but it doesn't work. I don't have any other VM with subvols mounted to see how the configuration should be written.
rootfs: nvme_cluster:/subvol-122-disk-0,size=32G

Can you help me?

Code:
nano /etc/pve/lxc/122.conf

arch: amd64
cores: 4
features: fuse=1,nesting=1
hostname: NextcloudLXC
memory: 4096
mp0: CephBank:vm-122-disk-0,mp=/media/mp0/,backup=1,size=2000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=X.X.X.X,hwaddr=X:X:X:X:X:X,ip=X.X.X.X/24,type=veth
onboot: 1
ostype: debian
rootfs: nvme_cluster:/subvol-122-disk-0,size=32G
swap: 2048
unprivileged: 1

Code:
nano /etc/pve/storage.cfg

dir: pve_nvme
        path /mnt/pve/pve_nvme
        content backup,images,snippets,iso,rootdir,vztmpl
        is_mountpoint 1
        nodes pve

zfspool: nvme_cluster
        pool nvme_cluster
        content rootdir,images
        mountpoint /nvme_cluster
        nodes pve,pvedist,pve1
        sparse 1
 
The / should not be here.
I removed "/".
Code:
  GNU nano 7.2                                                         /etc/pve/lxc/122.conf                                                                 
arch: amd64
cores: 4
features: fuse=1,nesting=1
hostname: NextcloudLXC
memory: 4096
mp0: CephBank:vm-122-disk-0,mp=/media/mp0/,backup=1,size=2000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=X.X.X.X,hwaddr=X:X:X:X:X:X,ip=X.X.X.X/24,type=veth
onboot: 1
ostype: debian
rootfs: nvme_cluster:subvol-122-disk-0,size=32G
swap: 2048
unprivileged: 1

Error:

Code:
()
run_buffer: 322 Script exited with status 2
lxc_init: 844 Failed to run lxc.hook.pre-start for container "122"
__lxc_start: 2027 Failed to initialize container "122"
TASK ERROR: startup for container '122' failed
 
Please share the full output when you run pct start 122 --debug.
 
Please share the full output when you run pct start 122 --debug.
Code:
root@pve:~# pct start 122 --debug
run_buffer: 322 Script exited with status 2
lxc_init: 844 Failed to run lxc.hook.pre-start for container "122"
__lxc_start: 2027 Failed to initialize container "122"
id 0 hostid 100000 range 65536
INFO     lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 - Initialized LSM security driver AppArmor
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook" for container "122", config section "lxc"
DEBUG    conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 122 lxc pre-start produced output: cannot open directory //nvme_cluster/subvol-122-disk-0: No such file or directory

ERROR    conf - ../src/lxc/conf.c:run_buffer:322 - Script exited with status 2
ERROR    start - ../src/lxc/start.c:lxc_init:844 - Failed to run lxc.hook.pre-start for container "122"
ERROR    start - ../src/lxc/start.c:__lxc_start:2027 - Failed to initialize container "122"
INFO     conf - ../src/lxc/conf.c:run_script_argv:338 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "122", config section "lxc"
startup for container '122' failed

Unless I'm saying something stupid, my subvols are mounted directly under /:


Code:
root@pve:~# ls -l /subvol-1
subvol-120-disk-0/ subvol-122-disk-0/ subvol-140-disk-0/


root@pve:~# ls -l /nvme_cluster/
total 0

Code:
root@pve:~# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
nvme_cluster                     288G  89.4G   128K  /
nvme_cluster/subvol-120-disk-0   490M  7.52G   490M  /subvol-120-disk-0
nvme_cluster/subvol-122-disk-0  1.90G  30.1G  1.90G  /subvol-122-disk-0
nvme_cluster/subvol-140-disk-0   467M  7.54G   467M  /subvol-140-disk-0
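(For reference, where each dataset is configured to mount, and whether it is actually mounted, can be checked like this; a sketch using standard ZFS properties:)

Bash:
zfs get -r mountpoint,mounted nvme_cluster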
 
Code:
root@pve:~# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
nvme_cluster                     288G  89.4G   128K  /
nvme_cluster/subvol-120-disk-0   490M  7.52G   490M  /subvol-120-disk-0
nvme_cluster/subvol-122-disk-0  1.90G  30.1G  1.90G  /subvol-122-disk-0
nvme_cluster/subvol-140-disk-0   467M  7.54G   467M  /subvol-140-disk-0
Code:
nano /etc/pve/storage.cfg

zfspool: nvme_cluster
        pool nvme_cluster
        content rootdir,images
        mountpoint /nvme_cluster
        nodes pve,pvedist,pve1
        sparse 1
The mountpoint in ZFS doesn't match the one in the storage configuration. If this is actually your root file system, adapt the storage configuration, otherwise, the other way around.
 
The mountpoint in ZFS doesn't match the one in the storage configuration. If this is actually your root file system, adapt the storage configuration, otherwise, the other way around.
I have already tried making changes, but without success. Can you give me an example?
 
Code:
root@pve:~# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
nvme_cluster                     288G  89.4G   128K  /
nvme_cluster/subvol-120-disk-0   490M  7.52G   490M  /subvol-120-disk-0
nvme_cluster/subvol-122-disk-0  1.90G  30.1G  1.90G  /subvol-122-disk-0
nvme_cluster/subvol-140-disk-0   467M  7.54G   467M  /subvol-140-disk-0

I see the nvme_cluster mountpoint is /.

Doesn't it overlap with rpool? Or is your OS in nvme_cluster?


Bash:
root@pve:~# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
nvme_cluster                     264G   114G   120K  /nvme_cluster


I think this is how it should be.


I see you are making small mistakes at every step.
 
I see the nvme_cluster mountpoint is /.

Doesn't it overlap with rpool? Or is your OS in nvme_cluster?

I think this is how it should be.

I see you are making small mistakes at every step.
What OS? Proxmox?
I don't understand this inconsistency either. How can I fix it?
 
Let's start fixing it from the ZFS side.

Is your nvme_cluster ZFS pool dedicated to VM data? Then let's set its mountpoint to /nvme_cluster (or /media/nvme_cluster, as I do):

# zfs set mountpoint=/nvme_cluster nvme_cluster
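Afterwards it is worth double-checking that the child datasets picked up the new mountpoint again (a sketch, assuming none of them has an explicit mountpoint of its own):

Bash:
# the subvol-* datasets should show /nvme_cluster/subvol-... again
zfs list -r -o name,mountpoint nvme_cluster
zfs mount -a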
 
Same issue:
When I start the CT:
Code:
()
TASK ERROR: unable to parse zfs volume name 'nvme_cluster/subvol-122-disk-0'

Code:
root@pve:~# ls -l /nvme_cluster
total 27
drwxr-xr-x  2 root   root    2 Jan 18 00:00 dev
drwxr-xr-x  3 root   root    3 Jan 18 08:56 run
drwxr-xr-x 18 100000 100000 24 Jan 17 15:34 subvol-120-disk-0
drwxr-xr-x 19 100000 100000 25 Jan 15 21:56 subvol-122-disk-0
drwxr-xr-x 18 100000 100000 24 Jan 15 21:55 subvol-140-disk-0
root@pve:~#

Code:
  GNU nano 7.2                                         /etc/pve/lxc/122.conf
arch: amd64
cores: 4
features: fuse=1,nesting=1
hostname: NextcloudLXC
memory: 4096
mp0: CephBank:vm-122-disk-0,mp=/media/mp0/,backup=1,size=2000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.x,hwaddr=x:x:x:x:x:x,ip=x.x.x.x/24,type=veth
onboot: 1
ostype: debian
rootfs: nvme_cluster:nvme_cluster/subvol-122-disk-0,size=32G
swap: 2048
unprivileged: 1
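(For what it's worth, the "unable to parse zfs volume name" error points at the pool name being repeated inside the volume ID; on a zfspool storage the rootfs line presumably needs the bare dataset name again, as used earlier in the thread:)

Code:
rootfs: nvme_cluster:subvol-122-disk-0,size=32G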
 
