Creating CT on ZFS storage failed!

Borut

I successfully created several VMs on ZFS storage, but creating a CT on the same ZFS storage failed!

root@starspot:/home/borut# zfs list | grep wpool
wpool 297G 2.88T 192K none
wpool/cts 192K 2.88T 192K /cts
wpool/home 424K 2.88T 192K /home
wpool/home/borut 232K 2.88T 232K /home/borut
wpool/vms 297G 2.88T 192K /vms
wpool/vms/vm-100-disk-1 99.0G 2.96T 8.33G -
wpool/vms/vm-101-disk-1 99.0G 2.96T 9.98G -
wpool/vms/vm-102-disk-1 99.0G 2.96T 8.94G -

Create: LXC Container -> Root Disk:
Storage: vm
Disk size (GB): 32

mounting container failed
TASK ERROR: cannot open directory //wpool: No such file or directory

Creating a VM on the same storage works, but creating a CT fails. Is this a bug, or am I doing something wrong?
 
What does your /etc/pve/storage.cfg look like?
 
root@starspot:~# more /etc/pve/storage.cfg
dir: local
path /var/lib/vz

content vztmpl,backup,iso

zfspool: local-zfs
pool rpool/data
content images,rootdir
sparse 1

zfspool: vm
pool wpool/vms
content images,rootdir
sparse 0

zfspool: ct
pool wpool/cts
content rootdir,images
sparse 0

root@starspot:~#
 
Upgrading PVE to 5.2-1 didn't help:

Creating a new CT:

mounting container failed
TASK ERROR: cannot open directory //wpool: No such file or directory
 
Please post your 'pveversion -v' and the complete 'zfs list' output. I tried to replicate this, but it works without issue.
 
root@starspot:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9

root@starspot:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
archive_pool 215G 91.3T 279K none
archive_pool/archive 215G 91.3T 326K /archive
archive_pool/archive/sdo 215G 91.3T 215G /archive/sdo
archive_pool/archive/soho 1.14M 91.3T 326K /archive/soho
archive_pool/archive/soho/eit 279K 91.3T 279K /archive/soho/eit
archive_pool/archive/soho/lasco 279K 91.3T 279K /archive/soho/lasco
archive_pool/archive/soho/sumer 279K 91.3T 279K /archive/soho/sumer
archive_pool/archive/stereo 1.41M 91.3T 326K /archive/stereo
archive_pool/archive/stereo/impact 279K 91.3T 279K /archive/stereo/impact
archive_pool/archive/stereo/plastic 279K 91.3T 279K /archive/stereo/plastic
archive_pool/archive/stereo/position 279K 91.3T 279K /archive/stereo/position
archive_pool/archive/stereo/secchi 279K 91.3T 279K /archive/stereo/secchi
archive_pool/solar 279K 91.3T 279K /solar
rpool 32.1G 183G 96K /rpool
rpool/ROOT 10.3G 183G 96K /rpool/ROOT
rpool/ROOT/pve-1 10.3G 183G 10.3G /
rpool/data 13.2G 183G 12.6G /rpool/data
rpool/data/subvol-103-disk-1 235M 31.8G 235M /rpool/data/subvol-103-disk-1
rpool/data/subvol-104-disk-1 393M 31.6G 393M /rpool/data/subvol-104-disk-1
rpool/swap 8.50G 184G 7.33G -
wpool 297G 2.88T 192K none
wpool/cts 192K 2.88T 192K /cts
wpool/home 424K 2.88T 192K /home
wpool/home/borut 232K 2.88T 232K /home/borut
wpool/vms 297G 2.88T 192K /vms
wpool/vms/vm-100-disk-1 99.0G 2.96T 8.33G -
wpool/vms/vm-101-disk-1 99.0G 2.96T 10.2G -
wpool/vms/vm-102-disk-1 99.0G 2.96T 8.95G -
root@starspot:~#
 
wpool/cts 192K 2.88T 192K /cts
wpool/vms 297G 2.88T 192K /vms
The mountpoints have been changed from their defaults (/wpool/cts), so they are not where the storage plugin looks when creating a subvol. For VMs it doesn't matter, as they are on zvols and no mountpoint is needed.
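A quick way to see the mismatch (using the dataset names from this thread; the exact output formatting may differ):

root@starspot:~# zfs get -o name,value mountpoint wpool wpool/cts
NAME       VALUE
wpool      none
wpool/cts  /cts

The zfspool storage 'ct' points at wpool/cts and expects it to be mounted at /wpool/cts, but here the pool itself is not mounted at all (mountpoint=none) and the dataset is mounted at /cts instead.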
 
O.K., so this is a bug then! When adding a new storage, the dialog offers the following choices:
Screen Shot 2018-05-25 at 10.54.25.png
Selecting wpool/cts appears to be a valid choice, but it later produces:
TASK ERROR: cannot open directory //wpool: No such file or directory
 
That view doesn't list the mountpoint; it assumes the pool is mounted under the same name, unless that has been changed by hand. In your case it is not the default, hence the creation of a subvol does not work.
 
What is the solution if I would like to keep CTs on wpool/ct or wpool/cts?
 
Set the mountpoint. Proxmox does not look at the current mountpoint of a ZFS dataset; it uses a static, built-in convention.

Example: if you want to use the dataset zfs_pool/vm_volume, its mountpoint must be /zfs_pool/vm_volume/, not something like /my_vm_directory/.
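For the datasets in this thread, that would look roughly like this (a sketch; adjust the names to your setup, and ideally do it while no guest on the pool is running):

root@starspot:~# zfs set mountpoint=/wpool wpool
root@starspot:~# zfs set mountpoint=/wpool/cts wpool/cts

With the pool mounted at /wpool and the dataset at /wpool/cts, the 'ct' storage can find the directory it expects when creating a subvol for a container.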
 
Does that mean I shouldn't create ZFS datasets myself (because PVE will not know about them)?
 
Does that mean I shouldn't create ZFS datasets myself (because PVE will not know about them)?
You can create as many zfs datasets as you like, but you should not change the default mount path.
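A minimal sketch of that workflow (the storage and dataset names here are just examples, and it assumes the pool itself is mounted at its default /wpool):

root@starspot:~# zfs create wpool/cts2
root@starspot:~# pvesm add zfspool ct2 --pool wpool/cts2 --content rootdir,images

The new dataset keeps its inherited mountpoint /wpool/cts2, which is exactly where the zfspool storage will look for subvols.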
 
I hadn't set the mountpoint to /wpool, so the default mount path was not in place. Now everything is O.K.
Thank you.
 
