PVE can't move/create container on ZFS (VMs work fine)

levifig

Hey guys,

I'm at a loss here. I recently moved from other virtualization solutions to Proxmox and have been pretty happy. Glad I made the switch… :)

One thing that has me stumped is this: a few days ago I got 2 enterprise-grade SSDs for VM storage. I was using a "consumer" SSD for that, and it was working fine, but I wanted some safety. I installed the 2 SSDs and got them configured in a ZFS mirror. To begin, and because I needed to remove the other SSD to be able to install the 2 new ones, I moved all the VMs and containers to a spinning drive (that worked fine). After removing the other SSD and getting the new ones up and running, I started moving the VMs and containers to the new storage. I added the new pool (volData) to PVE and configured it with "thin provision". The VMs moved fine (as zvols, which is nice) but I could not move the containers. It kept telling me: "TASK ERROR: cannot open directory //volData: No such file or directory". I tried creating a dataset (volData/pve) and adding that as additional storage for containers, and that gave me the exact same error message. I'm really confused as to what is going on here. My next step would be to manually create a zvol, format it as ext4, and add that as a directory in PVE. I'm sure that'll work, but I'm really confused as to why adding the regular ZFS pool or dataset isn't working… I read somewhere that using directories on ZFS, rather than ZFS "proper", isn't really the best solution.
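
For reference, the storage entry I added should look roughly like this in /etc/pve/storage.cfg (typed from memory, so the exact options may differ):
Code:
zfspool: volData
	pool volData
	content images,rootdir
	sparse 1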

Also, why doesn't PVE allow me to add a ZFS dataset and set it for backups? Why just images and containers? :(

Thanks for the help.
 
Hi,

Do you use a custom mountpoint with ZFS?

Please send the output of:
Code:
zfs get mountpoint,mounted -r volData
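
With default settings (no custom mountpoint) the output should look roughly like this, assuming the pool is named volData:
Code:
NAME     PROPERTY    VALUE     SOURCE
volData  mountpoint  /volData  default
volData  mounted     yes       -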
 
If the new disks are the same size or larger, you could have just used the replace option or the detach/attach option.

I am not sure if you can do this if the new disks are smaller than the existing ones, but if they are the same size or bigger you could
do "zpool replace <yourPoolName> <oldDisk> <newDisk>"

OR "zpool detach <yourPoolName> <oldDisk>",
swap the disk for the new one, and do "zpool attach <yourPoolName> <remainingDisk> <newDisk>"
(zpool attach takes the existing device to mirror against plus the new device).
It helps if you can install the replacement disk in the system first.

Could have been faster than doing all the moving around :)
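
For example, with a hypothetical pool name "tank" and hypothetical device names:
Code:
# replace the old disk in place; ZFS resilvers onto the new disk
zpool replace tank /dev/sdb /dev/sdd
# check resilver progress and wait for it to finish
zpool status tank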
 
Like I mentioned, I couldn't install the new disks without removing the original, hence moving the VMs/containers to a temporary drive I didn't plan to remove, then adding the new disks, creating the mirror pool, and going from there... ;)
 
Well, if you did them one at a time it would have worked too, i.e. (see the rough command sketch after the list):

detach disk 2
attach new disk 2

wait for re-silver

detach disk 1
attach new disk 1

just saying....
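
Something like this, with hypothetical pool/device names (the attach target is the remaining mirror member):
Code:
# swap the second mirror member first
zpool detach tank /dev/sdc
zpool attach tank /dev/sdb /dev/sdd
zpool status tank    # wait until the resilver completes

# then swap the first member
zpool detach tank /dev/sdb
zpool attach tank /dev/sdd /dev/sde
zpool status tank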
 
This is good information, but doesn't help my problem at all :) What I did was probably faster than this too, for the number of VMs/containers I have ATM! ;)
 
Even when trying to add the ZFS volume mountpoint as a directory and writing the image file to it as a regular raw file, it fails with this error message:

Code:
Formatting '/mnt/volData/images/120/vm-120-disk-1.raw', fmt=raw size=34359738368
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks:    4096/8388608               done                          
Creating filesystem with 8388608 4k blocks and 2097152 inodes
Filesystem UUID: 1446ebbf-bdbc-425b-8a26-1a9d3bad73f7
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624

Allocating group tables:   0/256       done                          
Writing inode tables:   0/256       done                          
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:   0/256     
Warning, had trouble writing out superblocks.
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/volData/images/120/vm-120-disk-1.raw' failed: exit code 144

Completely at a loss here… :( It worked fine for VMs with thin provisioning, but containers refuse to work. It creates the folder structure in the directory just fine too, it just can't create the RAW files for whatever reason… :\
 
Yes, it makes a difference if you use custom mountpoints.
Please use the default settings.
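
For example, assuming the pool is called volData and nothing is currently using the old path, the mountpoint can be reset to its default like this:
Code:
# revert to the inherited/default mountpoint (/volData for the pool root dataset)
zfs inherit mountpoint volData
# or set it explicitly
zfs set mountpoint=/volData volData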
 
Is a fix for this in the works? Thank you.
There is no fix because it is not a bug.
Our framework expects the container at a specific path.

It is the same as if you renamed your NIC to a non-predictable network interface name scheme like "foobar".
The PVE network parser expects a predictable network interface name or eth<x>.
 
I understand. But it's not "non-predictable": it's in the ZFS configuration. Also, that could be added to the storage configuration options. FWIW, VM disks work perfectly fine in this model; only LXC fails…

I'm not presuming to know more than you on this, obviously. If nothing else, a clearer error message would help and save everyone a lot of time… :) "We couldn't find `volData` at `/volData`. If you've set a custom mountpoint for this ZFS pool, please set it back to its default mountpoint (i.e. /<pool_name>) in order to be able to use it with PVE." (too verbose, I know, but you get the point) ;)

Thank you again for the help. o/
 
Greetings!

I ran into this issue yesterday evening and wasted over an hour figuring out this behavior.
It would be nice if that information could at least be mentioned in the Proxmox ZFS docs.
Something like this:
The ZFS mountpoints MUST NOT be changed. Proxmox will assume them to be placed in the root of the filesystem.

I personally dislike having mountpoints in the root of the FS, but if that's how it has to be, I'm fine (;
 
