PVE can't move/create container on ZFS (VMs work fine)

Discussion in 'Proxmox VE: Installation and configuration' started by levifig, Jun 15, 2018 at 05:12.

  1. levifig

    levifig New Member

    Joined:
    Feb 13, 2018
    Messages:
    6
    Likes Received:
    0
    Hey guys,

    I'm at a loss here. I recently moved from other virtualization solutions to Proxmox and have been pretty happy. Glad I made the switch… :)

    One thing that has got me stumped is this: a few days ago I got 2 enterprise-grade SSDs for VM storage. I was using a "consumer" SSD for that, and it was working fine, but I wanted some safety. I installed the 2 SSDs and got them configured in a ZFS mirror. Because I needed to remove the old SSD to be able to install the 2 new ones, I first moved all the VMs and containers to a spinning drive (which worked fine). After removing the old SSD and getting the new ones up and running, I started moving the VMs and containers to the new storage.

    I added the new pool (volData) to PVE and configured it with "thin provision". The VMs moved fine (as zvols, which is nice), but I could not move the containers. It kept telling me: TASK ERROR: cannot open directory //volData: No such file or directory. I tried creating a dataset (volData/pve) and adding that as additional storage for containers, and that gave me the exact same error message.

    I'm really confused as to what is going on here. My next step would be to manually create a zvol, format it as ext4, and add that as a directory in PVE. I'm sure that'll work, but I'm really confused as to why adding the regular ZFS pool or dataset isn't working… I read somewhere that using directories on ZFS, instead of ZFS "proper", is not really the best solution.
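
    For reference, the storage entry I added should correspond to something roughly like this in /etc/pve/storage.cfg (volData is my pool name; I'm writing this from memory, so take it as a sketch rather than the exact lines):

    Code:
    zfspool: volData
            pool volData
            content images,rootdir
            sparse 1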

    Also, why doesn't PVE allow me to add a ZFS dataset and set it for backups? Why just images and containers? :(

    Thanks for the help.
     
  2. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    3,356
    Likes Received:
    191
    Hi,

    Do you use custom mountpoint with ZFS?

    please send the output of
    Code:
    zfs get mountpoint,mounted -r volData
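
    For a pool with the default mountpoint, the output would look something like this (illustrative values only, your dataset list will differ):

    Code:
    NAME     PROPERTY    VALUE     SOURCE
    volData  mountpoint  /volData  default
    volData  mounted     yes       -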
    
     
  3. jim.bond.9862

    jim.bond.9862 Member

    Joined:
    Apr 17, 2015
    Messages:
    203
    Likes Received:
    19
    If the new disks are the same size or larger, you could have just used the replace option or the detach/attach option.

    I am not sure if you can do this if the new disks are smaller than the existing ones, but if they are the same size or bigger you could
    do "zpool replace <yourPoolName> <oldDisk> <newDisk>"

    OR "zpool detach <yourPoolName> <oldDisk>",
    swap the disk for the new one, and do "zpool attach <yourPoolName> <existingDisk> <newDisk>"
    It helps if you can install the replacement disk in the system first.

    could have been faster than doing all the moving around :)
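
    Roughly, the one-step replace looks like this (pool and device names are just placeholders for your setup):

    Code:
    # with the old and new disk both connected, replace copies the data over
    # and drops the old disk automatically once the resilver finishes
    zpool replace volData /dev/disk-old /dev/disk-new
    zpool status volData    # watch the resilver progress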
     
  4. levifig

    levifig New Member

    Joined:
    Feb 13, 2018
    Messages:
    6
    Likes Received:
    0
    Yes, custom mount point (/mnt/<pool_name>, i.e. /mnt/volData). Does that make a difference? :eek:
     
  5. levifig

    levifig New Member

    Joined:
    Feb 13, 2018
    Messages:
    6
    Likes Received:
    0
    Like I mentioned, I couldn't install the new disks without removing the original one, hence moving everything to a temporary drive (one I wasn't planning to remove), then adding the new disks, creating the mirror pool, and going from there... ;)
     
  6. jim.bond.9862

    jim.bond.9862 Member

    Joined:
    Apr 17, 2015
    Messages:
    203
    Likes Received:
    19
    Well, if you had done them one at a time it would have worked too,
    i.e.:

    detach disk 2
    attach new disk 2

    wait for the resilver

    detach disk 1
    attach new disk 1

    just saying....
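
    In command form that would be something like this (disk names are examples, swap in your own):

    Code:
    zpool detach volData disk2-old
    zpool attach volData disk1-old disk2-new   # resilvers onto the new disk
    zpool status volData                       # wait for the resilver to finish

    zpool detach volData disk1-old
    zpool attach volData disk2-new disk1-new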
     
    levifig likes this.
  7. levifig

    levifig New Member

    Joined:
    Feb 13, 2018
    Messages:
    6
    Likes Received:
    0

    This is good information, but doesn't help my problem at all :) What I did was probably faster than this too, for the number of VMs/containers I have ATM! ;)
     
  8. levifig

    levifig New Member

    Joined:
    Feb 13, 2018
    Messages:
    6
    Likes Received:
    0
    Even when trying to add the ZFS pool's mount point as directory storage and writing the image to it as a regular raw file, it fails with this error message:

    Code:
    Formatting '/mnt/volData/images/120/vm-120-disk-1.raw', fmt=raw size=34359738368
    mke2fs 1.43.4 (31-Jan-2017)
    Discarding device blocks:    4096/8388608               done                          
    Creating filesystem with 8388608 4k blocks and 2097152 inodes
    Filesystem UUID: 1446ebbf-bdbc-425b-8a26-1a9d3bad73f7
    Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624
    
    Allocating group tables:   0/256       done                          
    Writing inode tables:   0/256       done                          
    Creating journal (65536 blocks): done
    Multiple mount protection is enabled with update interval 5 seconds.
    Writing superblocks and filesystem accounting information:   0/256     
    Warning, had trouble writing out superblocks.
    TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /mnt/volData/images/120/vm-120-disk-1.raw' failed: exit code 144
    Completely at a loss here… :( It worked fine for VMs with thin provisioning, but containers refuse to work. It creates the folder structure in the directory just fine too, it just can't create the RAW files for whatever reason… :\
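
    In case it helps with debugging, this is what I'd check next to see whether the dataset is actually mounted where the directory storage expects it (just my guess at where to look):

    Code:
    zfs get mountpoint,mounted volData
    df -h /mnt/volData
    ls -la /mnt/volData/images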
     
  9. wolfgang

    wolfgang Proxmox Staff Member
    Staff Member

    Joined:
    Oct 1, 2014
    Messages:
    3,356
    Likes Received:
    191
    Yes, it makes a difference if you use custom mountpoints.
    Please use the default settings.
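
    If you want to keep the pool but drop the custom mountpoint, something like this should bring it back to the default layout (a sketch, adjust the pool name; running containers should be stopped so the datasets can be remounted):

    Code:
    # reset the pool (and any child datasets) to the default /volData mountpoint
    zfs inherit -r mountpoint volData
    zfs get mountpoint,mounted -r volData    # verify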
     