Mounted ZFS pool is empty. How can I access the data on a mounted ZFS disk?

Joogser

Member
Jun 30, 2019
25
4
23
39
Russian Federation
habr.com
Hello, colleagues.

I have Proxmox VE 6.0-4 without a subscription. The server has a single SSD installed (the disk shows up as nvme0n1), and I want to use this disk with the ZFS file system.

The mounted ZFS pool is empty. How can I access the data on the mounted ZFS disk?

Here is additional information about my ZFS configuration:

Code:
fdisk /dev/nvme0n1
Command (m for help): g
Created a new GPT disklabel (GUID: 60CB2AB1-4C8F-224D-B2EC-AE41FD037C01).

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
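(Not required, but a quick way to double-check which device is about to be handed over to ZFS; the device name nvme0n1 is the one from this post:)

Code:
# list the disk and any partitions before running zpool create on it
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/nvme0n1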

So I created the ZFS pool with this command and named it rpool:
Code:
zpool create rpool /dev/nvme0n1

The pool was automatically mounted at /rpool in the filesystem root:

[screenshot: mc.jpg]
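For reference, a minimal sketch of how to confirm (or change) where the pool's root dataset is mounted; the pool name rpool is the one from this post:

Code:
# show the mountpoint property (defaults to /<poolname>)
zfs get mountpoint rpool
# it can be changed if needed, for example:
zfs set mountpoint=/mnt/rpool rpool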


Code:
root@node1:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          nvme0n1   ONLINE       0     0     0

Code:
root@node1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   63G     0   63G   0% /dev
tmpfs                  13G  9.5M   13G   1% /run
/dev/mapper/pve-root   15G  2.7G   12G  20% /
tmpfs                  63G   43M   63G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  63G     0   63G   0% /sys/fs/cgroup
/dev/fuse              30M   16K   30M   1% /etc/pve
tmpfs                  13G     0   13G   0% /run/user/0
/dev/sda1             466G   22G  445G   5% /mnt/little_angel
rpool                 923G  128K  923G   1% /rpool

Code:
root@node1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: Little_Angel
        path /mnt/little_angel
        content backup
        maxfiles 1
        nodes node1
        shared 0

zfspool: zfs_disk
        pool rpool
        content images,rootdir
        nodes node1
        sparse 1
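For what it's worth, the zfspool entry above can also be created from the shell instead of the GUI; a minimal sketch using the storage name and options shown in the config above:

Code:
# roughly equivalent to adding the ZFS storage via the web UI
pvesm add zfspool zfs_disk --pool rpool --content images,rootdir --sparse 1 --nodes node1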

I restored a VM onto this zfs_disk storage:

[screenshot: vm.jpg]


I went to this mounted ZFS disk by opening it in Midnight Commander, and I see nothing! I can't understand how I can see, touch, and work with the files on this disk that were made/generated by Proxmox VE.

[screenshot: mc2.jpg]

[screenshot: cli.jpg]

[screenshot: prox.jpg]


Where am I going wrong?

Thanks.
 

What's the output of "zfs list"?

Background: VM disks on ZFS storage do not use a file but a "zvol". zvols are block devices that do not show up if you go into the directory where the pool is mounted. Containers, on the other hand, use ZFS datasets, which are regular file systems that can be seen.

On a last note: I would not name an additional pool "rpool" as this is the name given to the pool if you install Proxmox VE on a ZFS root. It might bite you in the future and will definitely cause some confusion if you have other people helping you out. You might want to rename it :)
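If you want to see the zvols themselves, a minimal sketch (pool name rpool as used in this thread):

Code:
# zvols are datasets of type "volume" and do not appear under /rpool
zfs list -t volume
# each zvol does get a block device node here:
ls -l /dev/zvol/rpool/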
 
On a last note: I would not name an additional pool "rpool" as this is the name given to the pool if you install Proxmox VE on a ZFS root. It might bite you in the future and will definitely cause some confusion if you have other people helping you out. You might want to rename it :)

Okay, to remove the ZFS pool named rpool I ran the following commands:
Code:
root@node1:/# zpool destroy rpool
root@node1:/# zfs list
no datasets available

Then I created a new one named TEST:
Code:
zpool create TEST /dev/nvme0n1

[screenshot: create_zfs.jpg]

[screenshot: add_zfs.jpg]

[screenshot: restore_on_zfs.jpg]


What's the output of "zfs list"?

Background: VM disks on ZFS storage do not use a file but a "zvol". zvols are block devices that do not show up if you go into the directory where the pool is mounted. Containers, on the other hand, use ZFS datasets, which are regular file systems that can be seen.

zfs list:

[screenshot: zfs_list.jpg]


zvols are block devices that do not show up if you go into the directory where the pool is mounted. Containers, on the other hand, use ZFS datasets, which are regular file systems that can be seen.
Understood
 

I asked this question because I need to import a virtual machine from VMware ESXi into Proxmox.

As a first step, I used VMware vCenter Converter Standalone Client to create an image of the VM:

[screenshot: standalone.jpg]


After this I copied Proxy_Debian_Production.vmdk to the Proxmox host, specifically to the additional backup drive mounted at /mnt/little_angel/.

Then I converted it to raw format:
Code:
qemu-img convert Proxy_Debian_Production.vmdk debian.raw
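(In case qemu-img ever misdetects the source format, the formats can also be given explicitly; a small sketch using the file names from this post:)

Code:
# inspect what qemu-img thinks the source image is
qemu-img info Proxy_Debian_Production.vmdk
# convert with explicit source/target formats
qemu-img convert -f vmdk -O raw Proxy_Debian_Production.vmdk debian.raw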

[screenshot: convert.jpg]


So now I need to create a VM for this restore:

[screenshot: vm_for_restore.jpg]

[screenshot: vm_for_restore2.jpg]


Now I must copy this raw disk directly to the VM with ID 101, and I also need to name the disk the same way it appears in the VM's configuration. Let's begin:

Code:
dd if=debian.raw of=/TEST/vm-101-disk-0

[screenshot: dd.jpg]

[screenshot: fail.jpg]

  1. TEST is our ZFS pool; is this VM disk really stored in the root of it?
  2. Which path do I need to use for the copy? (see the sketch after this list)
  3. Also, if I copied to the wrong path, how should I remove that wrong copy to free up disk space? Especially given that:

     zvols are block devices that do not show up if you go into the directory where the pool is mounted. Containers, on the other hand, use ZFS datasets, which are regular file systems that can be seen.
  4. Any other thoughts about restoring this VMware VM?
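For what it's worth, a minimal sketch of two ways this copy could be done, assuming debian.raw sits in /mnt/little_angel/ as described above and that the disk vm-101-disk-0 already exists on the TEST storage and is at least as large as the raw image (please double-check names and sizes on your system first):

Code:
# option 1: write the raw image straight to the zvol block device;
# the zvol lives under /dev/zvol/<pool>/, not in the /TEST directory
dd if=/mnt/little_angel/debian.raw of=/dev/zvol/TEST/vm-101-disk-0 bs=1M status=progress

# option 2: let Proxmox import the image and attach it to VM 101 afterwards
qm importdisk 101 /mnt/little_angel/debian.raw TEST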
 
