pve-zsync: Access snapshots

helojunkie
I have a system running Proxmox VE 5.2-1 (latest updates).

I have 2 x 512GB SSDs in a ZFS RAID1 configuration and 2 x 8TB spinners in a ZFS RAID1 configuration. The SSDs are my rpool, and the 8TB drives are set up as a pool named "spinning".

I am using pve-zsync to snapshot my VMs and sync those snapshots from the SSDs to the spinners.

I have created pve-zsync jobs for two of my VMs, and they appear to be running fine:

Code:
root@proxmox:~# pve-zsync list
SOURCE                   NAME                     STATE     LAST SYNC           TYPE  CON 
101                      Win10PRO                 ok        2018-06-14_19:15:01 qemu  local
102                      Gatekeeper               ok        2018-06-14_19:15:03 qemu  local


root@proxmox:~# pve-zsync status
SOURCE                   NAME                     STATUS  
101                      Win10PRO                 ok
102                      Gatekeeper               ok
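
For reference, the jobs were created with pve-zsync create along these lines (the maxsnap value here is a placeholder, not my real setting):

Code:
pve-zsync create --source 101 --dest spinning/backups --name Win10PRO --maxsnap 12 --verbose
pve-zsync create --source 102 --dest spinning/backups --name Gatekeeper --maxsnap 12 --verbose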

I can see the snapshots on both my rpool (SSDs) and my spinners:

Code:
root@proxmox:~# zfs list -t snapshot
NAME                                                                USED  AVAIL  REFER  MOUNTPOINT
rpool/ssd_images/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:00:01    4.84M      -  31.7G  -
rpool/ssd_images/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:15:01    1.54M      -  31.7G  -
rpool/ssd_images/vm-102-disk-1@rep_Gatekeeper_2018-06-14_19:00:03   328K      -  10.8G  -
rpool/ssd_images/vm-102-disk-1@rep_Gatekeeper_2018-06-14_19:15:03   308K      -  10.8G  -
spinning/backups/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:00:01    8.68M      -  31.5G  -
spinning/backups/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:15:01       0B      -  31.5G  -
spinning/backups/vm-102-disk-1@rep_Gatekeeper_2018-06-14_19:00:03  1.64M      -  10.7G  -
spinning/backups/vm-102-disk-1@rep_Gatekeeper_2018-06-14_19:15:03     0B      -  10.7G  -

I can see the space actually used on my spinning pool:

Code:
root@proxmox:~# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool      476G   160G   316G         -    48%    33%  1.00x  ONLINE  -
spinning  7.25T  46.6G  7.20T         -     0%     0%  1.00x  ONLINE  -

Both of my pools have listsnapshots set to on:

Code:
root@proxmox:~# zpool get listsnapshots rpool
NAME   PROPERTY       VALUE      SOURCE
rpool  listsnapshots  on         local

root@proxmox:~# zpool get listsnapshots spinning
NAME      PROPERTY       VALUE      SOURCE
spinning  listsnapshots  on         local

I have set the ZFS snapdir property to visible on both pools:

Code:
root@proxmox:~# zfs get snapdir rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  snapdir   visible  local

root@proxmox:~# zfs get snapdir spinning
NAME      PROPERTY  VALUE    SOURCE
spinning  snapdir   visible  local


After having done all of this, I am unable to locate or access my snapshots in the .zfs directory as I would have expected:

Code:
root@proxmox:~# locate .zfs | grep snapshot
/.zfs/snapshot
/rpool/.zfs/snapshot
/rpool/ROOT/.zfs/snapshot
/rpool/data/.zfs/snapshot
/rpool/ssd_images/.zfs/snapshot
/spinning/.zfs/snapshot
/spinning/backups/.zfs/snapshot

An ls -alh on each of those directories shows nothing at all.


So my question is: how do I access those snapshots? I run a number of FreeNAS ZFS file servers and am used to being able to cd into the snapshot directory, look at the files, and recover them (or whatever). On Proxmox, however, although everything is on ZFS, I cannot seem to locate (much less browse) my snapshots.

Any help would be greatly appreciated.

Thanks
 
Hi,

Since you are using KVM, PVE uses zvols as disk images, so you need the snapdev property, not snapdir.
The block-device snapshots are then shown under /dev/zfs/<pool>/...
 
Thank you Wolfgang -

But that didn't seem to help:

Code:
NAME      PROPERTY  VALUE    SOURCE
spinning  snapdev   visible  local

But there is no /dev/zfs or /dev/.zfs directory in which to find the snapshots.

To be clear, I am trying to find the snapshots on the spinning pool, which is on a separate set of ZFS RAID1 hard drives; /dev would put them on my rpool.

Thanks
 
Sorry, my mistake.
The path is /dev/zvol/<pool>.
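
For example, with the dataset names from your listing (snapdev must be visible on the zvol itself, or inherited by it):

Code:
zfs set snapdev=visible spinning/backups/vm-101-disk-1
# the snapshots then show up as block devices:
ls -l /dev/zvol/spinning/backups/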
 
Thanks, I see a link to the snapshot there, but I cannot gain access to it. I am thinking maybe I am not understanding how Proxmox does snapshots. On other ZFS systems, I can cd into the actual snapshot directory and see the files, etc. in the directory.

My goal is to snapshot my VMs and then replicate those snapshots to another pool on the exact same system and then be able to access those snapshots on the other pool to gain access to the snapshots.

My goal in this is to get the snapshots off the primary drives they are on in the event these drives fail.

Maybe I am going about it the wrong way. Are the built-in Proxmox snapshots perhaps not actual ZFS snapshots?

Thanks
 
Thanks, I see a link to the snapshot there, but I cannot gain access to it. I am thinking maybe I am not understanding how Proxmox does snapshots. On other ZFS systems, I can cd into the actual snapshot directory and see the files, etc. in the directory.
Hi,
what kind of files do you expect to see when you look at an HDD clone?
You have a device with a partition table and an MBR (if one is used inside the VM). It's a view of a block device!

Udo
 
Hi Udo -

When I take a snapshot on one of my ZFS file servers (FreeNAS, in my case), I can go into that snapshot directory, see every file and every directory, and grab an individual file or group of files and copy them wherever I want without having to clone the snapshot. This is true for both the local copy of the snapshot and the replicated copy.

That is what I am looking for with my Proxmox snapshots (which are being stored on my ZFS filesystem).

I am starting to assume that Proxmox snapshots (run from within the Proxmox GUI itself) are not traditional snapshots like the ones I am used to dealing with, and are therefore not really designed for what I need. I need to be able to take a snapshot, replicate that snapshot to another pool (or another system 500 miles away), and have access to the entire file structure of that snapshot without having to roll back.

I have attempted to take a snapshot, use pve-zsync to replicate it, and then gain access to it.

FWIW - when I run a tool such as zfs-auto-snap, I have the access I am looking for with my snapshots. I was just looking for the same functionality with the built-in snapshot tool in Proxmox.
 
Hi Udo -

When I take a snapshot on one of my ZFS file servers (FreeNAS, in my case), I can go into that snapshot directory, see every file and every directory, and grab an individual file or group of files and copy them wherever I want without having to clone the snapshot. This is true for both the local copy of the snapshot and the replicated copy.
Hi,
if you look at a snapshot of a filesystem, you will of course see the contents of the filesystem.

If you look at a snapshot of a block device, you see... a block device!

You can use kpartx to create device files for the partitions on that block device; then you can mount them, and then you can see files!
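
A rough sketch of the same idea against one of your zvol snapshots (dataset names taken from your listing; the mapper name below is illustrative, kpartx prints the real one, and a Windows disk would additionally need ntfs-3g):

Code:
zfs set snapdev=visible spinning/backups/vm-102-disk-1
kpartx -av /dev/zvol/spinning/backups/vm-102-disk-1@rep_Gatekeeper_2018-06-14_19:00:03
mkdir -p /mnt/snap
# snapshots are read-only, so mount read-only; substitute the map name kpartx printed
mount -o ro /dev/mapper/<map-name-from-kpartx> /mnt/snap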

Udo
 
Udo -

I guess I am missing something. When I go into Proxmox and click snapshot, is that a block-device snapshot or a filesystem snapshot, and how do I select between the two so I can use the built-in Proxmox tools? When I take a snapshot from the command line, it works as I would expect: I have access to my filesystem and files directly. This is what I am trying to accomplish within Proxmox.

Thanks for the help!
 
Udo -

I guess I am missing something. When I go into Proxmox and click snapshot, is that a block-device snapshot or a filesystem snapshot, and how do I select between the two so I can use the built-in Proxmox tools? When I take a snapshot from the command line, it works as I would expect: I have access to my filesystem and files directly. This is what I am trying to accomplish within Proxmox.

Thanks for the help!
Hi,
your snapshots are named spinning/backups/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:00:01.
So each one is a snapshot (view) of a block device, e.g. the first disk of VM 101 as of 2018-06-14_19:00:01.

You don't get different snapshots if you do this on the command line.

Look here, perhaps it will be clearer what I mean.
In this case I worked on a normal, stopped VM disk, because I don't have ZFS at home yet (but it's the same with a snapshot):
Code:
apt install kpartx
############################################################
qm list                                                                                                                                         
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID       
       ...  
       108 debian-srv           stopped    768               12.00 0
############################################################
parted /dev/pve/vm-108-disk-1 print
Model: Linux device-mapper (thin) (dm)
Disk /dev/dm-10: 12.9GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  512MB   511MB   primary  ext2         boot
 2      512MB   12.9GB  12.4GB  primary               lvm
############################################################
kpartx -av /dev/pve/vm-108-disk-1
add map pve-vm--108--disk--1p1 (253:16): 0 997376 linear 253:10 2048
add map pve-vm--108--disk--1p2 (253:17): 0 24164352 linear 253:10 999424
############################################################
mkdir /mnt/test
mount /dev/mapper/pve-vm--108--disk--1p1 /mnt/test
# pve-vm--108--disk--1p2 can't be mounted directly because it is of type lvm-member,
# but that LVM can be activated too
############################################################
ls -lsa /mnt/test/
total 49697
    1 drwxr-xr-x 4 root root     1024 May 16  2017 .
    4 drwxr-xr-x 6 root root     4096 Jun 18 22:10 ..
  156 -rw-r--r-- 1 root root   157815 Mar  8  2017 config-3.16.0-4-amd64
  184 -rw-r--r-- 1 root root   186695 Mar 30  2017 config-4.9.0-2-amd64
    1 drwxr-xr-x 5 root root     1024 May 16  2017 grub
16866 -rw-r--r-- 1 root root 17201830 May 16  2017 initrd.img-3.16.0-4-amd64
19188 -rw-r--r-- 1 root root 19570497 May 16  2017 initrd.img-4.9.0-2-amd64
   12 drwx------ 2 root root    12288 Mar 18  2017 lost+found
  180 -rw-r--r-- 1 root root   182704 Jun 25  2015 memtest86+.bin
  182 -rw-r--r-- 1 root root   184840 Jun 25  2015 memtest86+_multiboot.bin
 2631 -rw-r--r-- 1 root root  2681172 Mar  8  2017 System.map-3.16.0-4-amd64
 3110 -rw-r--r-- 1 root root  3169870 Mar 30  2017 System.map-4.9.0-2-amd64
 3069 -rw-r--r-- 1 root root  3128784 Mar  8  2017 vmlinuz-3.16.0-4-amd64
 4113 -rw-r--r-- 1 root root  4193832 Mar 30  2017 vmlinuz-4.9.0-2-amd64
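
When you are done, unmount and remove the mappings again:

Code:
umount /mnt/test
kpartx -dv /dev/pve/vm-108-disk-1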
Udo
 
Thanks Udo -

Still not sure I understand. On all of my FreeNAS ZFS systems, I have immediate access to every snapshot I create via the .zfs directory. That is what I am attempting to accomplish here on Proxmox.

Or I guess the question should be: how do I create a filesystem snapshot of my containers located on ZFS on my Proxmox system?
 
Thanks Udo -

Still not sure I understand. On all of my FreeNAS ZFS systems, I have immediate access to every snapshot I create via the .zfs directory. That is what I am attempting to accomplish here on Proxmox.

Or I guess the question should be: how do I create a filesystem snapshot of my containers located on ZFS on my Proxmox system?

You would have to do it within the VM rather than on the host.

As mentioned above, VMs get a zvol, which is a virtual block device carved out of the pool; it's not a normal filesystem. Snapshots of the zvol are there for rollback purposes and aren't directly browsable.
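
For completeness, a rollback of one of those replicated zvols would look something like this (untested here; note that zfs rollback only goes back to the most recent snapshot unless you pass -r, which destroys the newer snapshots):

Code:
zfs rollback spinning/backups/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:15:01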
 
No problem. Perhaps you could run something like Duplicati to back up your VMs to a local SMB share on the host. I know it's not a proper snapshot.

It's a shame that you don't seem to be able to create a clone of a VM by picking a specific snapshot; presumably that's possible on ZFS.
The benefit for you would be that you could simply take normal snapshots, and if you needed to get a file from an older snapshot, you would just clone it and then retrieve the file from within the VM. Still not as good as being able to locally browse a directory, which is what you want, but handy nonetheless.
 
Oh, I can create a clone no problem, but it is a manual process. I am just used to snapshot data on my FreeNAS systems being instantly available, without having to clone anything, so I can grab a file here and there when a user does something stupid. The part I was missing, and that you guys pointed out, is that the VM disks on Proxmox are block-level devices, and hence I have to clone one to get the information I want out of it; no shortcuts allowed. :)

In the end, I am planning on replicating the dataset, which will give me access like I want, but only as of the last snapshot/replication time. If I need more, I will just have to clone the snapshot. Just an extra step is all!
 
The problem is that you can't clone an arbitrary snapshot; you can only clone the current one. In your latter scenario you'd need to roll back to the snapshot you want and then clone it.
 
??? I am not sure I understand what you are saying.

I have snapshots running every hour. I am able to take any one of those snapshots and clone it. I just tried it with the last three hourly snapshots plus one from two weeks ago. They clone just fine.

Maybe I am misunderstanding what you are saying.

As a side note, I am NOT taking snapshots from within the Proxmox GUI, but with the ZFS tools on the command line. Maybe that is where the disconnect is happening in our discussion. I only run ZFS.
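
Concretely, what I am doing is along these lines (the clone name here is just an example), and it works for any snapshot, not just the newest:

Code:
zfs clone spinning/backups/vm-101-disk-1@rep_Win10PRO_2018-06-14_19:00:01 spinning/clone-test
# browse it via kpartx/mount as described above, then clean up:
zfs destroy spinning/clone-test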
 
