pve-zsync --maxsnap option.

Where does --maxsnap keep the snapshots (on the local or the remote node?), and how do I use/restore them?
On both.
You can mount and restore the snapshots. For example, send a snapshot:
Code:
zfs send -v rpool/home@bla1 | ssh otherhost "/usr/sbin/zfs receive otherpool/home@bla1"
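For ongoing replication, the follow-up sends can be incremental, which is essentially what pve-zsync automates. A minimal sketch, assuming the pool/dataset names from the example above and that @bla1 already exists on the receiving side (@bla2 is a made-up later snapshot):

```shell
# send only the changes between two snapshots (-i = incremental source)
zfs send -v -i rpool/home@bla1 rpool/home@bla2 | \
    ssh otherhost "/usr/sbin/zfs receive otherpool/home"
```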
Or mount a snapshot:
Code:
mount -t zfs v-machines/home@rep_home_2017-07-05_00:36:48 /mnt/zfsmountsnap
To make the whole thing automatic and elegant, ZFS can make the .zfs folder visible in datasets. For example:
Code:
zfs set snapdir=visible v-machines/home
Thereafter, the desired snapshot is automatically mounted when you access
Code:
.zfs/snapshot/rep_home_2017-07-16_00:01:25
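With snapdir=visible set, the snapshots appear as ordinary read-only directories. For example (assuming the dataset is mounted at /v-machines/home):

```shell
# list every snapshot of the dataset
ls /v-machines/home/.zfs/snapshot/
# browse one snapshot's contents read-only
ls /v-machines/home/.zfs/snapshot/rep_home_2017-07-16_00:01:25/
```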
 
Thanks a lot. One more question: when using cron to sync the disks, if the main host is totally down, how can I list the snapshots and restore them?
Also, since pve-zsync syncs the disk images, why would we still need to take snapshots? Just to roll back to a previous state?
 
Hi,

You can see all the snapshots with:

zfs list -t all
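If the output of -t all is too noisy, you can also list only the snapshots; a small sketch:

```shell
# show only snapshots, together with their creation time
zfs list -t snapshot -o name,creation
```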

pve-zsync uses snapshots while it is running. And yes, you can use any snapshot for a rollback, or to create a clone of it. Both are very useful, and someday you will be very happy that ZFS can do this magic.
As a simple example: if your very important VM has a lot of encrypted files (from a virus), you can roll back to an unaffected state using an old snapshot.
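The rollback and clone operations mentioned here can be sketched as follows (the dataset and snapshot names are made up for illustration):

```shell
# destructive: roll the dataset back to the snapshot, discarding later changes
zfs rollback rpool/data/vm-100-disk-1@clean
# non-destructive: create a writable clone of the snapshot instead
zfs clone rpool/data/vm-100-disk-1@clean rpool/data/vm-100-disk-1-restore
```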

Good luck to you and to all good guys who develop zfs.
 
I'm working on mounting snapshots myself. I've been following the above suggestions, but I can never seem to find the snapshot. It's not listed in the .zfs folder nor does using snapdir=visible show it. The snapshots directory is totally empty. These snapshots are being created by pve-zsync.

Thanks for any help,
Daniel
 
I'm working on mounting snapshots myself. I've been following the above suggestions, but I can never seem to find the snapshot. It's not listed in the .zfs folder nor does using snapdir=visible show it. The snapshots directory is totally empty. These snapshots are being created by pve-zsync.
If you use pve-zsync for zvols, you cannot see anything in the .zfs folder. The best way to see the snapshots is, as I said:

zfs list -t all
 
Hi Guletz,

Thank you for the response. These aren't zvols; they are raw disk images. I do see the snapshots with the zfs list command, but I can't use the mount command; it says it's unable to fetch. For example,

Code:
mount -t zfs dpool/vmdata/vm-150-disk-1@rep_150-15min_2017-12-21_10:34:18 /mnt/zfsmountsnap

gives this error message

Code:
unable to fetch ZFS version for filesystem 'dpool/vmdata/vm-150-disk-1@rep_150-15min_2017-12-21_10:34:18'
 
From what I can guess, I think it is a zvol and not a dataset. A zfs list could be helpful to see whether I am wrong.
 
Here's the status of the pool

Code:
zpool status bpool
  pool: bpool
 state: ONLINE
  scan: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        bpool                       ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000xxxxxxxxf04c  ONLINE       0     0     0
            wwn-0x5000xxxxxxxxacf8  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x5000xxxxxxxx24e4  ONLINE       0     0     0
            wwn-0x5000xxxxxxxx3f28  ONLINE       0     0     0

errors: No known data errors

And here's a partial of the zfs list -t all for this particular snapshot

Code:
bpool/vmback/15min/vm-150-disk-1                                    2.13G  6.54T  2.08G  -
bpool/vmback/15min/vm-150-disk-1@rep_150-15min_2017-12-25_12:00:17   224K      -  1.89G  -

I can say that I personally did not create any zvols. But I don't know what Proxmox creates when you add a hard drive through its VM Creation Wizard.

Thanks for the help,
Daniel
 
I've read on some other sites that you need to clone snapshots of zvols to mount them. Could this be the reason for my error message above?

Code:
unable to fetch ZFS version for filesystem 'dpool/vmdata/vm-150-disk-1@rep_150-15min_2017-12-21_10:34:18'
 
The error message appears because you cannot mount a zvol directly. Mounting a data@snapshot with mount -t zfs works only for datasets (which behave like a file system). For a zvol, you need to clone the zvol snapshot:

Code:
zfs clone dpool/vmdata/vm-150-disk-1@rep_150-15min_2017-12-21_10:34:18 dpool/vmdata/vm-150-disk-1-clone

and then you can mount the desired partition from this block device (if it has several partitions). You can see the partitions on this clone with:


Code:
fdisk -l /dev/zvol/dpool/vmdata/vm-150-disk-1-clone
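As an alternative to working out offsets from fdisk, udev on current ZFS-on-Linux systems usually exposes each partition of a zvol as its own block device named `<zvol>-partN`. A hedged sketch using the clone from above (the partition number is an example):

```shell
# the clone's partitions should show up as -part1, -part2, ...
ls /dev/zvol/dpool/vmdata/vm-150-disk-1-clone*
# mount one partition read-only
mount -o ro /dev/zvol/dpool/vmdata/vm-150-disk-1-clone-part1 /mnt/zfsmountsnap
```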
 
Thank you Guletz for that answer. I have one more question.
You say to use fdisk to see the partitions, will this work with an LVM managed disk?
Separately will fdisk recognize partitions with NTFS? And thus allow mount to mount an NTFS partition?

Thank you again for your help!
Daniel
 
You say to use fdisk to see the partitions, will this work with an LVM managed disk?

Yes. LVM is a volume manager, quite similar to ZFS in that respect.

Separately will fdisk recognize partitions with NTFS?

NTFS is not a partition; it is a file system. fdisk reads the partition table, and in 10 years of using fdisk it has shown me it can read any kind of partition.

And thus allow mount to mount an NTFS partition?

If your system can mount an NTFS file system, then yes, you can.



Note: try to use the correct terms (partition, file system, and so on). Using the wrong terms in a question can result in a wrong answer. Sorry for this ... I was a teacher a long time ago, so I have some bad habits :)
 
Ok, I've got a clone made of a snapshot. I can use fdisk to see the partitions. I'm trying to mount the second partition, but I'm getting errors. Here are the commands and errors.
Code:
root@proxmox:/mnt# fdisk -l /dev/zvol/dpool/vmdata/vm-150-disk-1-clone
Disk /dev/zvol/dpool/vmdata/vm-150-disk-1-clone: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 131072 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: dos
Disk identifier: 0x5006970f

Device                                      Boot   Start      End  Sectors  Size Id Type
/dev/zvol/dpool/vmdata/vm-150-disk-1-clone1 *       2048   999423   997376  487M 83 Linux
/dev/zvol/dpool/vmdata/vm-150-disk-1-clone2      1001470 67106815 66105346 31.5G  5 Extended
/dev/zvol/dpool/vmdata/vm-150-disk-1-clone5      1001472 67106815 66105344 31.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.
root@proxmox:/mnt# mount -t zfs /dev/zvol/dpool/vmdata/vm-150-disk-1-clone2 /mnt/test
filesystem '/dev/zvol/dpool/vmdata/vm-150-disk-1-clone2' cannot be mounted, unable to open the dataset
root@proxmox:/mnt#
root@proxmox:/mnt# mount /dev/zvol/dpool/vmdata/vm-150-disk-1-clone2 /mnt/test
mount: special device /dev/zvol/dpool/vmdata/vm-150-disk-1-clone2 does not exist
root@themis:/mnt#

Thanks again for the help
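For what it's worth: in the fdisk output above, partition 2 is only the extended-partition container, so there is no file system on it to mount; the data lives inside the LVM partition (number 5, type 8e). A hedged sketch of reaching it (the logical-volume path is an example; check the real names with `vgs`/`lvs`):

```shell
# make LVM scan the new clone devices and activate the guest's volume groups
vgscan
vgchange -ay
# inspect the logical volumes, then mount the desired one read-only
lvs
mount -o ro /dev/mapper/vg0-root /mnt/test
```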
 
