pve-zsync vm include no disk on zfs

smizzio77

Hello everybody.
I have a Proxmox server in my office with several virtual machines, and my goal is to back them up to another Proxmox server at a different location.
When I try to create the backup job:
pve-zsync create --source 192.168.202.240:2005 --dest x.x.x.x:bcktest --maxsnap 7 --verbose --name 2005 -limit 512

I get this error:
Disk: "scsi0: vmdir2:2005/vm-2005-disk-0.raw,size=50G" has no valid zfs dataset format and will be skipped
Disk: "scsi1: vmdir2:2005/vm-2005-disk-1.raw,size=100G" has no valid zfs dataset format and will be skipped
Vm include no disk on zfs.

I checked my office server with zfs list and got:
zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank2 820G 622G 384K /tank2
tank2/vmdata2 384K 622G 384K /tank2/vmdata2
tank2/vmdir2 819G 622G 819G /tank2/vmdir2

I checked the situation on the remote server and got this:
zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
bcktest 464G 540K 464G - - 0% 0% 1.00x ONLINE -
root@pve:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 783M 8.9M 775M 2% /run
/dev/mapper/pve-root 28G 2.0G 24G 8% /
tmpfs 3.9G 43M 3.8G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/fuse 30M 16K 30M 1% /etc/pve
bcktest 450G 128K 450G 1% /bcktest
tmpfs 783M 0 783M 0% /run/user/0

Can anyone help me?
 
Hi,
pve-zsync only works with ZFS virtual block devices (zvols). What you have is a ZFS filesystem containing raw files that represent block devices. Those files are not ZFS datasets themselves and cannot be managed as such. If you do not create a directory (within ZFS) as a storage, but use the ZFS directly for VM images, it should work.
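For a quick check: a disk that pve-zsync can handle is a zvol and shows up under zfs list -t volume, while a disk image on a directory storage is only a file on the ZFS filesystem. The file path below is just an example based on the storage names shown above; adjust it to your directory storage's actual path.

Code:
# zvols (ZFS virtual block devices) that pve-zsync can sync:
zfs list -t volume

# a raw image on a directory storage is just a file, not a dataset
# (path assumed from the storage above; yours may differ):
ls -lh /tank2/vmdir2/images/2005/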
 
In the GUI, under Datacenter > Storage > Add > ZFS, you can add an existing ZFS dataset as a storage directly. By default, the content type Disk image should already be selected. Then, from the Hardware view of your VM, you can use Move disk with the ZFS storage as the target.
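The same can also be done on the CLI. A rough sketch, assuming the dataset names from the zfs list output above and an arbitrary storage ID of tank2-vm (both just examples):

Code:
# add the ZFS dataset as a zfspool storage (storage ID "tank2-vm" is made up here)
pvesm add zfspool tank2-vm --pool tank2/vmdata2 --content images

# move the VM's disks onto it; Move disk converts the raw files into zvols
qm move_disk 2005 scsi0 tank2-vm
qm move_disk 2005 scsi1 tank2-vm

Afterwards the disks appear as ZFS volumes (without the .raw suffix), and pve-zsync should pick them up.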
 
Sorry to revive an old thread, but I am having a similar issue:

root@backup1 ~ # pve-zsync sync --source 172.16.1.1:200 --dest rpool/data/Daily --name pve-zsync-daily --maxsnap 4 --method ssh
Job --source 172.16.1.1:200 --name pve-zsync-daily got an ERROR!!!
ERROR Message:
Vm include no disk on zfs.

The VM is created on a ZFS storage. I tried creating a new ZFS storage and moving the VM's disk to it, but I still get the same issue.
 
Hi,
could you share the VM configuration qm config 200, the output of pvesh get /storage/<your zfs storage ID> and the output of pveversion -v?
 
Thanks for replying,

Code:
agent: 1
bootdisk: virtio0
cores: 2
cpu: kvm64,flags=+aes
description: nom machine%3A LOGIKUTCH
keyboard: fr
localtime: 1
memory: 4096
name: logikutch.eec31.local
net0: virtio=3A:FE:2A:BE:1C:2B,bridge=vmbr31,firewall=1
numa: 0
onboot: 1
ostype: win7
parent: migration
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=fc1e57ce-a3b6-4e58-aaf5-0e33adf5af4d
sockets: 2
vga: virtio
virtio0: data:vm-200-disk-2,backup=0,cache=writeback,discard=on,size=100G
vmgenid: 802c7a47-9291-4586-8896-1ee15eb5f9a3

Code:
root@proxmox-1 /home/sam # pvesh get /storage/data
┌────────────┬──────────────────────────────────────────┐
│ key        │ value                                    │
╞════════════╪══════════════════════════════════════════╡
│ content    │ images,rootdir                           │
├────────────┼──────────────────────────────────────────┤
│ digest     │ cc29e257afa6e58f527de0666f58098f2173bbf2 │
├────────────┼──────────────────────────────────────────┤
│ mountpoint │ /rpool/data                              │
├────────────┼──────────────────────────────────────────┤
│ pool       │ rpool/data                               │
├────────────┼──────────────────────────────────────────┤
│ sparse     │ 1                                        │
├────────────┼──────────────────────────────────────────┤
│ storage    │ data                                     │
├────────────┼──────────────────────────────────────────┤
│ type       │ zfspool                                  │
└────────────┴──────────────────────────────────────────┘

Code:
 pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.114-1-pve)
pve-manager: 6.4-8 (running version: 6.4-8/185e14db)
pve-kernel-5.4: 6.4-2
pve-kernel-helper: 6.4-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksmtuned: 4.20150325+b1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-1
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
pve-zsync: 2.2
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
I think the problem is that the backup=0 flag is set on the disk, so pve-zsync skips it. But the output/warning could of course be improved.
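If that is indeed the cause, re-enabling backup for the disk should be enough. A sketch, re-using the virtio0 line from the qm config output above; double-check the options against your current qm config 200 before running it:

Code:
# re-specify the disk with backup=1 (the default) instead of backup=0
qm set 200 --virtio0 data:vm-200-disk-2,backup=1,cache=writeback,discard=on,size=100G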
 
