[SOLVED] VM migration with ZFS creates the dataset twice

Hello,

I am seeing the following phenomenon on a cluster (PVE 6.4-8, freshly installed yesterday).

When I migrate a VM from one node to the other, an additional dataset is created in ZFS on the target system.
With every migration the number of datasets doubles (1 -> 2, 2 -> 4, 4 -> 8, and so on).
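A quick way to see the doubling is to count the zvols belonging to the VM on each node after a round trip; a minimal check, assuming VMID 221 as in the listings below:
Code:
# count how many zvols exist for VM 221 on this node
zfs list -H -o name -t volume | grep -c 'vm-221-disk'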

VM config
Code:
root@web-px1:~# qm config 221
boot: order=scsi0;ide2;net0
cores: 4
ide2: isos_templates:iso/ubuntu-20.04.2-live-server-amd64.iso,media=cdrom
memory: 4096
name: vmtest
net0: virtio=66:DB:B9:18:65:3B,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: vms:vm-221-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=f66b03ea-f762-4df5-af5e-e448b9bfdb5f
sockets: 1
vmgenid: 585dfb19-68be-411c-a2d6-2dab52af7dae

ZFS dataset
Code:
root@web-px1:~# zfs list -t all
NAME                       USED  AVAIL     REFER  MOUNTPOINT
...
rpool/data/vm-221-disk-0    56K  3.51T       56K  -

Migration to the second node
Code:
root@web-px1:~# qm migrate 221 web-px2 --with-local-disks
can't migrate running VM without --online
root@web-px1:~# qm migrate 221 web-px2 --with-local-disks --online
2021-06-25 12:49:41 starting migration of VM 221 to node 'web-px2' (172.17.2.112)
2021-06-25 12:49:41 found local disk 'container:vm-221-disk-0' (via storage)
2021-06-25 12:49:41 found local disk 'vms:vm-221-disk-0' (in current VM config)
2021-06-25 12:49:41 copying local disk images
2021-06-25 12:49:41 using a bandwidth limit of 104857600 bps for transferring 'container:vm-221-disk-0'
2021-06-25 12:49:42 full send of rpool/data/vm-221-disk-0@__migration__ estimated size is 29.1K
2021-06-25 12:49:42 total estimated size is 29.1K
2021-06-25 12:49:44 successfully imported 'container:vm-221-disk-0'
2021-06-25 12:49:44 volume 'container:vm-221-disk-0' is 'container:vm-221-disk-0' on the target
2021-06-25 12:49:44 starting VM 221 on remote node 'web-px2'
2021-06-25 12:49:47 volume 'vms:vm-221-disk-0' is 'vms:vm-221-disk-1' on the target
2021-06-25 12:49:47 start remote tunnel
2021-06-25 12:49:47 ssh tunnel ver 1
2021-06-25 12:49:47 starting storage migration
2021-06-25 12:49:47 scsi0: start migration to nbd:unix:/run/qemu-server/221_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0 with bandwidth limit: 102400 KB/s
drive-scsi0: transferred 0.0 B of 32.0 GiB (0.00%) in 0s
...

Dataset on the target host:
Code:
root@web-px2:~# zfs list -t all
NAME                       USED  AVAIL     REFER  MOUNTPOINT
...
rpool/data/vm-221-disk-0    56K  3.51T       56K  -
rpool/data/vm-221-disk-1    56K  3.51T       56K  -


Migration back to the first node:
Code:
root@web-px2:~# qm migrate 221 web-px1 --with-local-disks --online
2021-06-25 12:56:38 starting migration of VM 221 to node 'web-px1' (172.17.2.111)
2021-06-25 12:56:38 found local disk 'container:vm-221-disk-0' (via storage)
2021-06-25 12:56:38 found local disk 'container:vm-221-disk-1' (via storage)
2021-06-25 12:56:38 found local disk 'vms:vm-221-disk-0' (via storage)
2021-06-25 12:56:38 found local disk 'vms:vm-221-disk-1' (in current VM config)
2021-06-25 12:56:38 copying local disk images
2021-06-25 12:56:38 using a bandwidth limit of 104857600 bps for transferring 'container:vm-221-disk-0'
2021-06-25 12:56:40 full send of rpool/data/vm-221-disk-0@__migration__ estimated size is 29.1K
2021-06-25 12:56:40 total estimated size is 29.1K
2021-06-25 12:56:41 successfully imported 'container:vm-221-disk-0'
2021-06-25 12:56:41 volume 'container:vm-221-disk-0' is 'container:vm-221-disk-0' on the target
2021-06-25 12:56:41 using a bandwidth limit of 104857600 bps for transferring 'container:vm-221-disk-1'
2021-06-25 12:56:42 full send of rpool/data/vm-221-disk-1@__migration__ estimated size is 29.1K
2021-06-25 12:56:42 total estimated size is 29.1K
2021-06-25 12:56:44 successfully imported 'container:vm-221-disk-1'
2021-06-25 12:56:44 volume 'container:vm-221-disk-1' is 'container:vm-221-disk-1' on the target
2021-06-25 12:56:44 using a bandwidth limit of 104857600 bps for transferring 'vms:vm-221-disk-0'
2021-06-25 12:56:45 full send of rpool/data/vm-221-disk-0@__migration__ estimated size is 29.1K
2021-06-25 12:56:45 total estimated size is 29.1K
2021-06-25 12:56:45 volume 'rpool/data/vm-221-disk-0' already exists - importing with a different name
2021-06-25 12:56:46 successfully imported 'vms:vm-221-disk-2'
2021-06-25 12:56:46 volume 'vms:vm-221-disk-0' is 'vms:vm-221-disk-2' on the target
2021-06-25 12:56:46 starting VM 221 on remote node 'web-px1'
2021-06-25 12:56:49 volume 'vms:vm-221-disk-1' is 'vms:vm-221-disk-3' on the target
2021-06-25 12:56:49 start remote tunnel
2021-06-25 12:56:50 ssh tunnel ver 1
2021-06-25 12:56:50 starting storage migration
2021-06-25 12:56:50 scsi0: start migration to nbd:unix:/run/qemu-server/221_nbd.migrate:exportname=drive-scsi0
drive mirror is starting for drive-scsi0 with bandwidth limit: 102400 KB/s
drive-scsi0: transferred 0.0 B of 32.0 GiB (0.00%) in 0s
...

Dataset on the first node
Code:
root@web-px1:~# zfs list -t all
NAME                       USED  AVAIL     REFER  MOUNTPOINT
...
rpool/data/vm-221-disk-0    56K  3.51T       56K  -
rpool/data/vm-221-disk-1    56K  3.51T       56K  -
rpool/data/vm-221-disk-2    56K  3.51T       56K  -
rpool/data/vm-221-disk-3    56K  3.51T       56K  -

VM config
Code:
root@web-px1:~# qm config 221
boot: order=scsi0;ide2;net0
cores: 4
ide2: isos_templates:iso/ubuntu-20.04.2-live-server-amd64.iso,media=cdrom
memory: 4096
name: vmtest
net0: virtio=66:DB:B9:18:65:3B,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: vms:vm-221-disk-3,format=raw,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=f66b03ea-f762-4df5-af5e-e448b9bfdb5f
sockets: 1
vmgenid: 585dfb19-68be-411c-a2d6-2dab52af7dae
 
What does the file /etc/pve/storage.cfg look like?
 
2021-06-25 12:49:41 found local disk 'container:vm-221-disk-0' (via storage)
2021-06-25 12:49:41 found local disk 'vms:vm-221-disk-0' (in current VM config)
I suspect that the same storage is added twice, so every image is found twice (and also migrated twice).

Solution -> add each storage only once.
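One way to confirm such an overlap (a sketch, assuming the storage names 'container' and 'vms' from the log above) is to look for multiple storage definitions pointing at the same ZFS pool:
Code:
# all storage definitions that point at the same ZFS pool
grep -n 'pool rpool/data' /etc/pve/storage.cfg

# two storages backed by the same pool also report identical total/used/available
pvesm status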
 
Hello,

the storage.cfg looked like this:

Code:
root@web-px1:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

zfspool: local-zfs
        disable
        pool rpool/data
        content rootdir,images
        sparse 1

zfspool: container
        pool rpool/data
        content rootdir
        mountpoint /rpool/data
        sparse 1

zfspool: vms
        pool rpool/data
        content images
        mountpoint /rpool/data
        sparse 1

nfs: isos_templates
        export /data/exports/proxmox/templates_isos
        path /mnt/pve/isos_templates
        server xxxxxxxxxxxxxxxxx
        content vztmpl,iso
        prune-backups keep-all=1

After I disabled the zfspool 'container', it works as expected again.
I was not aware that this is a problem.

Thanks!
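For reference, a sketch of what the disabled entry would look like in /etc/pve/storage.cfg, using the same `disable` flag the local-zfs entry already carries:
Code:
zfspool: container
        disable
        pool rpool/data
        content rootdir
        mountpoint /rpool/data
        sparse 1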
 
