Hello,
I'm trying to clone a new CT #102 (from CT #139) from node A to node B. I created a snapshot and then selected the target node in the clone options.
The issue I'm having is that the config file is now on node B, but the ZFS dataset still resides on node A. Starting the container results in an error since the dataset does not exist on node B.
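This is how I checked (a rough sketch; the pool path rpool/data and the cloned dataset name subvol-102-disk-1 are assumptions based on the default local-zfs layout and the clone log below):

Code:
# On node B, where the config file ended up, the dataset is missing:
zfs list -r rpool/data | grep subvol-102
# (no output)

# On node A, the cloned dataset exists:
zfs list rpool/data/subvol-102-disk-1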
Here are the cloning logs:
Code:
create full clone of mountpoint rootfs (local-zfs:subvol-139-disk-1)
Number of files: 1,678,763 (reg: 1,471,039, dir: 200,545, link: 7,136, dev: 2, special: 41)
Number of created files: 1,678,762 (reg: 1,471,039, dir: 200,544, link: 7,136, dev: 2, special: 41)
Number of deleted files: 0
Number of regular files transferred: 1,471,019
Total file size: 39,694,588,189 bytes
Total transferred file size: 39,592,003,893 bytes
Literal data: 39,592,003,893 bytes
Matched data: 0 bytes
File list size: 90,355,293
File list generation time: 0.003 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 39,744,331,560
Total bytes received: 29,098,133
sent 39,744,331,560 bytes received 29,098,133 bytes 10,377,933.38 bytes/sec
total size is 39,694,588,189 speedup is 1.00
TASK OK
Both hosts are on the same version:
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.13: 5.1-44
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-3-pve: 4.13.13-34
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.4-1-pve: 4.13.4-26
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9
I can reproduce the issue every time.
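I assume I could work around it by sending the dataset over manually, along these lines (a sketch only; pool path and dataset name are assumptions, and nodeB stands for node B's hostname), but the clone operation should handle this itself:

Code:
zfs snapshot rpool/data/subvol-102-disk-1@move
zfs send -R rpool/data/subvol-102-disk-1@move | ssh nodeB zfs recv rpool/data/subvol-102-disk-1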
Any idea what might cause this?
Thanks