Hello,
I have two nodes. The second node is running the following versions:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.64-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-5.15: 7.2-13
pve-kernel-helper: 7.2-13
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-4
libpve-guest-common-perl: 4.1-4
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-3
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-5
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
I went to Datacenter -> Storage and created another ZFS storage (local-zfs2). While there, I also edited local-zfs to be restricted to node1 and local-zfs2 to node2. The idea is to have local-zfs for node1 and local-zfs2 for node2.
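If it helps, this is roughly what I believe that amounts to on the command line (just a sketch of what the GUI does, the exact pvesm calls may differ):

Bash:
# restrict the existing ZFS storage to node1
pvesm set local-zfs --nodes node1
# add the second ZFS storage and restrict it to node2
pvesm add zfspool local-zfs2 --pool rpool/data --nodes node2 --content images,rootdir --sparse 1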
I executed the following on node1:
Bash:
# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:06 with 0 errors on Sun Nov 13 00:24:07 2022
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.002538cb61003c21-part3  ONLINE       0     0     0
            nvme-eui.002538cb61003c1a-part3  ONLINE       0     0     0

errors: No known data errors
Bash:
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 476G 4.36G 472G - - 0% 0% 1.00x ONLINE -
I executed the following on node2:
Bash:
# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes node1
        sparse 1

nfs: nfs1
        export /srv/nfs/mycluster
        path /mnt/pve/nfs1
        server IPADDRESS
        content backup,snippets,iso,images,vztmpl,rootdir
        prune-backups keep-all=1

pbs: ANOTHERSERVER1
        datastore MYDATASTORE
        server IPADDRESS
        content backup
        encryption-key aa:aa:aa:aa:aa:a...
        fingerprint aa:aa:aa:aa:a...
        prune-backups keep-all=1
        username MYUSERNAME@pbs

zfspool: local-zfs2
        pool rpool/data
        content images,rootdir
        mountpoint /rpool/data
        nodes node2
        sparse 1
Bash:
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.76G   459G   104K  /rpool
rpool/ROOT        1.75G   459G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.75G   459G  1.75G  /
rpool/data          96K   459G    96K  /rpool/data
Bash:
# zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.00080d02001d061a-part3  ONLINE       0     0     0
            nvme-eui.0025388b91d66706-part3  ONLINE       0     0     0

errors: No known data errors
Bash:
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 476G 1.76G 474G - - 0% 0% 1.00x ONLINE -
But when I now, for example, select a template or VM on node1, click Clone and set Target Node: node2, the target storage dropdown only shows the NFS storage (nfs1), not local-zfs2. However, if I migrate a VM to node2 first and then clone it there, I can select local-zfs2. Is this a bug, or is it simply not possible to clone a VM to another node and put the disk on that node's own local-zfs2 pool?
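For completeness, the CLI version of what I am trying to achieve would be something like the following (VMIDs 100 and 200 are only placeholders, and I am assuming --target/--storage are the right options here):

Bash:
# full clone of VM 100 from node1 to node2, placing the new disk on local-zfs2
qm clone 100 200 --full --target node2 --storage local-zfs2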