No replicable volumes found?

I'm trying to replicate a VM from one node to another but get the above message. I've searched the forums, and the usual fix seems to be to make sure the VM's volumes aren't marked with "replicate=0". I don't think that's the case here; please take a look at the configuration in /etc/pve/qemu-server/103.conf:

Code:
boot: order=scsi0;ide2;net0
cores: 2
ide2: none,media=cdrom
memory: 20480
name: SME10
net0: virtio=DA:34:09:6F:DB:38,bridge=vmbr1,firewall=1,link_down=1
net1: e1000=6A:D3:24:DF:42:79,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-102-disk-0,size=512G
scsihw: virtio-scsi-pci
smbios1: uuid=9d410adc-5bcd-439a-94f0-514185fcd010
sockets: 1
startup: up=10
vmgenid: 02d055f1-1124-4500-a09a-b7f3da050546

What might be the problem here?
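
For reference, a replication job for this guest can be created and checked from the CLI roughly like this (the target node name "nodeB" below is just a placeholder for my second node):

Code:
# create replication job 103-0 towards the other node, running every 15 minutes
pvesr create-local-job 103-0 nodeB --schedule '*/15'
# show the state of all replication jobs on this node
pvesr status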
 
Hi,
please post the output of pveversion -v and pvesh get /storage/local-zfs --output-format json-pretty. Could the problem be the mismatch between the VM ID (103) and the name of the disk (vm-102-disk-0)?
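
To double-check which volumes actually exist on that storage, you could also run something like this on the node (output will of course differ):

Code:
# list all guest volumes the local-zfs storage knows about
pvesm list local-zfs
# or inspect the ZFS volumes directly (rpool/data is the default pool for local-zfs)
zfs list -r -t volume rpool/data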
 
Thanks a lot for replying. You're right, there's a mismatch between the VM ID and the disk name. I will get this fixed next time I shut down the node. Below is the output you asked for.

Code:
root@sonja:~# pveversion -v | column
proxmox-ve: 7.2-1 (running kernel: 5.15.60-2-pve)       libjs-extjs: 7.0.0-1                                    proxmox-widget-toolkit: 3.5.1
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)  libknet1: 1.24-pve1                                     pve-cluster: 7.2-2
pve-kernel-helper: 7.2-13                               libproxmox-acme-perl: 1.4.2                             pve-container: 4.2-2
pve-kernel-5.15: 7.2-12                                 libproxmox-backup-qemu0: 1.3.1-1                        pve-docs: 7.2-2
pve-kernel-5.13: 7.1-9                                  libpve-access-control: 7.2-4                            pve-edk2-firmware: 3.20220526-1
pve-kernel-5.11: 7.0-10                                 libpve-apiclient-perl: 3.2-1                            pve-firewall: 4.2-6
pve-kernel-5.15.60-2-pve: 5.15.60-2                     libpve-common-perl: 7.2-3                               pve-firmware: 3.5-4
pve-kernel-5.15.60-1-pve: 5.15.60-1                     libpve-guest-common-perl: 4.1-3                         pve-ha-manager: 3.4.0
pve-kernel-5.13.19-6-pve: 5.13.19-15                    libpve-http-server-perl: 4.1-4                          pve-i18n: 2.7-2
pve-kernel-5.13.19-2-pve: 5.13.19-4                     libpve-storage-perl: 7.2-10                             pve-qemu-kvm: 7.0.0-3
pve-kernel-5.11.22-7-pve: 5.11.22-12                    libspice-server1: 0.14.3-2.1                            pve-xtermjs: 4.16.0-1
pve-kernel-5.11.22-1-pve: 5.11.22-2                     lvm2: 2.03.11-2.1                                       qemu-server: 7.2-4
ceph-fuse: 15.2.13-pve1                                 lxc-pve: 5.0.0-3                                        smartmontools: 7.2-pve3
corosync: 3.1.5-pve2                                    lxcfs: 4.0.12-pve1                                      spiceterm: 3.2-2
criu: 3.15-1+pve-1                                      novnc-pve: 1.3.0-3                                      swtpm: 0.7.1~bpo11+1
glusterfs-client: 9.2-1                                 proxmox-backup-client: 2.2.7-1                          vncterm: 1.7-1
ifupdown2: 3.1.0-1+pmx3                                 proxmox-backup-file-restore: 2.2.7-1                    zfsutils-linux: 2.1.6-pve1
ksm-control-daemon: 1.4-1                               proxmox-mini-journalreader: 1.3-1
Code:
root@sonja:~# pvesh get /storage/local-zfs --output-format json-pretty
{
   "content" : "rootdir,images",
   "digest" : "aeba625d2db60125f2914549b9ea68b02afac6dd",
   "pool" : "rpool/data",
   "sparse" : 1,
   "storage" : "local-zfs",
   "type" : "zfspool"
}
 
I checked the code, and yes, replication checks that the disk name matches the VM ID. The node needs to be running if you want to execute the rename command ;) But the VM needs to be shut down, of course.
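
With the zfspool layout from your pvesh output, the rename itself would be along these lines (run on the node, with the VM powered off first):

Code:
# power off the guest; the node itself stays up
qm shutdown 103
# rename the ZFS volume so its name matches the VM ID
zfs rename rpool/data/vm-102-disk-0 rpool/data/vm-103-disk-0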
 
I tried to rename:

  1. Shut down the VM
  2. Code:
    zfs rename rpool/data/vm-102-disk-0 rpool/data/vm-103-disk-0
I then tried to edit the hardware configuration for VM 103 in the GUI, but I wasn't able to change Hard Disk (scsi0). Must I detach the hard disk "local-zfs:vm-102-disk-0" and add a new virtual disk (local-zfs:vm-103-disk-0) instead?

 
You can either use qm set 103 --scsi0 local-zfs:vm-103-disk-0 (please check that the Boot Order setting under the Options tab still contains scsi0 afterwards), or you can manually edit the config file.
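
If you go the manual route, the scsi0 line in /etc/pve/qemu-server/103.conf just needs to point at the renamed volume, e.g.:

Code:
scsi0: local-zfs:vm-103-disk-0,size=512G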
 
