HA replication

Hello Wondimu,

May we know more about the disks? Or perhaps get a PVE report? It is possible that only some of the disks are stored on media that allows replication.
The report can be generated by going to Datacenter > Node > Subscription, then clicking System Report and then Download.
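If you prefer the CLI, the same report can be generated with the pvereport tool on the node itself. This is a sketch; the output path is an assumption, and the `command -v` guard is only there so the snippet fails gracefully off a PVE node:

```shell
# CLI counterpart of the GUI "System Report" button (run as root on the PVE node).
# The output path /tmp/... is an arbitrary choice for this example.
out="/tmp/pve-report-$(date +%Y%m%d).txt"
if command -v pvereport >/dev/null 2>&1; then
    pvereport > "$out"
    echo "Report written to $out"
else
    echo "pvereport not found - run this on a Proxmox VE node"
fi
```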
 
I suspect this might be a ZFS-related issue or perhaps a configuration mismatch with the storage backend. Could you please check the replication status and ZFS dataset health first?

You can use the following commands to provide more details:

  1. To check the detailed replication status: pvesr status -v
  2. To verify the ZFS datasets and their locations: zfs list

This will help confirm whether all virtual disks are correctly placed on the ZFS pool and recognized by the replication service.
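When the zfs list output is long, it helps to filter it down to one VM so you can see at a glance which pool each disk lives on. A minimal sketch, using made-up sample dataset names rather than real output:

```shell
# Sample (hypothetical) zfs list output; on a real node you would instead run:
#   zfs list | grep 'vm-135-'
zfs_out='rpool/data/vm-135-disk-0   61.1G  570G  61.1G  -
rpool/data/vm-135-disk-3   57.5G  570G  57.4G  -
zfs-data-1/vm-135-disk-0   3.58T  15.4T 3.58T  -
zfs-data-1/vm-148-disk-0   1.00T  15.4T 1.00T  -'

# Keep only the datasets belonging to VM 135
printf '%s\n' "$zfs_out" | grep 'vm-135-'
```

The leading path component (rpool/data vs. zfs-data-1) tells you which storage backend each disk is on, which matters because replication is only supported on ZFS-backed storages.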
 
On Node-07, job 135 exists at the moment:
pvesr status
JobID Enabled Target LastSync NextSync Duration FailCount State
135-0 Yes local/WSMPVE006NAV 2026-03-06_11:02:26 2026-03-06_11:05:00 80.55692 0 SYNCING
148-0 Yes local/WSMPVE006NAV 2026-03-06_11:00:04 2026-03-06_11:15:00 142.090654 0 OK

On Node-6, 135 is being replicated at the moment:
pvesr status
JobID Enabled Target LastSync NextSync Duration FailCount State
134-0 Yes local/WSMPVE007NAV 2026-03-06_11:00:26 2026-03-06_11:10:00 45.040545 0 SYNCING
147-0 Yes local/WSMPVE007NAV 2026-03-06_11:00:03 2026-03-06_11:30:00 9.451627 0 OK
151-0 Yes local/WSMPVE007NAV 2026-03-06_11:00:13 2026-03-06_11:30:00 13.79764 0 OK



root@WSMPVE007NAV:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 290G 570G 104K /rpool
rpool/ROOT 23.2G 570G 96K /rpool/ROOT
rpool/ROOT/pve-1 23.2G 570G 23.2G /
rpool/data 266G 570G 96K /rpool/data
rpool/data/vm-135-disk-0 61.1G 570G 61.1G -
rpool/data/vm-135-disk-1 88K 570G 88K -
rpool/data/vm-135-disk-2 92K 570G 92K -
rpool/data/vm-135-disk-3 57.5G 570G 57.4G -

zfs-data-1 114T 15.4T 171K /zfs-data-1
zfs-data-1/vm-135-disk-0 3.58T 15.4T 3.58T -
zfs-data-1/vm-135-disk-1 1.83T 15.4T 1.83T -
zfs-data-1/vm-135-disk-10 1.12T 15.4T 1.12T -
zfs-data-1/vm-135-disk-11 958G 15.4T 958G -
zfs-data-1/vm-135-disk-12 299K 15.4T 299K -
zfs-data-1/vm-135-disk-13 355K 15.4T 355K -
zfs-data-1/vm-135-disk-14 355K 15.4T 355K -
zfs-data-1/vm-135-disk-15 476K 15.4T 476K -
zfs-data-1/vm-135-disk-16 9.73T 15.4T 9.73T -
zfs-data-1/vm-135-disk-17 6.55T 15.4T 6.55T -
zfs-data-1/vm-135-disk-18 1.08T 15.4T 1.08T -
zfs-data-1/vm-135-disk-19 1.12T 15.4T 1.12T -
zfs-data-1/vm-135-disk-2 1.14T 15.4T 1.14T -
zfs-data-1/vm-135-disk-20 958G 15.4T 958G -
zfs-data-1/vm-135-disk-21 299K 15.4T 299K -
zfs-data-1/vm-135-disk-22 355K 15.4T 355K -
zfs-data-1/vm-135-disk-23 355K 15.4T 355K -
zfs-data-1/vm-135-disk-24 476K 15.4T 476K -
zfs-data-1/vm-135-disk-25 9.74T 15.4T 9.73T -
zfs-data-1/vm-135-disk-26 2.37T 15.4T 2.37T -
zfs-data-1/vm-135-disk-27 1.04T 15.4T 1.04T -
zfs-data-1/vm-135-disk-28 1.07T 15.4T 1.07T -
zfs-data-1/vm-135-disk-29 1.57T 15.4T 1.57T -
zfs-data-1/vm-135-disk-3 10.3T 15.4T 10.3T -
zfs-data-1/vm-135-disk-30 761K 15.4T 761K -
zfs-data-1/vm-135-disk-31 2.44T 15.4T 2.44T -
zfs-data-1/vm-135-disk-32 861G 15.4T 861G -
zfs-data-1/vm-135-disk-33 658G 15.4T 658G -
zfs-data-1/vm-135-disk-4 1.83T 15.4T 1.83T -
zfs-data-1/vm-135-disk-5 1.58T 15.4T 1.58T -
zfs-data-1/vm-135-disk-6 960K 15.4T 960K -
zfs-data-1/vm-135-disk-7 3.57T 15.4T 3.57T -
zfs-data-1/vm-135-disk-8 658G 15.4T 658G -
zfs-data-1/vm-135-disk-9 1.07T 15.4T 1.07T -


zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 226G 635G 104K /rpool
rpool/ROOT 11.2G 635G 96K /rpool/ROOT
rpool/ROOT/pve-1 11.2G 635G 11.2G /
rpool/data 214G 635G 96K /rpool/data
rpool/data/vm-135-disk-2 92K 635G 92K -
rpool/data/vm-135-disk-3 57.4G 635G 57.4G -

zfs-data-1 76.4T 52.8T 171K /zfs-data-1
zfs-data-1/vm-135-disk-17 6.55T 52.8T 6.55T -
zfs-data-1/vm-135-disk-18 1.08T 52.8T 1.08T -
zfs-data-1/vm-135-disk-19 1.12T 52.8T 1.12T -
zfs-data-1/vm-135-disk-20 958G 52.8T 958G -
zfs-data-1/vm-135-disk-21 299K 52.8T 299K -
zfs-data-1/vm-135-disk-22 355K 52.8T 355K -
zfs-data-1/vm-135-disk-23 355K 52.8T 355K -
zfs-data-1/vm-135-disk-24 476K 52.8T 476K -
zfs-data-1/vm-135-disk-25 9.73T 52.8T 9.73T -
zfs-data-1/vm-135-disk-26 2.37T 52.8T 2.37T -
zfs-data-1/vm-135-disk-27 1.04T 52.8T 1.04T -
zfs-data-1/vm-135-disk-28 1.07T 52.8T 1.07T -
zfs-data-1/vm-135-disk-29 1.57T 52.8T 1.57T -
zfs-data-1/vm-135-disk-30 761K 52.8T 761K -
zfs-data-1/vm-135-disk-31 2.44T 52.8T 2.44T -
zfs-data-1/vm-135-disk-32 861G 52.8T 861G -
zfs-data-1/vm-135-disk-33 658G 52.8T 658G -
 
Hi Wondimu,

Is it possible that these specific disks have the "Skip replication" option checked? If this option is enabled, the disk will be excluded from the ZFS replication job.

You can verify the configuration for VM 135 via CLI with:

qm config 135

Check the disk lines (e.g., scsi0, virtio0). If you see replicate=0 in the string, that's the culprit. Example:

scsi0: zfs-data-1:vm-135-disk-14,replicate=0,size=15.4T

If that is the case, you can simply uncheck the box in the Hardware GUI or remove the parameter from the config file.
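To spot the affected disks quickly, you can grep the config for the flag. The config excerpt below is a hypothetical sample, not taken from this thread:

```shell
# Hypothetical excerpt of `qm config 135` output (sample data only).
cfg='scsi0: zfs-data-1:vm-135-disk-0,size=4T
scsi1: zfs-data-1:vm-135-disk-14,replicate=0,size=1T
scsi2: zfs-data-1:vm-135-disk-16,replicate=0,size=10T'

# On a real node: qm config 135 | grep 'replicate=0'
printf '%s\n' "$cfg" | grep 'replicate=0'
```

Every line this prints is a disk that the replication job will skip.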
 