Proxmox replication failed

Blisk

Member
Apr 16, 2022
I am trying to set up replication from one node to another.
When I run the replication job I get this error:


Replication Log

2024-03-31 17:08:02 111-0: start replication job
2024-03-31 17:08:02 111-0: guest => VM 111, running => 2342099
2024-03-31 17:08:02 111-0: volumes => DISK10TB:vm-111-disk-0,SAS3TB:vm-111-disk-0,SAS4TB:vm-111-disk-0
2024-03-31 17:08:04 111-0: (remote_prepare_local_job) zfs error: cannot open 'arhiv10tb': no such pool
2024-03-31 17:08:04 111-0: (remote_prepare_local_job)
2024-03-31 17:08:04 111-0: (remote_prepare_local_job) zfs error: cannot open 'arhiv10tb': no such pool
2024-03-31 17:08:04 111-0: (remote_prepare_local_job)
2024-03-31 17:08:04 111-0: (remote_prepare_local_job) could not activate storage 'DISK10TB', zfs error: cannot import 'arhiv10tb': no such pool available
2024-03-31 17:08:04 111-0: (remote_prepare_local_job)
2024-03-31 17:08:04 111-0: end replication job with error: command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=yourtop' root@192.168.0.222 -- pvesr prepare-local-job 111-0 DISK10TB:vm-111-disk-0 SAS3TB:vm-111-disk-0 SAS4TB:vm-111-disk-0 --last_sync 0' failed: exit code



When I try to import the pool I get this error:
Code:
root@pve:~# zpool import arhiv10tb
cannot import 'arhiv10tb': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name

root@pve:~# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
SAS3TB     2.72T   313G  2.41T        -         -     0%    11%  1.00x  ONLINE  -
SAS4TB     3.62T  1.93T  1.70T        -         -     0%    53%  1.00x  ONLINE  -
arhiv10tb  9.09T  1.19M  9.09T        -         -     0%     0%  1.00x  ONLINE  -
 
The arhiv10tb pool must also exist on your destination node. A replication job requires that the pool layout on your source is the same as on your destination.
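To check this quickly, you can compare the pools on both sides. A minimal sketch, using the node alias and IP that appear in your replication log:

Code:
# on the source node (pve)
zpool list arhiv10tb

# on the destination node (yourtop)
ssh root@192.168.0.222 zpool list arhiv10tb

If the second command reports "no such pool", the pool simply does not exist on the destination yet.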
Sorry, I really don't know how to do that. I checked all the options and couldn't find where or how.
 
Could you post a screenshot of the disks from the first node?

And please post the result of cat /etc/pve/storage.cfg in [CODE][/CODE] tags, too.
 
This is the first node:
Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

cifs: mybackup
        path /mnt/pve/mybackup
        server 192.168.0.234
        share backup
        content images,backup
        domain NAS165CEE
        prune-backups keep-all=1
        username backup

zfspool: SAS3TB
        pool SAS3TB
        content images,rootdir
        sparse 0

zfspool: SAS4TB
        pool SAS4TB
        content rootdir,images
        nodes pve,yourtop
        sparse 0

zfspool: DISK10TB
        pool arhiv10tb
        content rootdir,images
        nodes yourtop,pve
        sparse 0

zfspool: disk2Tb
        pool disk2Tb
        content rootdir,images
        mountpoint /disk2Tb
        nodes yourtop,pve
        sparse 0

zfspool: disk8tb
        pool disk8tb
        content rootdir,images
        mountpoint /disk8tb
        nodes pve,yourtop
        sparse 0

zfspool: arhiv10tb
        pool disk8tb
        content images,rootdir
        mountpoint /disk8tb
        nodes yourtop
        sparse 0

This is the second node:
Code:
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

cifs: mybackup
        path /mnt/pve/mybackup
        server 192.168.0.234
        share backup
        content images,backup
        domain NAS165CEE
        prune-backups keep-all=1
        username backup

zfspool: SAS3TB
        pool SAS3TB
        content images,rootdir
        sparse 0

zfspool: SAS4TB
        pool SAS4TB
        content rootdir,images
        nodes pve,yourtop
        sparse 0

zfspool: DISK10TB
        pool arhiv10tb
        content rootdir,images
        nodes yourtop,pve
        sparse 0

zfspool: disk2Tb
        pool disk2Tb
        content rootdir,images
        mountpoint /disk2Tb
        nodes yourtop,pve
        sparse 0

zfspool: disk8tb
        pool disk8tb
        content rootdir,images
        mountpoint /disk8tb
        nodes pve,yourtop
        sparse 0

zfspool: arhiv10tb
        pool disk8tb
        content images,rootdir
        mountpoint /disk8tb
        nodes yourtop
        sparse 0
[screenshot of the disks attached]
 
Though I'm no expert on the specifics of Proxmox configuration, your storage.cfg looks quite odd to me. There appear to be some mixed-up entries. For example:
Code:
zfspool: DISK10TB
        pool arhiv10tb
        content rootdir,images
        nodes yourtop,pve
        sparse 0
and
Code:
zfspool: arhiv10tb
        pool disk8tb
        content images,rootdir
        mountpoint /disk8tb
        nodes yourtop
        sparse 0

So, if I'm reading this correctly, you have an entry called DISK10TB in your Proxmox configuration pointing to the arhiv10tb pool, and an entry called arhiv10tb pointing to disk8tb. Then, to make it even weirder, you have
Code:
zfspool: disk8tb
        pool disk8tb
        content rootdir,images
        mountpoint /disk8tb
        nodes pve,yourtop
        sparse 0
which is an entry called disk8tb pointing to that same disk8tb pool, so two storage IDs (disk8tb and arhiv10tb) end up referencing one pool, while no entry named after your actual arhiv10tb pool points at it.
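For what it's worth, I'd expect each storage ID to match the pool it points to. A sketch of that pattern only, not a drop-in config (renaming a storage entry also means updating every VM disk reference that uses the old ID):

Code:
zfspool: arhiv10tb
        pool arhiv10tb
        content images,rootdir
        nodes pve
        sparse 0

And judging by your zpool list output, the arhiv10tb pool currently exists only on pve, so you would either create a matching pool on yourtop or restrict the entry to the node that actually has it, as in the nodes line above.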
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!