ZFS Replication does full instead of incremental

romainr

Member
Nov 22, 2021
Hello

We are encountering difficulties with ZFS replication between two hosts.

Some replications randomly go into an error state because Proxmox tries to send a full sync even though it should be incremental.
We have already tried deleting the replication job and the disk on the destination and re-creating the job, without success (it does the first full sync and after some time it decides to do another full one).
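If it helps: as far as I understand, an incremental send needs a base snapshot with the same GUID on both sides, so something like the following should show whether a common base still exists (the dataset name and peer host below are examples from our setup):

Code:
# Replication snapshots and their GUIDs on the source...
zfs list -H -p -t snapshot -s creation -o name,guid local-zfs/vm-1039-disk-0
# ...and on the destination (hostname is an example):
ssh root@pve-int-02 zfs list -H -p -t snapshot -s creation -o name,guid local-zfs/vm-1039-disk-0
# If no GUID shows up in both lists, there is no common base and Proxmox has to fall back to a full sync.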

Code:
2021-11-22 11:17:02 1039-0: start replication job
2021-11-22 11:17:02 1039-0: guest => VM 1039, running => 2693527
2021-11-22 11:17:02 1039-0: volumes => local-zfs:vm-1039-disk-0
2021-11-22 11:17:03 1039-0: delete stale replication snapshot '__replicate_1039-0_1637575917__' on local-zfs:vm-1039-disk-0
2021-11-22 11:17:04 1039-0: (remote_prepare_local_job) delete stale replication snapshot '__replicate_1039-0_1637575917__' on local-zfs:vm-1039-disk-0
2021-11-22 11:17:04 1039-0: freeze guest filesystem
2021-11-22 11:17:04 1039-0: create snapshot '__replicate_1039-0_1637576222__' on local-zfs:vm-1039-disk-0
2021-11-22 11:17:04 1039-0: thaw guest filesystem
2021-11-22 11:17:04 1039-0: using insecure transmission, rate limit: none
2021-11-22 11:17:04 1039-0: full sync 'local-zfs:vm-1039-disk-0' (__replicate_1039-0_1637576222__)
2021-11-22 11:17:07 1039-0: full send of local-zfs/vm-1039-disk-0@__replicate_1039-0_1637576222__ estimated size is 2.31G
2021-11-22 11:17:07 1039-0: total estimated size is 2.31G
2021-11-22 11:17:07 1039-0: warning: cannot send 'local-zfs/vm-1039-disk-0@__replicate_1039-0_1637576222__': Broken pipe
2021-11-22 11:17:07 1039-0: cannot send 'local-zfs/vm-1039-disk-0': I/O error
2021-11-22 11:17:07 1039-0: command 'zfs send -Rpv -- local-zfs/vm-1039-disk-0@__replicate_1039-0_1637576222__' failed: exit code 1
2021-11-22 11:17:07 1039-0: [pve-int-02] volume 'local-zfs/vm-1039-disk-0' already exists
2021-11-22 11:17:07 1039-0: delete previous replication snapshot '__replicate_1039-0_1637576222__' on local-zfs:vm-1039-disk-0
2021-11-22 11:17:07 1039-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-1039-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_1039-0_1637576222__' failed: exit code 1

The two hosts are on the same version:
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-1-pve)
pve-manager: 7.1-4 (running version: 7.1-4/ca457116)
pve-kernel-5.13: 7.1-4
pve-kernel-helper: 7.1-4
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-1-pve: 5.13.19-2
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.11.22-1-pve: 5.11.22-2
ceph-fuse: 15.2.13-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
openvswitch-switch: 2.15.0+ds1-2
proxmox-backup-client: 2.0.14-1
proxmox-backup-file-restore: 2.0.14-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.4-2
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-1
pve-qemu-kvm: 6.1.0-2
pve-xtermjs: 4.12.0-1
pve-zsync: 2.2
qemu-server: 7.1-3
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

Don't hesitate to ask for more information that could help.
 
I can confirm the issue.

The problem began for us after upgrading from 7.0 to 7.1 (replication had worked fine for the previous 3 months).
 
Hi,
could you share the output of
Code:
qm config 1039
cat /etc/pve/replication.cfg
and on both hosts:
Code:
zfs list -s creation -H -p -t snapshot -o name,creation,guid local-zfs/vm-1039-disk-0

Did you perform any other operations with snapshots on the VM, i.e. snapshot/rollback/remove snapshot? Or do you have any other snapshot tool operating on the same dataset?

EDIT: include -p in the zfs command.
 
The syslog from around the time the issue happened might also contain useful information.
 
Hello
Thanks for your feedback.
For the moment, we have deleted every replication job.
Another technician reported seeing duplicate replications when opening Datacenter => Replication (he saw a replication from host1 => host2 and one from host2 => host1 for every VM that was in an error state).
Since we deleted every replication job and disk on the other node (and the other way around for the secondary node), including the ZFS snapshots via "zfs destroy", it seems to work.
(About 2 TB were duplicated between the hosts, on a RAIDZ2 of 8 SSDs with 10 Gb Ethernet, which put a significant load on the servers.)

We were doing replication every 5 minutes and now do it every 15 minutes, to make sure it is not a resource bottleneck.
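The manual cleanup was something along these lines (dataset and snapshot names are examples from our setup; review the snapshot list before destroying anything):

Code:
# Review the leftover replication snapshots first:
zfs list -r -t snapshot -o name local-zfs | grep '__replicate_'
# Then destroy the stale ones, one by one (example snapshot name):
zfs destroy local-zfs/vm-1039-disk-0@__replicate_1039-0_1637575917__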
 
Hello,
same problem for me after upgrading PVE 7.0 to 7.1-5:
[Screenshot: replication job log, node 've-ara-23', Proxmox Virtual Environment 7.1-5]

2021-11-25 08:51:05 692013-1: start replication job
2021-11-25 08:51:05 692013-1: guest => VM 692013, running => 11312
2021-11-25 08:51:05 692013-1: volumes => local-zfs:vm-692013-disk-0
2021-11-25 08:51:05 692013-1: delete stale replication snapshot '__replicate_692013-1_1637824865__' on local-zfs:vm-692013-disk-0
2021-11-25 08:51:05 692013-1: delete stale replication snapshot error: zfs error: cannot destroy snapshot rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__: dataset is busy

2021-11-25 08:51:07 692013-1: freeze guest filesystem
2021-11-25 08:51:07 692013-1: create snapshot '__replicate_692013-1_1637826665__' on local-zfs:vm-692013-disk-0
2021-11-25 08:51:07 692013-1: thaw guest filesystem
2021-11-25 08:51:07 692013-1: using insecure transmission, rate limit: 10 MByte/s
2021-11-25 08:51:07 692013-1: incremental sync 'local-zfs:vm-692013-disk-0' (__replicate_692013-1_1637709304__ => __replicate_692013-1_1637826665__)
2021-11-25 08:51:07 692013-1: using a bandwidth limit of 10000000 bps for transferring 'local-zfs:vm-692013-disk-0'
2021-11-25 08:51:10 692013-1: send from @__replicate_692013-1_1637709304__ to rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__ estimated size is 27.3G
2021-11-25 08:51:10 692013-1: send from @__replicate_692013-1_1637824865__ to rpool/data/vm-692013-disk-0@__replicate_692013-1_1637826665__ estimated size is 1.56M
2021-11-25 08:51:10 692013-1: total estimated size is 27.3G
2021-11-25 08:51:11 692013-1: TIME SENT SNAPSHOT rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:11 692013-1: 08:51:11 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:12 692013-1: 08:51:12 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:13 692013-1: 08:51:13 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:14 692013-1: 08:51:14 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:15 692013-1: 08:51:15 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:16 692013-1: 08:51:16 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:17 692013-1: 08:51:17 270K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__
2021-11-25 08:51:18 692013-1: 228160 B 222.8 KB 8.27 s 27575 B/s 26.93 KB/s
2021-11-25 08:51:18 692013-1: write: Broken pipe
2021-11-25 08:51:18 692013-1: warning: cannot send 'rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__': signal received
2021-11-25 08:51:18 692013-1: warning: cannot send 'rpool/data/vm-692013-disk-0@__replicate_692013-1_1637826665__': Broken pipe
2021-11-25 08:51:18 692013-1: cannot send 'rpool/data/vm-692013-disk-0': I/O error
2021-11-25 08:51:18 692013-1: command 'zfs send -Rpv -I __replicate_692013-1_1637709304__ -- rpool/data/vm-692013-disk-0@__replicate_692013-1_1637826665__' failed: exit code 1
2021-11-25 08:51:18 692013-1: [ve-ara-22] cannot receive incremental stream: dataset is busy
2021-11-25 08:51:18 692013-1: [ve-ara-22] command 'zfs recv -F -- rpool/data/vm-692013-disk-0' failed: exit code 1
2021-11-25 08:51:18 692013-1: delete previous replication snapshot '__replicate_692013-1_1637826665__' on local-zfs:vm-692013-disk-0
2021-11-25 08:51:18 692013-1: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-692013-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692013-1_1637826665__ -base __replicate_692013-1_1637709304__ | /usr/bin/cstream -t 10000000' failed: exit code 2
 
Hi,
unfortunately it seems like some of the logs are not displayed.

This
Code:
2021-11-25 08:51:05 692013-1: delete stale replication snapshot error: zfs error: cannot destroy snapshot rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__: dataset is busy
sounds like the snapshot is currently being used. Could you check if there is already a replication running in the background with ps aux | grep pvesm?
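For example, something along these lines (the snapshot name below is taken from your log; "zfs holds" only lists explicit user holds, but an active send/receive also keeps a snapshot busy):

Code:
ps aux | grep -E 'pvesm (export|import)|zfs (send|recv)' | grep -v grep
zfs holds rpool/data/vm-692013-disk-0@__replicate_692013-1_1637824865__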
 
@steph b Please share the output of
Code:
zfs list -s creation -H -p -t snapshot -o name,creation,guid rpool/data/vm-692013-disk-0
on both nodes and
Code:
cat /var/lib/pve-manager/pve-replication-state.json
on the source node and try running the replication again.
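The state file is a single line of JSON; if that is hard to read, something like this can help (assuming jq is installed, which it is not by default):

Code:
jq '.' /var/lib/pve-manager/pve-replication-state.json
# only the jobs that currently have failures:
jq 'to_entries[] | select(any(.value[]; .fail_count > 0))' /var/lib/pve-manager/pve-replication-state.json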
 
Hello :)
We have another one that failed on us :(
qm config:
Code:
root@pve-int-XXXX-02:~# qm config 1013
agent: 1,fstrim_cloned_disks=1
boot:  
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 2048
name: XXXXX
net0: e1000=00:15:5D:00:90:1C,bridge=vmbr0,tag=510
numa: 0
ostype: l26
scsi0: local-zfs:vm-1013-disk-1,discard=on,size=100G,ssd=1
smbios1: uuid=a3ddf3d8-32e4-4700-b883-8372bcefbbba
sockets: 1
vmgenid: 296ca537-9401-4b72-b24e-eecaf92a9ac0
/etc/pve/replication.cfg
Code:
local: 1015-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1019-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1020-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1022-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1027-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1028-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1030-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1031-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1032-0
        target pve-XXXXXX-02
        rate 200
        schedule */10
        source pve-XXXXXX-01

local: 1033-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1034-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1036-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1038-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1044-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1045-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1001-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1002-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1003-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1004-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1005-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1006-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1007-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1008-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1009-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1010-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1011-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1012-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1013-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1014-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1017-0
        target pve-XXXXXX-01
        rate 200
        schedule */10
        source pve-XXXXXX-02

local: 1018-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1021-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1024-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1029-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1035-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1040-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1042-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1047-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1046-0
        target pve-XXXXXX-01
        rate 200
        schedule */30
        source pve-XXXXXX-02

local: 1023-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1025-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1026-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1037-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1039-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1041-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1043-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1049-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

local: 1048-0
        target pve-XXXXXX-02
        rate 200
        schedule */30
        source pve-XXXXXX-01

For zfs list:

On host 01 (destination):
The command returns nothing; it seems there is no snapshot there.

On host 02 (source):
The command returns nothing; it seems there is no snapshot there.
 
zfs list -s creation -H -p -t snapshot -o name,creation,guid rpool/data/vm-692013-disk-0
root@ve-ara-23:~# zfs list -s creation -H -p -t snapshot -o name,creation,guid rpool/data/vm-692013-disk-0
rpool/data/vm-692013-disk-0@__replicate_692013-1_1637709304__ 1637709314 3530299869585150793
rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__ 1637828470 571574083069769168

root@ve-ara-22:~# zfs list -s creation -H -p -t snapshot -o name,creation,guid rpool/data/vm-692013-disk-0
rpool/data/vm-692013-disk-0@__replicate_692013-1_1637709304__ 1637709314 3530299869585150793
 
cat /var/lib/pve-manager/pve-replication-state.json
root@ve-ara-23:~# cat /var/lib/pve-manager/pve-replication-state.json
{"210":{"local/ve-ara-22":{"storeid_list":["local-zfs"],"last_node":"ve-ara-23","duration":39.558418,"fail_count":0,"last_try":1637823604,"last_sync":1637823604,"last_iteration":1637823604}},"692013":{"local/ve-ara-22":{"last_try":1637828465,"pid":2259707,"error":"command 'set -o pipefail && pvesm export local-zfs:vm-692013-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692013-1_1637826665__ -base __replicate_692013-1_1637709304__ | /usr/bin/cstream -t 10000000' failed: exit code 2","storeid_list":["local-zfs"],"ptime":"13673377","duration":13.553045,"fail_count":12,"last_sync":1637709304,"last_iteration":1637828465,"last_node":"ve-ara-23"}},"692010":{"local/ve-ara-22":{"last_node":"ve-ara-23","last_sync":1637701217,"last_iteration":1637823604,"storeid_list":["local-zfs"],"ptime":"13187253","duration":22.317472,"fail_count":0,"last_try":1637823644,"pid":215508}},"692008":{"local/ve-ara-22":{"last_sync":1637787726,"last_iteration":1637828585,"last_node":"ve-ara-23","last_try":1637828586,"error":"command 'set -o pipefail && pvesm export local-zfs:vm-692008-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692008-0_1637828586__ | /usr/bin/cstream -t 10000000' failed: exit code 2","storeid_list":["local-zfs"],"duration":5.58817,"fail_count":21}}}root@ve-ara-23:~#
 
try running the replication again.
2021-11-25 09:39:05 692013-1: start replication job
2021-11-25 09:39:05 692013-1: guest => VM 692013, running => 11312
2021-11-25 09:39:05 692013-1: volumes => local-zfs:vm-692013-disk-0
2021-11-25 09:39:06 692013-1: delete stale replication snapshot '__replicate_692013-1_1637828465__' on local-zfs:vm-692013-disk-0
2021-11-25 09:39:06 692013-1: delete stale replication snapshot error: zfs error: cannot destroy snapshot rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__: dataset is busy

2021-11-25 09:39:07 692013-1: freeze guest filesystem
2021-11-25 09:39:07 692013-1: create snapshot '__replicate_692013-1_1637829545__' on local-zfs:vm-692013-disk-0
2021-11-25 09:39:08 692013-1: thaw guest filesystem
2021-11-25 09:39:08 692013-1: using insecure transmission, rate limit: 10 MByte/s
2021-11-25 09:39:08 692013-1: incremental sync 'local-zfs:vm-692013-disk-0' (__replicate_692013-1_1637709304__ => __replicate_692013-1_1637829545__)
2021-11-25 09:39:08 692013-1: using a bandwidth limit of 10000000 bps for transferring 'local-zfs:vm-692013-disk-0'
2021-11-25 09:39:10 692013-1: send from @__replicate_692013-1_1637709304__ to rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__ estimated size is 27.3G
2021-11-25 09:39:10 692013-1: send from @__replicate_692013-1_1637828465__ to rpool/data/vm-692013-disk-0@__replicate_692013-1_1637829545__ estimated size is 863K
2021-11-25 09:39:10 692013-1: total estimated size is 27.3G
2021-11-25 09:39:11 692013-1: TIME SENT SNAPSHOT rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:11 692013-1: 09:39:11 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:12 692013-1: 09:39:12 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:13 692013-1: 09:39:13 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:14 692013-1: 09:39:14 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:15 692013-1: 09:39:15 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:16 692013-1: 09:39:16 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:17 692013-1: 09:39:17 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:18 692013-1: 09:39:18 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:19 692013-1: 09:39:19 336K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__
2021-11-25 09:39:20 692013-1: 299424 B 292.4 KB 10.07 s 29738 B/s 29.04 KB/s
2021-11-25 09:39:20 692013-1: write: Broken pipe
2021-11-25 09:39:20 692013-1: warning: cannot send 'rpool/data/vm-692013-disk-0@__replicate_692013-1_1637828465__': signal received
2021-11-25 09:39:20 692013-1: warning: cannot send 'rpool/data/vm-692013-disk-0@__replicate_692013-1_1637829545__': Broken pipe
2021-11-25 09:39:20 692013-1: cannot send 'rpool/data/vm-692013-disk-0': I/O error
2021-11-25 09:39:20 692013-1: command 'zfs send -Rpv -I __replicate_692013-1_1637709304__ -- rpool/data/vm-692013-disk-0@__replicate_692013-1_1637829545__' failed: exit code 1
2021-11-25 09:39:20 692013-1: [ve-ara-22] cannot receive incremental stream: dataset is busy
2021-11-25 09:39:20 692013-1: [ve-ara-22] command 'zfs recv -F -- rpool/data/vm-692013-disk-0' failed: exit code 1
2021-11-25 09:39:20 692013-1: delete previous replication snapshot '__replicate_692013-1_1637829545__' on local-zfs:vm-692013-disk-0
2021-11-25 09:39:20 692013-1: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-692013-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692013-1_1637829545__ -base __replicate_692013-1_1637709304__ | /usr/bin/cstream -t 10000000' failed: exit code 2
 
@romainr could you also share the output of
Code:
cat /var/lib/pve-manager/pve-replication-state.json
on the source node?

@steph b could you check if there is a zfs process still using the snapshot?

Could both of you post your /var/log/syslog as an attachment?
 
Thanks for your quick feedback on this case :)

Here is the pve-replication-state.json from the source node:
Code:
{
  "1001": {
    "local/pve-XXXXXXX-01": {
      "duration": 8.714084,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830802,
      "fail_count": 0,
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830802,
      "last_sync": 1637830802
    }
  },
  "1002": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830802,
      "last_sync": 1637830810,
      "last_try": 1637830810,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0,
      "duration": 10.35304
    }
  },
  "1003": {
    "local/pve-XXXXXXX-01": {
      "duration": 10.584112,
      "fail_count": 0,
      "last_try": 1637830821,
      "last_node": "pve-XXXXXXX-02",
      "last_sync": 1637830821,
      "last_iteration": 1637830802,
      "storeid_list": [
        "local-zfs"
      ]
    }
  },
  "1004": {
    "local/pve-XXXXXXX-01": {
      "duration": 7.809741,
      "last_try": 1637830831,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0,
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830802,
      "last_sync": 1637830831
    }
  },
  "1005": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830839,
      "duration": 5.175624,
      "last_iteration": 1637830802,
      "last_try": 1637830839,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0
    }
  },
  "1006": {
    "local/pve-XXXXXXX-01": {
      "fail_count": 0,
      "last_try": 1637830844,
      "last_node": "pve-XXXXXXX-02",
      "duration": 13.285024,
      "last_iteration": 1637830802,
      "last_sync": 1637830844,
      "storeid_list": [
        "local-zfs"
      ]
    }
  },
  "1007": {
    "local/pve-XXXXXXX-01": {
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830858,
      "fail_count": 0,
      "duration": 9.888611,
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830802,
      "last_sync": 1637830858
    }
  },
  "1008": {
    "local/pve-XXXXXXX-01": {
      "fail_count": 0,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830867,
      "duration": 8.975174,
      "last_iteration": 1637830802,
      "last_sync": 1637830867,
      "storeid_list": [
        "local-zfs"
      ]
    }
  },
  "1009": {
    "local/pve-XXXXXXX-01": {
      "fail_count": 0,
      "last_try": 1637830876,
      "last_node": "pve-XXXXXXX-02",
      "duration": 9.491797,
      "last_sync": 1637830876,
      "last_iteration": 1637830802,
      "storeid_list": [
        "local-zfs"
      ]
    }
  },
  "1010": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830886,
      "last_iteration": 1637830802,
      "duration": 11.88239,
      "last_try": 1637830886,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0
    }
  },
  "1011": {
    "local/pve-XXXXXXX-01": {
      "last_iteration": 1637830802,
      "last_sync": 1637830898,
      "storeid_list": [
        "local-zfs"
      ],
      "fail_count": 0,
      "last_try": 1637830898,
      "last_node": "pve-XXXXXXX-02",
      "duration": 5.246522
    }
  },
  "1012": {
    "local/pve-XXXXXXX-01": {
      "duration": 11.203621,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830903,
      "fail_count": 0,
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830802,
      "last_sync": 1637830903
    }
  },
  "1013": {
    "local/pve-XXXXXXX-01": {
      "duration": 4.222696,
      "fail_count": 12,
      "last_try": 1637829842,
      "last_node": "pve-XXXXXXX-02",
      "last_iteration": 1637829842,
      "last_sync": 1637809313,
      "storeid_list": [
        "local-zfs"
      ],
      "error": "command 'set -o pipefail && pvesm export local-zfs:vm-1013-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_1013-0_1637829842__ | /usr/bin/cstream -t 200000000' failed: exit code 2"
    }
  },
  "1014": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637816521,
      "last_iteration": 1637830802,
      "pid": 3335562,
      "ptime": "51227821",
      "duration": 11.184791,
      "last_try": 1637830914,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0
    }
  },
  "1017": {
    "local/pve-XXXXXXX-01": {
      "fail_count": 0,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830977,
      "duration": 5.425203,
      "last_sync": 1637830977,
      "last_iteration": 1637830922,
      "storeid_list": [
        "local-zfs"
      ]
    }
  },
  "1018": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830922,
      "last_iteration": 1637830922,
      "duration": 7.305128,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830922,
      "fail_count": 0
    }
  },
  "1021": {
    "local/pve-XXXXXXX-01": {
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830929,
      "fail_count": 0,
      "duration": 5.146017,
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830922,
      "last_sync": 1637830929
    }
  },
  "1024": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830934,
      "last_iteration": 1637830922,
      "duration": 5.394829,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830934,
      "fail_count": 0
    }
  },
  "1029": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830940,
      "duration": 10.811988,
      "last_iteration": 1637830922,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830940,
      "fail_count": 0
    }
  },
  "1035": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830950,
      "last_iteration": 1637830922,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830950,
      "fail_count": 0,
      "duration": 5.209327
    }
  },
  "1040": {
    "local/pve-XXXXXXX-01": {
      "last_try": 1637830956,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0,
      "duration": 5.120837,
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830956,
      "last_iteration": 1637830922
    }
  },
  "1042": {
    "local/pve-XXXXXXX-01": {
      "last_try": 1637830961,
      "last_node": "pve-XXXXXXX-02",
      "fail_count": 0,
      "duration": 5.453383,
      "storeid_list": [
        "local-zfs"
      ],
      "last_iteration": 1637830922,
      "last_sync": 1637830961
    }
  },
  "1046": {
    "local/pve-XXXXXXX-01": {
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830966,
      "last_iteration": 1637830922,
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830966,
      "fail_count": 0,
      "duration": 4.97663
    }
  },
  "1047": {
    "local/pve-XXXXXXX-01": {
      "last_node": "pve-XXXXXXX-02",
      "last_try": 1637830971,
      "fail_count": 0,
      "duration": 6.092459,
      "storeid_list": [
        "local-zfs"
      ],
      "last_sync": 1637830971,
      "last_iteration": 1637830922
    }
  }
}

The full /var/log/syslog could contain sensitive information; could we send it to you in a private way?
 
root@ve-ara-23:~# ps aux |grep zfs
root 3474256 0.0 0.0 11984 6304 ? S 10:00 0:00 /usr/bin/ssh -e none -o BatchMode=yes -o HostKeyAlias=ve-ara-22 root@192.168.0.2 -- pvesm import local-zfs:vm-692010-disk-0 zfs tcp://192.168.0.0/24 -with-snapshots 1 -snapshot __replicate_692010-0_1637830838__ -allow-rename 0 -base __replicate_692010-0_1637701217__
root 3474331 0.0 0.0 3836 2784 ? S 10:00 0:00 /bin/bash -c set -o pipefail && pvesm export local-zfs:vm-692010-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692010-0_1637830838__ -base __replicate_692010-0_1637701217__ | /usr/bin/cstream -t 10000000
root 3474332 0.0 0.2 303244 94580 ? S 10:00 0:00 /usr/bin/perl /sbin/pvesm export local-zfs:vm-692010-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692010-0_1637830838__ -base __replicate_692010-0_1637701217__
root 3474336 6.7 0.0 82496 4736 ? Sl 10:00 0:49 zfs send -Rpv -I __replicate_692010-0_1637701217__ -- rpool/data/vm-692010-disk-0@__replicate_692010-0_1637830838__
root 3815341 0.0 0.0 6180 724 pts/0 R+ 10:13 0:00 grep zfs
 
@steph b from the syslog it seems like the "dataset is busy" error message might actually be for a different dataset. If it's the same problem as described here, the actual replication is still happening in the background; it's just that the new scheduler tries to start another one while the old one is still running. And then it cannot delete the (from its perspective) "old" snapshot, because it is still being actively used/synced. This has been fixed in pve-manager 7.1-6, so please try updating.

The other problem described in this thread might have a similar cause, but there, the snapshots that should be used for the incremental sync are deleted, indicating a problem with the replication state, e.g.:
Code:
2021-11-22 11:17:03 1039-0: delete stale replication snapshot '__replicate_1039-0_1637575917__' on local-zfs:vm-1039-disk-0
2021-11-22 11:17:04 1039-0: (remote_prepare_local_job) delete stale replication snapshot '__replicate_1039-0_1637575917__' on local-zfs:vm-1039-disk-0
Please also try upgrading to pve-manager 7.1-6. I'll investigate further and will work on a patch that would allow replication to recover if the replication state is invalid.
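After the upgrade, a rough way to verify the version and re-trigger the affected job (the job ID is the one from your log; make sure no old replication is still running on either node first):

Code:
pveversion                    # should now show pve-manager/7.1-6 or newer
pvesr schedule-now 692013-1   # schedule the affected job to run as soon as possible
pvesr status                  # check the result / fail count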

EDIT: Note that it is still possible to run into the issue if there are active replications while upgrading, because the buggy version is still running during the upgrade.
 
Updated the nodes to 7.1-6 but same problem:
2021-11-25 11:43:00 692013-1: start replication job
2021-11-25 11:43:00 692013-1: guest => VM 692013, running => 11312
2021-11-25 11:43:00 692013-1: volumes => local-zfs:vm-692013-disk-0
2021-11-25 11:43:01 692013-1: delete stale replication snapshot '__replicate_692013-1_1637836747__' on local-zfs:vm-692013-disk-0
2021-11-25 11:43:01 692013-1: delete stale replication snapshot error: zfs error: cannot destroy snapshot rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__: dataset is busy

2021-11-25 11:43:02 692013-1: freeze guest filesystem
2021-11-25 11:43:02 692013-1: create snapshot '__replicate_692013-1_1637836980__' on local-zfs:vm-692013-disk-0
2021-11-25 11:43:02 692013-1: thaw guest filesystem
2021-11-25 11:43:02 692013-1: using insecure transmission, rate limit: 10 MByte/s
2021-11-25 11:43:02 692013-1: incremental sync 'local-zfs:vm-692013-disk-0' (__replicate_692013-1_1637709304__ => __replicate_692013-1_1637836980__)
2021-11-25 11:43:02 692013-1: using a bandwidth limit of 10000000 bps for transferring 'local-zfs:vm-692013-disk-0'
2021-11-25 11:43:05 692013-1: send from @__replicate_692013-1_1637709304__ to rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__ estimated size is 27.3G
2021-11-25 11:43:05 692013-1: send from @__replicate_692013-1_1637836747__ to rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836980__ estimated size is 859K
2021-11-25 11:43:05 692013-1: total estimated size is 27.3G
2021-11-25 11:43:06 692013-1: TIME SENT SNAPSHOT rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:06 692013-1: 11:43:06 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:07 692013-1: 11:43:07 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:08 692013-1: 11:43:08 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:09 692013-1: 11:43:09 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:10 692013-1: 11:43:10 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:11 692013-1: 11:43:11 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:12 692013-1: 11:43:12 619K rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__
2021-11-25 11:43:12 692013-1: 586936 B 573.2 KB 7.97 s 73612 B/s 71.89 KB/s
2021-11-25 11:43:12 692013-1: write: Broken pipe
2021-11-25 11:43:12 692013-1: warning: cannot send 'rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836747__': signal received
2021-11-25 11:43:12 692013-1: warning: cannot send 'rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836980__': Broken pipe
2021-11-25 11:43:12 692013-1: cannot send 'rpool/data/vm-692013-disk-0': I/O error
2021-11-25 11:43:12 692013-1: command 'zfs send -Rpv -I __replicate_692013-1_1637709304__ -- rpool/data/vm-692013-disk-0@__replicate_692013-1_1637836980__' failed: exit code 1
2021-11-25 11:43:12 692013-1: [ve-ara-22] cannot receive incremental stream: dataset is busy
2021-11-25 11:43:12 692013-1: [ve-ara-22] command 'zfs recv -F -- rpool/data/vm-692013-disk-0' failed: exit code 1
2021-11-25 11:43:12 692013-1: delete previous replication snapshot '__replicate_692013-1_1637836980__' on local-zfs:vm-692013-disk-0
2021-11-25 11:43:13 692013-1: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-692013-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692013-1_1637836980__ -base __replicate_692013-1_1637709304__ | /usr/bin/cstream -t 10000000' failed: exit code 2
 
and for the second VM:



2021-11-25 11:47:07 692008-0: start replication job
2021-11-25 11:47:07 692008-0: guest => VM 692008, running => 7037
2021-11-25 11:47:07 692008-0: volumes => local-zfs:vm-692008-disk-0
2021-11-25 11:47:09 692008-0: freeze guest filesystem
2021-11-25 11:47:09 692008-0: create snapshot '__replicate_692008-0_1637837227__' on local-zfs:vm-692008-disk-0
2021-11-25 11:47:09 692008-0: thaw guest filesystem
2021-11-25 11:47:09 692008-0: using insecure transmission, rate limit: 10 MByte/s
2021-11-25 11:47:09 692008-0: full sync 'local-zfs:vm-692008-disk-0' (__replicate_692008-0_1637837227__)
2021-11-25 11:47:09 692008-0: using a bandwidth limit of 10000000 bps for transferring 'local-zfs:vm-692008-disk-0'
2021-11-25 11:47:12 692008-0: full send of rpool/data/vm-692008-disk-0@__replicate_692008-0_1637837227__ estimated size is 86.7G
2021-11-25 11:47:12 692008-0: total estimated size is 86.7G
2021-11-25 11:47:12 692008-0: 1180 B 1.2 KB 0.71 s 1651 B/s 1.61 KB/s
2021-11-25 11:47:12 692008-0: write: Broken pipe
2021-11-25 11:47:12 692008-0: warning: cannot send 'rpool/data/vm-692008-disk-0@__replicate_692008-0_1637837227__': signal received
2021-11-25 11:47:12 692008-0: cannot send 'rpool/data/vm-692008-disk-0': I/O error
2021-11-25 11:47:12 692008-0: command 'zfs send -Rpv -- rpool/data/vm-692008-disk-0@__replicate_692008-0_1637837227__' failed: exit code 1
2021-11-25 11:47:12 692008-0: [ve-ara-22] volume 'rpool/data/vm-692008-disk-0' already exists
2021-11-25 11:47:12 692008-0: delete previous replication snapshot '__replicate_692008-0_1637837227__' on local-zfs:vm-692008-disk-0
2021-11-25 11:47:12 692008-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-692008-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_692008-0_1637837227__ | /usr/bin/cstream -t 10000000' failed: exit code 2
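Since the target reports that the volume already exists, would it be enough to destroy the leftover copy on ve-ara-22 and let the next run do a clean full sync? Roughly (destructive, only if the copy on the target can be thrown away):

Code:
# on the target node ve-ara-22 (this destroys the replicated copy and its snapshots!)
zfs destroy -r rpool/data/vm-692008-disk-0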
 
