I've successfully upgraded one member now. I'm seeing that the 6.4 node is still happily replicating to 7.1, but the containers on the 7.1 node are giving replication errors. Is that expected until I upgrade the other member as well? These containers were migrated away and back after the upgrade.
Code:
2021-12-09 18:23:00 101-0: start replication job
2021-12-09 18:23:00 101-0: guest => CT 101, running => 1
2021-12-09 18:23:00 101-0: volumes => vpool:subvol-101-disk-0
2021-12-09 18:23:00 101-0: freeze guest filesystem
2021-12-09 18:23:00 101-0: create snapshot '__replicate_101-0_1639066980__' on vpool:subvol-101-disk-0
2021-12-09 18:23:01 101-0: thaw guest filesystem
2021-12-09 18:23:01 101-0: using secure transmission, rate limit: none
2021-12-09 18:23:01 101-0: incremental sync 'vpool:subvol-101-disk-0' (__replicate_101-0_1639065887__ => __replicate_101-0_1639066980__)
2021-12-09 18:23:01 101-0: send from @__replicate_101-0_1639065887__ to vpool/subvol-101-disk-0@__replicate_101-0_1639066980__ estimated size is 1.43M
2021-12-09 18:23:01 101-0: total estimated size is 1.43M
2021-12-09 18:23:01 101-0: Unknown option: snapshot
2021-12-09 18:23:01 101-0: 400 unable to parse option
2021-12-09 18:23:01 101-0: pvesm import <volume> <format> <filename> [OPTIONS]
2021-12-09 18:23:01 101-0: warning: cannot send 'vpool/subvol-101-disk-0@__replicate_101-0_1639066980__': signal received
2021-12-09 18:23:01 101-0: cannot send 'vpool/subvol-101-disk-0': I/O error
2021-12-09 18:23:01 101-0: command 'zfs send -Rpv -I __replicate_101-0_1639065887__ -- vpool/subvol-101-disk-0@__replicate_101-0_1639066980__' failed: exit code 1
2021-12-09 18:23:01 101-0: delete previous replication snapshot '__replicate_101-0_1639066980__' on vpool:subvol-101-disk-0
2021-12-09 18:23:01 101-0: end replication job with error: command 'set -o pipefail && pvesm export vpool:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_101-0_1639066980__ -base __replicate_101-0_1639065887__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxima' root@192.168.1.20 -- pvesm import vpool:subvol-101-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_101-0_1639066980__ -allow-rename 0' failed: exit code 255
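From the log it looks like the upgraded node's pvesm export is passing a -snapshot option that the pvesm import on the other (still 6.4) node doesn't recognize. A minimal sketch of how I'd compare the two sides, assuming SSH access to the not-yet-upgraded node at 192.168.1.20 ('proxima' per the log) and that pvesm's built-in help subcommand behaves the same on both versions:
Code:
# On the upgraded 7.1 node: show the local import usage/options
pvesm help import

# On the still-6.4 node: check whether its pvesm import knows -snapshot at all
ssh root@192.168.1.20 -- pvesm help import

# Compare the installed PVE versions on both sides
pveversion
ssh root@192.168.1.20 -- pveversion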
I'm also getting a systemd-journald error:
Code:
[ 854.703639] systemd-journald[768]: Failed to set ACL on /var/log/journal/7b03e054c6c9476a89182535d801f5e7/user-1001.journal, ignoring: Operation not supported
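In case it's relevant, here's a quick sketch of how I'd check whether the filesystem backing /var/log/journal actually supports POSIX ACLs. This assumes /var/log lives on a ZFS dataset; rpool/ROOT/pve-1 is just the default root dataset name, so substitute whatever your layout uses:
Code:
# Find which filesystem /var/log/journal lives on
df -h /var/log/journal

# If it's ZFS, check whether POSIX ACLs are enabled on that dataset
zfs get acltype,xattr rpool/ROOT/pve-1

# Enabling them would look like this (assumption: the "Operation not
# supported" from journald is about missing POSIX ACL support)
# zfs set acltype=posixacl xattr=sa rpool/ROOT/pve-1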