Proxmox VE 7.1 released!

It seems to work, but I'm getting hit with a "The user verified even through discouragement" error. It might be Windows Hello acting up.
Ah yeah, that is a bug in a dependency library we use for WebAuthn; there's a newer version of that library which fixes the issue, and we'll update to it soon.
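Once the updated packages are available, a regular upgrade should pull the fix in; roughly:

Code:
apt update
apt full-upgrade
pveversion -v   # verify the installed package versions afterwards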
 
Nice work guys. Looking forward to the new scheduler!

I was really hoping for ZFS replication to work with encrypted ZFS pools!
That's also on my wishlist for 7.2. And in general, booting from an encrypted ZFS pool would be nice to have as an installer option.
 
Great work! I wonder if anyone else has problems starting existing Windows VMs? Even creating new VMs fails during the Windows installation process. My Linux VMs are running flawlessly.
 
Same problem here.
Ditto!

It gives me BSODs such as:
HYPERVISOR ERROR or CLOCK WATCHDOG TIMEOUT

I tried switching my Windows 11 VM's machine type to pc-q35-6.1, changing the OS type to 11/2022, and recreating my TPM state and EFI disk, but nothing seems to help.


UPDATE:
Just switched back to kernel 5.11.22-7-pve from the newer 5.13.19-1-pve, and now my Windows VMs boot.
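For anyone wanting to do the same, roughly what I did (a sketch; the exact kernel package names depend on what you have installed):

Code:
# list the kernels registered with the bootloader
proxmox-boot-tool kernel list
# then either pick 5.11.22-7-pve manually from the boot menu ("Advanced options" in GRUB),
# or remove the 5.13 kernel so the older one becomes the default again:
apt remove pve-kernel-5.13.19-1-pve

(Review the apt prompt; it may also want to remove the pve-kernel-5.13 meta-package.)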
 
Hi everyone, first time posting here, hope everyone is doing great.

After upgrading to PVE 7.1, the ZFS pool suggests doing a "zpool upgrade".
From the CLI the output is:
# zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.

Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(7) for details.

Note that the pool 'compatibility' feature can be used to inhibit
feature upgrades.

POOL  FEATURE
---------------
rpool
      draid

Checking again with zpool status also shows a pending upgrade.
Reading the zpool-features man page https://www.freebsd.org/cgi/man.cgi?query=zpool-features&sektion=7&n=1 doesn't seem to help... I understand what that feature does, but it doesn't help with the upgrade process.

The pool is pretty simple, just two SSDs in a mirror. On previous upgrades I didn't have any trouble running zpool upgrade, but this time I'm a bit worried about breaking the pool.

Am I missing something? Should I upgrade directly?

Any help is appreciated. Thanks !
 

Ditto!

It gives me BSODs such as:
HYPERVISOR ERROR or CLOCK WATCHDOG TIMEOUT

I tried switching my Windows 11 VM's machine type to pc-q35-6.1, changing the OS type to 11/2022, and recreating my TPM state and EFI disk, but nothing seems to help.


UPDATE:
Just switched back to kernel 5.11.22-7-pve from the newer 5.13.19-1-pve, and now my Windows VMs boot.

I rolled back to 5.11.22-7-pve and now my VMs boot, thanks!

I rolled back to 5.11.22-7-pve and now my VMs boot, thanks!
Ditto!

It gives me BSODs such as:
HYPERVISOR ERROR or CLOCK WATCHDOG TIMEOUT
I wonder if anyone else has problems starting existing Windows VMs? Even creating new VMs fails during the Windows installation process.

Can you please share the VM config and some HW details, mainly the CPU and server vendor/model (if any)?

We had no issues with either existing or new Windows VMs in our tests here, so the issue may be tied to specific CPUs and/or VM config settings.
You can also open a new forum thread and mention my username with @ to make me aware of it and avoid crowding the general release thread.
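For example, something along these lines should cover it (VMID 100 is just a placeholder):

Code:
qm config 100                # VM configuration
pveversion -v                # PVE / kernel / QEMU package versions
lscpu | grep 'Model name'    # CPU model
dmidecode -s system-manufacturer; dmidecode -s system-product-name   # server vendor/model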
 
Am I missing something? Should I upgrade directly?
I would always wait some weeks before upgrading the ZFS pools. As soon as you upgrade your pool, you won't be able to use older OpenZFS versions any longer. So if you, for example, run into problems with PVE 7.1, you also wouldn't be able to import that pool with a fresh install of PVE 7.0.
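When you do decide to go ahead, it is roughly this (a sketch; double-check the feature list for your own pool first):

Code:
# show which features are still disabled on the pool
zpool get all rpool | grep feature@ | grep disabled
# enable all supported features - irreversible, older OpenZFS versions can then no longer import the pool
zpool upgrade rpool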
 
My ZFS replication stopped working with 7.1.
I deleted the replication job and added it fresh, but the problem remains:

The logs stop mid-replication:
2021-11-18 10:55:26 103-0: 10:55:26 3.71G zfspool/vm-103-disk-0@__replicate_103-0_1637229244__
2021-11-18 10:55:27 103-0: 10:55:27 3.76G zfspool/vm-103-disk-0@__replicate_103-0_163

and in the syslog:
Nov 18 10:56:04 pve-01 pvescheduler[787192]: ERROR: can't lock file '/var/lock/pvesr.lck' - got timeout
Nov 18 10:56:04 pve-01 pvescheduler[787192]: got shutdown request, signal running jobs to stop
Nov 18 10:56:04 pve-01 pvescheduler[787192]: server stopped
Nov 18 11:00:06 pve-01 pvescheduler[1002858]: send/receive failed, cleaning up snapshot(s)..
Nov 18 11:00:06 pve-01 pvescheduler[1002858]: 103-0: got unexpected replication job error - command 'set -o pipefail && pvesm export zfspool:vm-103-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_103-0_1637229604__ | /usr/bin/cstream -t 50000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-02' root@10.0.0.6 -- pvesm import zfspool:vm-103-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_103-0_1637229604__ -allow-rename 0' failed: exit code 255

After that, the web UI shows "State OK" with no last sync date.

How can I fix this?
 
Can you post a bit more of the log?

Edit: I mean the syslog.
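Something like the following should pull the relevant time window from the journal (the timestamps are just the ones from your snippet; adjust as needed):

Code:
journalctl --since "2021-11-18 10:50" --until "2021-11-18 11:05"
# or, limited to the new scheduler service:
journalctl -u pvescheduler --since "2021-11-18 10:50"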
sure:

Nov 18 10:54:43 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:55:33 pve-01 pvedaemon[2118]: <root@pam> successful auth for user 'monitoring@pve'
Nov 18 10:55:34 pve-01 systemd[1]: Started Session 1617 of user root.
Nov 18 10:55:34 pve-01 systemd[1]: session-1617.scope: Succeeded.
Nov 18 10:55:38 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:56:04 pve-01 pvescheduler[787192]: ERROR: can't lock file '/var/lock/pvesr.lck' - got timeout
Nov 18 10:56:04 pve-01 pvescheduler[787192]: got shutdown request, signal running jobs to stop
Nov 18 10:56:04 pve-01 pvescheduler[787192]: server stopped
Nov 18 10:56:15 pve-01 pvedaemon[2117]: worker exit
Nov 18 10:56:15 pve-01 pvedaemon[2115]: worker 2117 finished
Nov 18 10:56:15 pve-01 pvedaemon[2115]: starting 1 worker(s)
Nov 18 10:56:15 pve-01 pvedaemon[2115]: worker 837807 started
Nov 18 10:56:20 pve-01 pvedaemon[2116]: <root@pam> successful auth for user 'monitoring@pve'
Nov 18 10:57:11 pve-01 pvedaemon[837807]: <root@pam> successful auth for user 'monitoring@pve'
Nov 18 10:57:20 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:57:42 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:57:57 pve-01 pvedaemon[837807]: <root@pam> successful auth for user 'monitoring@pve'
Nov 18 10:58:03 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:58:34 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:59:19 pve-01 systemd[1]: Started Session 1618 of user root.
Nov 18 10:59:19 pve-01 systemd[1]: session-1618.scope: Succeeded.
Nov 18 10:59:32 pve-01 pmxcfs[1962]: [status] notice: received log
Nov 18 10:59:44 pve-01 pvedaemon[2118]: <root@pam> successful auth for user 'monitoring@pve'
Nov 18 11:00:06 pve-01 pvescheduler[1002858]: send/receive failed, cleaning up snapshot(s)..
Nov 18 11:00:06 pve-01 pvescheduler[1002858]: 103-0: got unexpected replication job error - command 'set -o pipefail && pvesm export zfspool:vm-103-disk-0 zf
Nov 18 11:00:06 pve-01 postfix/pickup[3556232]: E80ED1A249A: uid=0 from=<root>
Nov 18 11:00:06 pve-01 postfix/cleanup[1003308]: E80ED1A249A: message-id=<20211118100006.E80ED1A249A@pve-01.becker>
Nov 18 11:00:06 pve-01 postfix/qmgr[2060]: E80ED1A249A: from=<root@pve-01.becker>, size=773, nrcpt=1 (queue active)
Nov 18 11:00:07 pve-01 pvemailforward[1003311]: forward mail to <michael@MAILADDRESSREMOVED>
Nov 18 11:00:07 pve-01 postfix/pickup[3556232]: 3DCE71A249B: uid=65534 from=<root>
Nov 18 11:00:07 pve-01 postfix/cleanup[1003308]: 3DCE71A249B: message-id=<20211118100006.E80ED1A249A@pve-01.becker>
Nov 18 11:00:07 pve-01 postfix/qmgr[2060]: 3DCE71A249B: from=<root@pve-01.becker>, size=941, nrcpt=1 (queue active)
Nov 18 11:00:07 pve-01 postfix/local[1003310]: E80ED1A249A: to=<root@pve-01.becker>, orig_to=<root>, relay=local, delay=0.32, delays=0.02/0/0/0.3, dsn=2.0.0,
Nov 18 11:00:07 pve-01 postfix/qmgr[2060]: E80ED1A249A: removed
Nov 18 11:00:07 pve-01 postfix/smtp[1003314]: 3DCE71A249B: to=<michael@MAILADDRESSREMOVED>, relay=mail.MAILADDRESSREMOVED[185.124.72.121]:25, delay=0.31, delays=0.01/0
Nov 18 11:00:07 pve-01 postfix/qmgr[2060]: 3DCE71A249B: removed
Nov 18 11:00:13 pve-01 pvedaemon[2118]: worker exit
Nov 18 11:00:13 pve-01 pvedaemon[2115]: worker 2118 finished
Nov 18 11:00:13 pve-01 pvedaemon[2115]: starting 1 worker(s)
 
Hi,
My ZFS replication stopped working with 7.1.
I deleted the replication job and added it fresh, but the problem remains:

The logs stop mid-replication:
2021-11-18 10:55:26 103-0: 10:55:26 3.71G zfspool/vm-103-disk-0@__replicate_103-0_1637229244__
2021-11-18 10:55:27 103-0: 10:55:27 3.76G zfspool/vm-103-disk-0@__replicate_103-0_163

and in the syslog:
Nov 18 10:56:04 pve-01 pvescheduler[787192]: ERROR: can't lock file '/var/lock/pvesr.lck' - got timeout
Nov 18 10:56:04 pve-01 pvescheduler[787192]: got shutdown request, signal running jobs to stop
Nov 18 10:56:04 pve-01 pvescheduler[787192]: server stopped
Nov 18 11:00:06 pve-01 pvescheduler[1002858]: send/receive failed, cleaning up snapshot(s)..
Nov 18 11:00:06 pve-01 pvescheduler[1002858]: 103-0: got unexpected replication job error - command 'set -o pipefail && pvesm export zfspool:vm-103-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_103-0_1637229604__ | /usr/bin/cstream -t 50000000 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-02' root@10.0.0.6 -- pvesm import zfspool:vm-103-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_103-0_1637229604__ -allow-rename 0' failed: exit code 255

After that, the web UI shows "State OK" with no last sync date.

How can I fix this?
Could you also post the complete replication log? Did the upgrade happen while the replication was running?
 
Hi,

Could you also post the complete replication log? Did the upgrade happen while the replication was running?
I removed the replication already, as it froze my VMs.
The update happened a few hours prior.

I have a replication every 30 minutes for every VM. All of them worked just fine after the update, but one showed state OK while its last successful sync was from before the update.
So I deleted that replication, and that's where the problems started. After re-adding it, all the other replications also showed the same behaviour as the first.
Also, my disk IO wait was through the roof.
Now, with every replication disabled, everything works just fine, but I cannot migrate my VMs anymore or add a replication without freezing them. I also restarted all nodes again, with no change in behaviour.

I just re-added one and will report back in a few minutes.
 
This is the complete log, and the size of the disk on the remote storage is 0 B:

Code:
2021-11-18 11:26:00 112-0: start replication job
2021-11-18 11:26:00 112-0: guest => VM 112, running => 0
2021-11-18 11:26:00 112-0: volumes => zfspool:vm-112-disk-0
2021-11-18 11:26:01 112-0: create snapshot '__replicate_112-0_1637231160__' on zfspool:vm-112-disk-0
2021-11-18 11:26:01 112-0: using secure transmission, rate limit: 50 MByte/s
2021-11-18 11:26:01 112-0: full sync 'zfspool:vm-112-disk-0' (__replicate_112-0_1637231160__)
2021-11-18 11:26:01 112-0: using a bandwidth limit of 50000000 bps for transferring 'zfspool:vm-112-disk-0'
2021-11-18 11:26:02 112-0: full send of zfspool/vm-112-disk-0@__replicate_112-0_1637231160__ estimated size is 32.5G
2021-11-18 11:26:02 112-0: total estimated size is 32.5G
2021-11-18 11:26:03 112-0: TIME        SENT   SNAPSHOT zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:03 112-0: 11:26:03   71.6M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:04 112-0: 11:26:04    119M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:05 112-0: 11:26:05    167M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:06 112-0: 11:26:06    215M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:07 112-0: 11:26:07    262M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:08 112-0: 11:26:08    310M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:09 112-0: 11:26:09    358M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:10 112-0: 11:26:10    405M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:11 112-0: 11:26:11    453M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:12 112-0: 11:26:12    501M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:13 112-0: 11:26:13    549M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:14 112-0: 11:26:14    596M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:15 112-0: 11:26:15    644M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:16 112-0: 11:26:16    692M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:17 112-0: 11:26:17    739M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:18 112-0: 11:26:18    787M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:19 112-0: 11:26:19    835M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:20 112-0: 11:26:20    882M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:21 112-0: 11:26:21    930M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:22 112-0: 11:26:22    978M   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:23 112-0: 11:26:23   1.00G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:24 112-0: 11:26:24   1.05G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:25 112-0: 11:26:25   1.09G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:26 112-0: 11:26:26   1.14G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:27 112-0: 11:26:27   1.19G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:28 112-0: 11:26:28   1.23G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:29 112-0: 11:26:29   1.28G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:30 112-0: 11:26:30   1.33G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:31 112-0: 11:26:31   1.37G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:32 112-0: 11:26:32   1.42G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:33 112-0: 11:26:33   1.47G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:34 112-0: 11:26:34   1.51G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:35 112-0: 11:26:35   1.56G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:36 112-0: 11:26:36   1.61G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:37 112-0: 11:26:37   1.65G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:38 112-0: 11:26:38   1.70G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:39 112-0: 11:26:39   1.75G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:40 112-0: 11:26:40   1.79G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:41 112-0: 11:26:41   1.84G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:42 112-0: 11:26:42   1.89G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:43 112-0: 11:26:43   1.93G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:44 112-0: 11:26:44   1.98G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:45 112-0: 11:26:45   2.03G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:46 112-0: 11:26:46   2.07G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:47 112-0: 11:26:47   2.12G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:48 112-0: 11:26:48   2.17G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:49 112-0: 11:26:49   2.21G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:50 112-0: 11:26:50   2.26G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:51 112-0: 11:26:51   2.31G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:52 112-0: 11:26:52   2.35G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:53 112-0: 11:26:53   2.40G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:54 112-0: 11:26:54   2.44G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:55 112-0: 11:26:55   2.49G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:56 112-0: 11:26:56   2.54G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:57 112-0: 11:26:57   2.59G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:58 112-0: 11:26:58   2.63G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:26:59 112-0: 11:26:59   2.68G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:00 112-0: 11:27:00   2.72G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:01 112-0: 11:27:01   2.77G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:02 112-0: 11:27:02   2.82G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:03 112-0: 11:27:03   2.86G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:04 112-0: 11:27:04   2.91G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:05 112-0: 11:27:05   2.96G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:06 112-0: 11:27:06   3.00G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:07 112-0: 11:27:07   3.05G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:08 112-0: 11:27:08   3.10G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:09 112-0: 11:27:09   3.14G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:10 112-0: 11:27:10   3.19G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:11 112-0: 11:27:11   3.24G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:12 112-0: 11:27:12   3.28G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:13 112-0: 11:27:13   3.33G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:14 112-0: 11:27:14   3.38G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:15 112-0: 11:27:15   3.42G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:16 112-0: 11:27:16   3.47G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:17 112-0: 11:27:17   3.52G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:18 112-0: 11:27:18   3.56G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:19 112-0: 11:27:19   3.61G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:20 112-0: 11:27:20   3.66G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:21 112-0: 11:27:21   3.70G   zfspool/vm-112-disk-0@__replicate_112-0_1637231160__
2021-11-18 11:27:22 112-0: 11:27:22   3.75G   zfspool/vm-112-disk-0@__replicate_112-0_163

and this in the syslog:
Nov 18 11:27:57 pve-02 pmxcfs[1928]: [status] notice: received log
Nov 18 11:28:00 pve-02 pvescheduler[1237623]: ERROR: can't lock file '/var/lock/pvesr.lck' - got timeout
Nov 18 11:28:00 pve-02 pvescheduler[1237623]: got shutdown request, signal running jobs to stop
Nov 18 11:28:00 pve-02 pvescheduler[1237623]: server stopped


And if I try to remove the replication, this appears in the syslog and I have to remove the disk manually:
Nov 18 11:30:01 pve-02 pvescheduler[1352510]: zfs error: cannot destroy snapshot zfspool/vm-112-disk-0@__replicate_112-0_1637231160__: dataset is busy
 
  • New backup scheduler daemon for flexible scheduling options
  • Backup retention

Hi there,

I like the new backup strategy, provided it also works. So far I only see a few differences from the old backup.

I have two backup jobs:
1st job: VM 100, Monday to Saturday at 4:00 a.m., snapshot backup, kept for 7 days.
2nd job: VMs 100, 101, 102, 103, 104, Sunday at 4:00 a.m., stop-mode backup, kept for 7 days.

The backups of one calendar week are to be kept for all VMs.
This was not possible before, because Proxmox did not count the days on which there was no backup and thus kept the backups of the last 7 weeks for VMs 101-104.

What I wanted never worked that way. Will this work now, and are my settings still correct?

The Prune Simulator is not yet adapted to Proxmox 7.1, is it?
Because according to it, what I want does not work.
https://pbs.proxmox.com/docs/prune-simulator/

The schedule simulator in the GUI makes little sense to me. It would only be usable if it also showed on which days which backups are deleted.
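For reference, this is roughly the retention I am trying to express, assuming the keep-* options that vzdump accepts (the real jobs are configured in the GUI, so the commands here are only illustrative):

Code:
# 1st job, Mon-Sat: VM 100, snapshot mode, keep the last 7 daily backups
vzdump 100 --mode snapshot --prune-backups keep-daily=7
# 2nd job, Sunday: all VMs, stop mode, keep only the backup of the current week
vzdump 100 101 102 103 104 --mode stop --prune-backups keep-weekly=1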

Many greetings
Detlef Paschke
 
Hi there,
we've upgraded our cluster to 7.1-5 and we're encountering a lot of problems with VMs running Win2012R2 and Win2019 Server... going back to kernel 5.11 got everything working again!

Regards,
Paolo
 
This is the complete log, and the size of the disk on the remote storage is 0 B:

Code:
2021-11-18 11:26:00 112-0: start replication job
2021-11-18 11:26:00 112-0: guest => VM 112, running => 0
2021-11-18 11:26:00 112-0: volumes => zfspool:vm-112-disk-0
[...]
2021-11-18 11:27:22 112-0: 11:27:22   3.75G   zfspool/vm-112-disk-0@__replicate_112-0_163

and this in the syslog:
Nov 18 11:27:57 pve-02 pmxcfs[1928]: [status] notice: received log
Nov 18 11:28:00 pve-02 pvescheduler[1237623]: ERROR: can't lock file '/var/lock/pvesr.lck' - got timeout
Nov 18 11:28:00 pve-02 pvescheduler[1237623]: got shutdown request, signal running jobs to stop
Nov 18 11:28:00 pve-02 pvescheduler[1237623]: server stopped
Thank you for the report! There is indeed a (hopefully mostly cosmetic) issue when the pvescheduler service is restarted while a replication is running (as happens when upgrading the pve-manager package). What happens is that the script handling the replication is terminated (which is why it shows as an error in the UI and the log stops), but I think the actual replication should still be running in the background. We'll make sure to fix this.

And if I try to remove the replication, this appears in the syslog and I have to remove the disk manually:
Nov 18 11:30:01 pve-02 pvescheduler[1352510]: zfs error: cannot destroy snapshot zfspool/vm-112-disk-0@__replicate_112-0_1637231160__: dataset is busy
At that time the replication might still have been running. If it happens again, please check with ps aux | grep pvesm.
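Roughly, to check whether the old send is still active and to clean up the leftover snapshot once it is not (snapshot name taken from your log):

Code:
# is a pvesm export / zfs send still running for that disk?
ps aux | grep -E 'pvesm|zfs send' | grep vm-112-disk-0
# once nothing is running anymore, the stale replication snapshot can be removed:
zfs destroy zfspool/vm-112-disk-0@__replicate_112-0_1637231160__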
 
