Proxmox VE 7.1 released!

Hi,
we've upgraded our cluster to 7.1-5 and we're encountering a lot of problems with VMs running Win2012R2 and Win2019 Server... rolling back to kernel 5.11 and everything is working again!
Which CPU model and (if applicable) server vendor/model is in use in your setup?
 
The Prune Simulator is not yet adapted to Proxmox 7.1, is it?
It's been using the same algorithm (semantically; not the identical implementation) as Proxmox Backup Server since the beginning...

The backups of a calendar week are to be kept for all VMs.
So keep-daily set to 6 (or 7 if you want to be strictly one week) for the first job and keep-weekly (or keep-daily, it doesn't matter much if there's only one backup per week anyway) set to 1 for the second job?

The schedule simulator in the GUI makes little sense.
If it also showed on which days which backups are deleted, it would be usable.
What do you mean exactly?
 
So keep-daily set to 6 (or 7 if you want to be strictly one week) for the first job and keep-weekly (or keep-daily, it doesn't matter much if there's only one backup per week anyway) set to 1 for the second job?
Hi there,

that doesn't work because Job2 deletes all of Job1's backups.
Job2's setting specifies "one backup per week", not all backups of a week.

The only solution so far was:
1. Job VM100, Monday to Saturday 04:00, snapshot-mode backup, keep 7 days.
2. Job VM100, Sunday 04:00, stop-mode backup, keep 7 days.
3. Job VM101-..., Sunday 04:00, stop-mode backup, keep 1 week.

Because the people at Proxmox don't want to deal with calendar days or weeks. A calendar week goes from Monday to Sunday or from Sunday to Saturday, depending on the counting method.
And if I want to keep a week's backups, I want to keep the backups from every day of that week.

What do you mean exactly?

This shows on which days which backups are deleted.

[screenshot attached: Screenshot 2021-11-18 133442.jpg]
Many greetings
Detlef Paschke
 
that doesn't work because Job2 deletes all of Job1's backups.
Job2's setting specifies "one backup per week", not all backups of a week.
Ah ok, yeah rechecking your job description: that cannot be described by the prune settings, that just clashes.
You still could have only two jobs:

VM 100 -> daily 04:00, keep-daily = 7
VM 101,102,103,104 -> sun 04:00, keep-weekly = 1

Jobs scheduled at the same time will still run queued, one after the other, so there's not much difference there.
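
For illustration, roughly how the retention of those two jobs could be expressed on the command line with vzdump; the storage name, mode and VMIDs here are just placeholders, in practice you'd configure this as scheduled jobs in the GUI:

Code:
# Job 1 - VM 100, daily snapshot-mode backup, keep the last 7 daily backups
vzdump 100 --storage local --mode snapshot --prune-backups keep-daily=7

# Job 2 - run on Sundays for each of VM 101-104, keep one backup per week
vzdump 101 --storage local --mode stop --prune-backups keep-weekly=1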

This shows on which days which backups are deleted.
This is a schedule simulator for grasping when a backup will happen; it's not a prune simulator.

You can just use the PBS prune simulator for that: https://pbs.proxmox.com/docs/prune-simulator/
 
Ah ok, yeah rechecking your job description: that cannot be described by the prune settings, that just clashes.
You still could have only two jobs:

VM 100 -> daily 04:00, keep-daily = 7
VM 101,102,103,104 -> sun 04:00, keep-weekly = 1
Hi there,

I have to insert an additional job in between because I want to have a stop-mode backup of VM100 on Sunday.
Therefore I need three jobs, although two jobs could be enough if Proxmox handled days, weeks, months and years correctly.

Except for a few visual improvements, Proxmox 7.1 has unfortunately not changed anything in the backup schedule. For Proxmox, the definition of "week" unfortunately still remains one backup per week, not all backups of a week.

Many greetings
Detlef Paschke
 
Therefore I need three jobs, although two jobs could be enough if Proxmox handled days, weeks, months and years correctly.
What is not handled correctly? And what is the issue with having three jobs for your rather odd setup? It's a one-time setup anyway...

Except for a few visual improvements, Proxmox 7.1 has unfortunately not changed anything in the backup schedule. For Proxmox, the definition of "week" unfortunately still remains one backup per week, not all backups of a week.
That's by design, and you mean backup pruning/retention, not the backup schedule; those are two different things.
The backup schedule went from a fixed time and day-of-week selector to a fully flexible, calendar-event-based schedule, so I wouldn't call that no change ;)
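
For example, schedules are now calendar events in the systemd style, roughly like these (the exact values are just examples):

Code:
04:00            # every day at 04:00
mon..sat 04:00   # Monday to Saturday at 04:00
sun 04:00        # Sundays only, at 04:00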

Anyhow, let's get back to the Proxmox VE 7.1 release; for other topics please open a separate thread (albeit the prune keep-retention design probably won't be changed soon).
 
Now that 7.1 has moved to ZFS 2.1.1, I am getting the status below on my ZFS RAID-1 boot rpool and other non-root zpools.

Code:
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.

Any reason not to zpool upgrade all pools, including the rpool, on current installs that are not using GRUB?
 
Any reason not to zpool upgrade all pools, including the rpool, on current installs that are not using GRUB?
Well, basically what Dunuin wrote:
I would always wait some weeks before upgrading the ZFS pools. As soon as you upgrade your pool you won't be able to use older OpenZFS versions any longer. So if you, for example, run into problems with PVE 7.1, you also wouldn't be able to use that pool with a fresh install of PVE 7.0.

IOW, if you run into a regression with the 5.13-based kernel, it's not as easy as just rebooting into an older 5.11 as a workaround.
If you're sure 5.13 is working fine in your setup, there's not much of a reason left.
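
If in doubt, you can check what an upgrade would actually change before committing to it; a quick sketch:

Code:
# list pools that still have disabled features (read-only, changes nothing)
zpool upgrade
# show the per-feature state of the root pool
zpool get all rpool | grep feature@
# only once you're sure you won't need to boot an older ZFS/kernel anymore:
zpool upgrade rpool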
 
Sorry, I did not see that. OK, I'll wait until Proxmox gives me a few 5.13 kernels that I can revert to. Running zpool upgrade only lists "draid" as a new available feature, so there's no rush or requirement for that from my end.
 
Hi, I just wanted to report that none of our VMs with a SATA disk work anymore after the upgrade to 7.1. Where should I report this?

BC
 
Hi, I just wanted to report that none of our VMs with a SATA disk work anymore after the upgrade to 7.1. Where should I report this?

BC
Hi,

this has been known since relatively recently; many thanks for your report anyhow! It affects mainly Windows VMs (but those are the ones most likely to use SATA, since it works out of the box) and can be worked around by switching the disk's Async IO mode to threads (for cache = writeback/writethrough) or native (for cache = off, none or directsync), respectively.
[screenshot attached]


Note that it really seems to be the full combination of kernel 5.13, SATA, io_uring, and maybe Windows; changing any of the former three makes it work again. I cannot say for sure that it has to be Windows too, though.
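
If you prefer the CLI over the GUI, the workaround could look roughly like this; VMID, storage and volume name are placeholders for your actual disk spec:

Code:
# cache = writeback (or writethrough): switch Async IO to threads
qm set 100 --sata0 local-lvm:vm-100-disk-0,cache=writeback,aio=threads
# cache = none (default) or directsync: switch Async IO to native
qm set 100 --sata0 local-lvm:vm-100-disk-0,aio=native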

Note that SATA is really not the best choice for a VM's disk bus in general; rather use (VirtIO) SCSI for the best performance and feature set. Windows VirtIO support is available through https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
 
Hi,

Which CPU model and (if applicable) server vendor/model is in use in your setup?
Hi there,
we have 6 Dell R730 servers with dual Intel Xeon E5-2620 v4. We have problems with VMs running Win2012R2 and Win2019, while with Linux VMs we don't have problems.
For the Windows VMs we're using the VirtIO SCSI controller with a SCSI hard drive, no cache.
We've noticed that if the VM disks are on external NFS storage we don't have problems, whereas if we use local storage we have a lot of problems!

Any fix for these problems?

Best regards,
Paolo
 
Hi,

Since I upgraded to 7.1, the interface corresponding to my Sonnet Technologies 10G SFP+ Thunderbolt adapter does not show up.

There is a kernel message indicating that it is available:

[ 3.758181] thunderbolt 0-1: Sonnet Technologies, Inc Solo 10G SFP+ Thunderbolt 3 Edition

but that is all.

To get it back, I have to unplug the adapter and plug it back in; then the atlantic module is autoloaded, and after that I can set the device up, add it to the bridge, and bring the bridge up to get networking back.
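
Concretely, the manual recovery after re-plugging is roughly the following; the interface and bridge names are just what they're called on my box and will differ elsewhere:

Code:
# once the atlantic module has loaded again
ip link set enp5s0 up
ip link set enp5s0 master vmbr0
ip link set vmbr0 up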

Should be a regression with kernel 5.13.19-2
 
  • New backup scheduler daemon for flexible scheduling options

    it was so bad

    it used to go by the clock, not anymore
 
Did you hit the refresh button before trying to upgrade? Do you have the non-subscription repository active under Repositories?
Taking a backup manually works, no problem, but it doesn't run automatically.
[screenshot attached]


because they set it to Monday to Friday, which is nonsense

I guess they don't have a Saturday or Sunday.
 
Updated one machine to 7.1, and on host kernel 5.13.19-1-pve the (Linux-based) VMs work for a few minutes
and then end up stuck on disk access. Restarting the VM helps, but after a few minutes the problem is back; restarting the host, same thing.
Back on 5.11.22-7-pve the problem is gone. Note that when running 5.13 I don't see any issues with disk access on the host itself.

VMs are using qcow2 images that reside on ext4 fs on top of device mapper. VirtioSCSI as controller.

Xeon E5620

Looking at a diff between dmesg for the 5.11 and 5.13 host kernels: no issues or important differences there. No errors or warnings
in the host kernel log/dmesg.

From sample VM:
Code:
Nov 18 15:11:05 moon kernel: INFO: task jbd2/vda3-8:247 blocked for more than 122 seconds.
Nov 18 15:11:05 moon kernel:       Tainted: G                T 5.15.2-1 #1
Nov 18 15:11:05 moon kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 18 15:11:05 moon kernel: task:jbd2/vda3-8     state:D stack:    0 pid:  247 ppid:     2 flags:0x00004000
Nov 18 15:11:05 moon kernel: Call Trace:
Nov 18 15:11:05 moon kernel:  ? bit_wait+0x60/0x60
Nov 18 15:11:05 moon kernel:  __schedule+0x2ae/0x1360
Nov 18 15:11:05 moon kernel:  ? bit_wait+0x60/0x60
Nov 18 15:11:05 moon kernel:  schedule+0x47/0xb0
Nov 18 15:11:05 moon kernel:  io_schedule+0x42/0x70
Nov 18 15:11:05 moon kernel:  bit_wait_io+0xd/0x60
Nov 18 15:11:05 moon kernel:  __wait_on_bit+0x2a/0x90
Nov 18 15:11:05 moon kernel:  out_of_line_wait_on_bit+0x92/0xc0
Nov 18 15:11:05 moon kernel:  ? var_wake_function+0x30/0x30
Nov 18 15:11:05 moon kernel:  jbd2_journal_commit_transaction+0x11bc/0x1a50 [jbd2 810699dabe75d12c39e68deeb749aed65e17466c]
Nov 18 15:11:05 moon kernel:  kjournald2+0xdb/0x2b0 [jbd2 810699dabe75d12c39e68deeb749aed65e17466c]
Nov 18 15:11:05 moon kernel:  ? wait_woken+0x70/0x70
Nov 18 15:11:05 moon kernel:  ? load_superblock.part.0+0xc0/0xc0 [jbd2 810699dabe75d12c39e68deeb749aed65e17466c]
Nov 18 15:11:05 moon kernel:  kthread+0x12a/0x150
Nov 18 15:11:05 moon kernel:  ? set_kthread_struct+0x50/0x50
Nov 18 15:11:05 moon kernel:  ret_from_fork+0x22/0x30
Nov 18 15:11:05 moon kernel: INFO: task syslog-ng:10744 blocked for more than 122 seconds.
Nov 18 15:11:05 moon kernel:       Tainted: G                T 5.15.2-1 #1
Nov 18 15:11:05 moon kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 18 15:11:05 moon kernel: task:syslog-ng       state:D stack:    0 pid:10744 ppid:  1218 flags:0x00000000
Nov 18 15:11:05 moon kernel: Call Trace:
Nov 18 15:11:05 moon kernel:  ? bit_wait+0x60/0x60
Nov 18 15:11:05 moon kernel:  __schedule+0x2ae/0x1360
Nov 18 15:11:05 moon kernel:  ? bit_wait+0x60/0x60
Nov 18 15:11:05 moon kernel:  schedule+0x47/0xb0
Nov 18 15:11:05 moon kernel:  io_schedule+0x42/0x70
Nov 18 15:11:05 moon kernel:  bit_wait_io+0xd/0x60
Nov 18 15:11:05 moon kernel:  __wait_on_bit+0x2a/0x90
Nov 18 15:11:05 moon kernel:  out_of_line_wait_on_bit+0x92/0xc0
Nov 18 15:11:05 moon kernel:  ? var_wake_function+0x30/0x30
Nov 18 15:11:05 moon kernel:  do_get_write_access+0x26f/0x390 [jbd2 810699dabe75d12c39e68deeb749aed65e17466c]
Nov 18 15:11:05 moon kernel:  jbd2_journal_get_write_access+0x69/0x90 [jbd2 810699dabe75d12c39e68deeb749aed65e17466c]
Nov 18 15:11:05 moon kernel:  __ext4_journal_get_write_access+0x84/0x1b0 [ext4 181bd769ec63a9e87df4bfc5223c995bdeb0ca1b]
Nov 18 15:11:05 moon kernel:  ext4_reserve_inode_write+0xa0/0xf0 [ext4 181bd769ec63a9e87df4bfc5223c995bdeb0ca1b]
Nov 18 15:11:05 moon kernel:  __ext4_mark_inode_dirty+0x77/0x240 [ext4 181bd769ec63a9e87df4bfc5223c995bdeb0ca1b]
Nov 18 15:11:05 moon kernel:  ? __ext4_journal_start_sb+0x110/0x130 [ext4 181bd769ec63a9e87df4bfc5223c995bdeb0ca1b]
Nov 18 15:11:05 moon kernel:  ext4_dirty_inode+0x5a/0x90 [ext4 181bd769ec63a9e87df4bfc5223c995bdeb0ca1b]
Nov 18 15:11:05 moon kernel:  __mark_inode_dirty+0x133/0x2e0
Nov 18 15:11:05 moon kernel:  generic_update_time+0x7b/0x100
Nov 18 15:11:05 moon kernel:  file_update_time+0x14f/0x170
Nov 18 15:11:05 moon kernel:  ? generic_write_checks+0x67/0xd0
Nov 18 15:11:05 moon kernel:  ext4_buffered_write_iter+0x5d/0x190 [ext4 181bd769ec63a9e87df4bfc5223c995bdeb0ca1b]
Nov 18 15:11:05 moon kernel:  do_iter_readv_writev+0x13c/0x1b0
Nov 18 15:11:05 moon kernel:  do_iter_write+0x93/0x1f0
Nov 18 15:11:05 moon kernel:  vfs_writev+0xfe/0x1b0
Nov 18 15:11:05 moon kernel:  do_writev+0x77/0x130
Nov 18 15:11:05 moon kernel:  do_syscall_64+0x66/0xd0
Nov 18 15:11:05 moon kernel:  ? do_syscall_64+0xe/0xd0
Nov 18 15:11:05 moon kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
 
VMs are using qcow2 images that reside on ext4 fs on top of device mapper. VirtioSCSI as controller.
But they use VirtIO Block, so the SCSI controller does not matter that much. I'd suggest trying either SCSI for the disk (detach and re-attach) and/or changing the Async IO mode away from io_uring, as described in a post above for some other issue (not sure if directly related): https://forum.proxmox.com/threads/proxmox-ve-7-1-released.99847/post-431438
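
As a rough sketch of the first option on the CLI (VMID, storage and volume name are placeholders for the actual ones):

Code:
# detach the VirtIO Block disk; it then shows up as unusedN
qm set 101 --delete virtio0
# re-attach the same volume on the SCSI bus, here also with Async IO set to native
qm set 101 --scsi0 local:101/vm-101-disk-0.qcow2,aio=native
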
Xeon E5620
Hmm, quite an old CPU (released Q1 2010); does the system have the newest firmware version installed?
 
