Proxmox VE 7.1 released!

Hi Fabian, thanks very much. I had waited a long time but it didn't help. I had limited the bandwidth to 15M, but this vDisk is only 30G in size, and it was working well before. I noticed the error status because the replication job re-runs automatically (I set up a two-hour schedule, but it re-runs after several minutes). I will try removing the bandwidth limit and see if that improves things. Thanks.
Hi, I just tried the replication job with unlimited bandwidth and it is much better than before. Thanks very much.
 
Hi,

just to add here. Upgraded to 7.1-5. There are 2 Windows VMs that boot and then, after around 2 minutes, become unresponsive. We have removed the CD drives completely but it doesn't make any difference. The HDs are using VirtIO.

thanks
 
Hello,
the same here, @itobin, but no Windows involved. Upgraded to 7.1-5 from 7.0.x; the issues started after the upgrade.
I have only one VM (HomeAssistant) and it becomes unresponsive after 2 minutes, sometimes 1 hour.
I'm using VirtIO.

Thanks
 
It's a bit weird; we have a few other Windows VMs and they are not having this issue. The 2 Windows VMs in question are non-production, the rest are production (luckily). I can't see anything different in the config. I will also add that the ones that are working have IDE CD-ROMs.

Initially the non-production VMs had SATA CD-ROMs, which we then removed; however, it has made no difference.
 
Setting Async IO to threads and cache to Write back seems to have stabilised it.

We have had to do this on the HD even though it's set to VirtIO.
 
Hi,
do not use SATA to boot Windows VMs.
Please change to IDE (Microsoft's recommendation) or to VirtIO (my recommendation).
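On the CLI, switching an existing disk over to VirtIO SCSI could look roughly like this (just a sketch, assuming VM ID 100 and a disk currently attached as sata0; the storage/volume name is an example, and the VirtIO drivers must already be installed in the guest):

qm set 100 --scsihw virtio-scsi-pci
qm set 100 --delete sata0                       # detaches the disk, it re-appears as unused0
qm set 100 --scsi0 local-lvm:vm-100-disk-0      # reattach the same volume on the SCSI bus
qm set 100 --boot order=scsi0                   # point the boot order at the new disk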

Best regards,
Falk

Hi Falk,

thanks for the hint - I read the "best practice" section after setting up my first Windows VM on PVE. As I had no issues (at least pre-7.1), I kept installing with SATA - could you tell me what exactly the downsides are, or point me to the related KB article?

Thanks in advance

Best,
oernst
 
Hi,

this has only been known for a relatively short time, so many thanks for your report anyhow! It affects mainly Windows VMs (but those are the ones most likely to use SATA, since it works out of the box) and can be worked around by switching the disk's Async IO mode to threads (for cache = write back/through) or native (for cache = off, none or direct sync), respectively.
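On the CLI, the workaround could be applied roughly like this (just a sketch, assuming VM ID 100 with a SATA disk on local-lvm; the volume name is an example, and the VM needs a full stop/start afterwards for the change to take effect):

qm set 100 --sata0 local-lvm:vm-100-disk-0,cache=writeback,aio=threads
# or, when running with the default cache (none) or directsync:
qm set 100 --sata0 local-lvm:vm-100-disk-0,aio=native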


Note that it really seems to be the full combination of kernel 5.13, SATA, io_uring and maybe Windows; changing any of the former three makes it work again. I cannot say for sure that it has to be Windows too, though.

Note that SATA is really not the best choice for a VM's disk bus in general; rather use (VirtIO) SCSI for the best performance and feature set. Windows VirtIO support is available through https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers
Hi T., your suggestions do not seem to work. The only working remedy seemed to be switching to IDE, so that is what we did. I will open a support ticket for this so you guys are aware of what is going on.

Cheers,

BC
 
Unfortunately kernel 5.13 has a bug that does not allow nested virtualization in Windows 11. I had to revert back to kernel 5.11 to make this work. Regardless, thank you very much for all the effort you put into improving Proxmox :)
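For reference, the revert could be done roughly like this (a sketch; pve-kernel-5.11 should be the kernel metapackage that shipped as the default with PVE 7.0):

apt install pve-kernel-5.11
# reboot and pick the 5.11 kernel from the GRUB "Advanced options" menu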
 
Hi Falk,

thanks for the hint - I read the "best practice" section after setting up my first Windows VM on PVE. As I had no issues (at least pre-7.1), I kept installing with SATA - could you tell me what exactly the downsides are, or point me to the related KB article?
I don't remember exactly where I read it, but for compatibility reasons Microsoft recommends IDE for the OS disk and an E1000 NIC for any hypervisor that is not specified in more detail. That way it always works.
 
Morning,

Just trying to get my head around the issues I am facing with Windows Server VMs across multiple hosts - 2x Dell PowerEdge R720 & 1x Dell PowerEdge T430.

Are there actually known issues with using the SCSI controller? The main issue I am finding right now is that if I reboot a guest VM I just get a black screen, and on occasion I will see "Guest has not initialized the display (yet)" - I end up having to kill the PID most of the time via the shell.

Some machines will also randomly reset with very little information provided within the guest as to what has happened, i.e. no minidump created etc.

pve-manager/7.1-5/6fe299a0 (running kernel: 5.13.19-1-pve)

This is the config of one of my 2019 VM's just deployed from Template that is giving me this issue right now:

agent: 1
bios: ovmf
boot: order=scsi0;net0
cores: 2
cpu: host
efidisk0: local-lvm:vm-130-disk-0,size=4M
hotplug: disk,network,usb,memory,cpu
machine: pc-q35-6.1
memory: 10240
name: proddb2
net0: virtio=46:C4:96:32:16:89,bridge=vmbr0
numa: 1
ostype: win10
scsi0: local-lvm:vm-130-disk-1,backup=0,cache=writeback,discard=on,size=40G
scsihw: virtio-scsi-pci
smbios1: uuid=f5834244-b54d-4744-8f13-0c1494d8059c
sockets: 2
vmgenid: d6f984d4-a46d-4558-a760-93f4b9be8e09

Cheers
 
I too can confirm some issues

Upgraded to 7.1 last night and 2 Windows 2016 instances froze randomly. Proxmox showed them as running, however the performance indicators were frozen in place and VNC was unable to connect. A Stop/Start would help, at least for a short while.
At first I figured there might be a problem with the VirtIO drivers, so I tried updating them. The installer got stuck and thereby broke the entire system. I restored from backup and then found this thread. It turns out both of them use SATA, so I switched to IDE, which has helped so far. However, 6 Windows 2019 instances did not crash but operated quite sluggishly. I removed all leftover "CD drives" and that fixed that particular problem. All Linux VMs were unaffected. One in particular uses SATA, but so far no I/O errors or any other problems.

I think we'll hold off any further upgrades for now.
 
Upgraded to 7.1 last night
Are you already running the latest kernel - pve-kernel-5.13.19-1-pve: 5.13.19-3, released yesterday?

In any case, please provide the following:

- pveversion -v
- qm config VMID
- information about your hardware (CPU and mainboard; is your mainboard BIOS up to date?)
 
Yes, I am running 5.13.19-1-pve (PVE 5.13.19-3). It was fetched last night. Only one package, "libpve-access-control", has become available since then.

boot: order=ide0
cores: 4
ide0: SSD-Space:110/vm-110-disk-0.qcow2,size=300G
ide1: HDD-Space:110/vm-110-disk-0.qcow2,size=100G
ide2: SSD-Space:110/vm-110-disk-1.qcow2,size=100G
machine: pc-i440fx-5.2
memory: 20480
name:
net0: virtio=6E:07:4F:25:38:22,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=09168f63-25da-4757-8568-3cdea228dc4d
sockets: 1
vmgenid: c691fe77-b3aa-4d24-b41f-15aea8e49b42

Those were previously set to SATA. Since the switch to IDE it has been working fine. It's a Windows 2016 VM. The twin VM which also crashed has the same settings, except it has 2 drives instead of 3.

Host: ProLiant DL380 Gen10, 2x Intel(R) Xeon(R) Silver 4110 CPU, 8 cores each.
BIOS is recent, except for the network ROM, because we had some issues with it.

I could try to reproduce the problem on a twin setup at another customer. However, most of their Windows VMs run with SATA, so I would rather not do another night shift.


Update:
It seems to only affect VMs that have a SATA drive as the boot volume. Another Windows 2019 VM uses 3 IDE plus an additional 3 SATA drives, the boot volume being one of the former.

I'll try to reproduce this tonight and switch everything to IDE on another setup. There is one VM that is not in production; I will leave it untouched and see what happens.
 
So I upgraded 2 additional machines. Wherever possible I set everything to IDE and left one Windows 10 VM to fail. For about 2 hours everything worked fine, and then one Windows 2019 VM crashed again.
After a Stop/Start it crashed again about 5 minutes later. The VM which was set up to fail did not, however. This particular 2019 VM uses 6 drives, IDE 0-3 and SATA 0-1; this is the only difference to the other 6 Windows 2019 VMs.

After I detached/reattached the drives I started the VM and it has been running since. No errors. The Windows 10 VM has not crashed yet. The same goes for the previous customer setup. No errors so far.

Looking through the logs, I could only find a little information:

Nov 26 00:27:09 proxmox-0 kernel: nfs: server XXX.XXX.XXX.XXX not responding, still trying
Nov 26 00:27:10 proxmox-0 kernel: call_decode: 253 callbacks suppressed
Nov 26 00:27:10 proxmox-0 kernel: nfs: server XXX.XXX.XXX.XXX OK
Nov 26 00:27:10 proxmox-0 kernel: nfs: server XXX.XXX.XXX.XXX OK
Nov 26 00:27:10 proxmox-0 kernel: nfs: server XXX:XXX:XXX:XXX OK

The log is flooded with those messages before the VM crashed, though the other node is having no such issues with the exact same NAS.

Once the VM crashes, all that's logged is:

Nov 26 02:05:01 proxmox-0 pvestatd[1472]: VM 116 qmp command failed - VM 116 qmp command 'query-proxmox-support' failed - unable to connect to VM 116 qmp socket - timeout after 31 retries
 
we're excited to announce the release of Proxmox Virtual Environment 7.1. It's based on Debian 11.1 "Bullseye", but uses the newer Linux kernel 5.13, QEMU 6.1, LXC 4.0, Ceph 16.2.6, and OpenZFS 2.1, and includes countless enhancements and bugfixes.

Proxmox Virtual Environment 7.1 brings several new functionalities and many improvements for management tasks in the web interface: support for Windows 11 including TPM, an enhanced creation wizard for VMs/containers, the ability to set backup retention policies per backup job in the GUI, and a new scheduler daemon supporting more flexible schedules.
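The retention settings use the familiar keep options and can be defined either per storage or, new in 7.1, per backup job in the GUI. As a rough sketch, an entry in /etc/pve/storage.cfg could look like this (storage name and values are hypothetical):

nfs: backup-nfs
        export /export/backups
        path /mnt/pve/backup-nfs
        server 192.0.2.10
        content backup
        prune-backups keep-last=3,keep-daily=7,keep-weekly=4,keep-monthly=6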

Here is a selection of the highlights:
  • Debian 11.1 "Bullseye", but using a newer Linux kernel 5.13
  • LXC 4.0, Ceph 16.2.6, QEMU 6.1, and OpenZFS 2.1
  • VM wizard with defaults for Windows 11 (q35, OVMF, TPM)
  • New backup scheduler daemon for flexible scheduling options
  • Backup retention
  • Protection flag for backups
  • Two-factor Authentication: WebAuthn, recovery keys, multiple factors for a single account
  • New container templates: Fedora, Ubuntu, Alma Linux, Rocky Linux
  • and many more enhancements, bugfixes, etc.
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.1

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-1-released

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-1

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
http://download.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 7.0 to 7.1 via GUI?
A: Yes.
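(On the CLI, the equivalent for such a minor upgrade is roughly the following, assuming the Proxmox VE repositories are configured:)

apt update
apt full-upgrade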

Q: Can I upgrade Proxmox VE 6.4 to 7.1 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I install Proxmox VE 7.1 on top of Debian 11.1 "Bullseye"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.1 with Ceph Octopus/Pacific?
A: This is a two-step process. First, you have to upgrade Proxmox VE from 6.4 to 7.1, and afterwards upgrade Ceph from Octopus to Pacific. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
Hi, which is better: setting the retention on the backup job, or setting it in the storage configuration? I'm using PBS and NFS storage.

Thanks
 
