Proxmox VE 7.3 released!

Hi,

I'm having a networking issue since the upgrade to 7.3:

I have a network bridge with no physical port attached:
Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#LAN Virtual Bridge 1

which I use for networking between some VMs and LXC containers.

This used to work fine, but since the upgrade to 7.3 it only works after a fresh boot of the Proxmox machine and stops working after a few minutes. Sometimes I can't even access the console of these machines/containers via the Proxmox web UI.

Has something changed in that space with 7.3?
I could not find any helpful log entries yet; it just stops working.
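For reference, this is roughly how I check the bridge state when it stops working (standard iproute2/journalctl commands, nothing Proxmox-specific):
Code:
ip -d link show vmbr1        # bridge state and details
ip link show master vmbr1    # tap/veth ports currently attached to the bridge
journalctl -b -u networking  # ifupdown2/networking messages since boot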

Thanks
 
This used to work fine, but since the upgrade to 7.3 it only works after a fresh boot of the Proxmox machine and stops working after a few minutes. Has something changed in that space with 7.3?
I had the same configuration in a fresh installation with a KVM Windows Server 2019 VM, and everything is OK!
 
Well, that may be. But I am seeing this issue regularly - I believe since the update.

Since I can't even ping between VMs/LXCs attached to this bridge/network, it's presumably somehow related to Proxmox?
 
We're very excited to announce the release of Proxmox Virtual Environment 7.3. It's based on Debian 11.5 "Bullseye", but uses a newer Linux kernel 5.15 or 5.19, along with QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6.

Proxmox Virtual Environment 7.3 comes with initial support for Cluster Resource Scheduling, enables updates for air-gapped systems with the new Proxmox Offline Mirror tool, improves the UX of various management tasks, and adds interesting storage technologies like ZFS dRAID, along with countless enhancements and bugfixes.

Here is a selection of the highlights:
  • Debian 11.5 "Bullseye", but using a newer Linux kernel 5.15 or 5.19
  • QEMU 7.1, LXC 5.0.0, and ZFS 2.1.6
  • Ceph Quincy 17.2.5 and Ceph Pacific 16.2.10; heuristic checks to see if it is safe to stop or remove a service instance (MON, MDS, OSD)
  • Initial support for a Cluster Resource Scheduler (CRS)
  • Proxmox Offline Mirror - https://pom.proxmox.com/
  • Tagging virtual guests in the web interface
  • CPU pinning: Easier affinity control using taskset core lists (see the sketch after this list)
  • New container templates: Fedora, Ubuntu, Alma Linux, Rocky Linux
  • Reworked USB devices: can now be hot-plugged
  • ZFS dRAID pools
  • Proxmox Mobile: based on Flutter 3.0
  • And many more enhancements.
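For example, the new affinity control is set per VM; a minimal sketch (VM ID and core list are illustrative, assuming the qm CLI exposes the new affinity option):
Code:
# pin all vCPUs of VM 100 to host cores 0-3 (taskset core-list syntax)
qm set 100 --affinity 0-3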
As always, we have included countless bugfixes and improvements in many places; see the release notes for all details.

Release notes
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.3

Press release
https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-7-3

Video tutorial
https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-7-3

Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso

Documentation
https://pve.proxmox.com/pve-docs

Community Forum
https://forum.proxmox.com

Bugtracker
https://bugzilla.proxmox.com

Source code
https://git.proxmox.com

We want to shout out a big THANK YOU to our active community for all your intensive feedback, testing, bug reporting and patch submitting!

FAQ
Q: Can I upgrade Proxmox VE 7.0 or 7.1 or 7.2 to 7.3 via GUI?
A: Yes.
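For reference, the command-line equivalent is the standard apt workflow (assuming the Proxmox VE 7.x repositories are already configured):
Code:
apt update
apt full-upgrade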

Q: Can I upgrade Proxmox VE 6.4 to 7.3 with apt?
A: Yes, please follow the upgrade instructions on https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0

Q: Can I install Proxmox VE 7.3 on top of Debian 11.x "Bullseye"?
A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye

Q: Can I upgrade my Proxmox VE 6.4 cluster with Ceph Octopus to 7.3 with Ceph Octopus/Pacific/Quincy?
A: This is a three-step process. First, you have to upgrade Proxmox VE from 6.4 to 7.3, and afterwards upgrade Ceph from Octopus to Pacific, and then to Quincy. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific
https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy

Q: Where can I get more information about feature updates?
A: Check the roadmap, forum, the mailing list, and/or subscribe to our newsletter.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
Hi

I'm from Iran.
I am very thankful for your fantastic work.
I have been using Proxmox VE for 6 months, and today I updated it to 7.3.

Nice Work

Thank YOU

With peace and love,
BEST REGARDS
 
Hi everyone,

sorry I'm a bit late to the party.

I'm running 7.2-11, and a few weeks ago I had to pin the kernel to version 5.13.19-6-pve because of some live-migration issues.

Do you know if I can safely upgrade to 7.3? Do you advise upgrading to the provided 5.15.74-1 kernel, or is it better to stick with the current one?

Kind regards
 
It depends a bit on your workload and underlying storage, but in general it can help to reduce IO pressure, or even hangs, in the guest - especially during backups or snapshots, which can both put additional IO load on the storage and the guest in general. We know of setups that saw a big improvement with IO threads, and some where enabling them had basically no significant (visible) impact. There are not many pitfalls, but IO threads are still an area with relatively a lot of change going on, albeit they have become pretty stable in recent PVE releases. And if an issue really does pop up, they can always be disabled until it is fixed.
If I understand it correctly, it makes no difference if I use only a single disk (SCSI)? It only makes sense with more than one disk?
 
If I understand it correctly, it makes no difference if I use only a single disk (SCSI)? It only makes sense with more than one disk?
No, it also makes a difference for a VM with just one disk, as otherwise IO is handled by the QEMU main thread, which then has less time for other work (some of which may delay or block the VM in the worst case, especially if IO-intensive tasks like backups are running).
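For illustration, enabling it for a single-disk VM could look roughly like this (VM ID and storage/disk names are placeholders):
Code:
# switch to one SCSI controller per disk, then enable an IO thread on the disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1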
 
No, it also makes a difference for a VM with just one disk, as otherwise IO is handled by the QEMU main thread, which then has less time for other work (some of which may delay or block the VM in the worst case, especially if IO-intensive tasks like backups are running).
Thanks for that explanation. :)
The previous version of the docs implies that it's really just meant for when you have more than one virtual disk. Is there a way to submit a ticket/issue for that to be updated?

(I don't like to make work for other people, but I definitely don't feel competent enough with how Proxmox works to try editing the docs. :p )
 
Hi,
Thanks for that explanation. :)
The previous version of the docs implies that it's really just meant for when you have more than one virtual disk. Is there a way to submit a ticket/issue for that to be updated?
this is usually done via https://bugzilla.proxmox.com/, but if it's really minor you can also just post it here. What section of the docs are you referring to exactly?
 
Hi,

this is usually done via https://bugzilla.proxmox.com/, but if it's really minor you can also just post it here. What section of the docs are you referring to exactly?

Not the OP, but at least this part might be worth rephrasing:
IO Thread
The option IO Thread can only be used when using a disk with the VirtIO controller, or with the SCSI controller, when the emulated controller type is VirtIO SCSI single. With this enabled, Qemu creates one I/O thread per storage controller, rather than a single thread for all I/O. This can increase performance when multiple disks are used and each disk has its own storage controller.
https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_hard_disk
 
Anyone else with LUKS problems after the update from today/yesterday? I just did a reboot because of the firmware upgrade, and now my LUKS-encrypted LVM-Thin doesn't automount anymore.

Edit:
Looks like my mdadm software raid naming changed:
What was "/dev/md/j3710:md_1" is now "/dev/md/md_1". So crypttab can't find the device anymore.

Edit:
Changed crypttab to "/dev/md/md_1" and now everything works fine again. Is there a way to prevent this problem in the future? I thought using "/dev/md/j3710:md_1" should be unique and better than pointing crypttab directly at "/dev/md127"?
 
Looks like my mdadm naming changed:
What was "/dev/md/j3710:md_1" is now "/dev/md/md_1". So crypttab can't find the device anymore.
It's been quite a while since I dealt with mdraid - so my memory might be wrong - but I think you can add your arrays to /etc/mdadm/mdadm.conf for stable naming (though I don't remember having seen mdraid device nodes with a <prefix:>, so I'm not sure if this would help here).
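As a rough, untested sketch of what I mean (device names taken from your post, double-check everything before rebooting):
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the arrays for stable naming
update-initramfs -u                              # rebuild the initramfs so it picks up the config
# alternatively, reference the LUKS device by UUID in /etc/crypttab:
blkid /dev/md/md_1                               # shows the UUID to use instead of a /dev/md path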

anything in the logs?

In any case - it might be better to open a new thread (and add a link here for others who run into this) - feel free to mention me and I'll take a look!
 
Hi All,

I am in the process of migrating our servers to 7.3. I have a FreeBSD-based VM running a software application that requires an external USB disk key to load. On 7.2, I had no issues with attaching the USB device in Hardware, but now on the upgraded (7.3) PVE, with the same hardware config, it is not detecting the USB key. I saw in the release notes that there is a new USB controller; could that be causing this issue for me?

Appreciate if you could provide any sort of pointers.
 
On 7.2, I had no issues with attaching the USB device in Hardware, but now on the upgraded (7.3) PVE, with the same hardware config, it is not detecting the USB key.
can you post your vm config?
 
Hi dcsapak,
Thanks for your reply. Below is the config:

Code:
boot: order=sata0;net0
cores: 2
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=6.2.0,ctime=1668575345
name: XXXX
net0: e1000=A2:2A:36:D9:A2:35,bridge=vmbr0,firewall=1,tag=111
net1: e1000=E2:B6:17:75:33:3F,bridge=vmbr0,firewall=1,tag=222
net2: e1000=F6:EE:73:08:04:C8,bridge=vmbr0,firewall=1,tag=333
numa: 0
onboot: 1
ostype: l26
sata0: CEPH:vm-227-disk-0,discard=on,size=30536M,ssd=1
smbios1: uuid=723ffe80-01fa-4440-970b-e3ea92f1e8d4
sockets: 2
usb0: host=0403:c580
vmgenid: 7208ddc0-1ffb-4c71-8c44-78e81b07aa10

I've upgraded all other hosts, but left one host on 7.2 just because of this issue.
 
ostype: l26
if you have a freebsd machine you should not use linux as the ostype, as that will activate features intended for linux guests (such as the new usb controller, which works in linux but may not in freebsd - i didn't check)
 
if you have a freebsd machine you should not use linux as the ostype, as that will activate features intended for linux guests (such as the new usb controller, which works in linux but may not in freebsd - i didn't check)
Thanks, but which one should I be using? As I mentioned earlier, this stopped working once I upgraded to 7.3. Same config is working on 7.2.
 
Thanks, but which one should I be using? As I mentioned earlier, this stopped working once I upgraded to 7.3. Same config is working on 7.2.
yes we automatically changed the controller for new machine types for linux and windows (>=8).
for freebsd i'd use the default ostype 'other'
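for example, something along these lines should do it (VM ID taken from your config above; untested on my side):
Code:
qm set 227 --ostype other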
 
yes we automatically changed the controller for new machine types for linux and windows (>=8).
for freebsd i'd use the default ostype 'other'
Hi,
Thank you! After my last post I searched and found out about the 'Other' OS type; I used that and it worked! I have some other FreeBSD machines running, so I will need to go through all of them.

Thanks again for your time!
 
