Proxmox VE 7.3 released!

There already is one for HA: the migrate shutdown-policy. And yes, once set, the CRS selection will be used to pick better target nodes.
For non-HA it's planned, but that needs much more extra work (especially if you consider more than the one narrow use case where everything is shared already, which could just use HA anyway). Once that mechanism is there, the new CRS stuff can be used for it too. But as said, CRS is "just" the selection, not the underlying mechanism that handles such migrations; that is done by the HA stack, which already has all the required information.
Hi Thomas,

Unless something has changed, the current HA does not provide what we need. It migrates workloads when the node goes down and then brings them back when the node comes up. That's fine if the node crashes or something, but not if we want to do maintenance on the node. We need a way to manually tell HA to migrate the VMs to other nodes and not migrate them back until we say so. We want to be able to drain a node, do some work on it, test that everything is working properly, and then tell HA it can migrate VMs back. Draining a node is a pretty standard requirement for any infrastructure and has been raised here many, many times before.
 
There's one for the general non-HA thing, but I don't think there's one for enabling the already-existing shutdown mechanism at runtime, so yes, please add an enhancement request and maybe mention this thread, to avoid a colleague directly merging it as a duplicate of the non-HA one.
Here's a feature request from about 18 months ago asking for this feature. Perhaps this can be attached to that?

https://forum.proxmox.com/threads/feature-request-maintenance-mode-and-or-drs.93235
 

Hans Otto Lunde

Active Member
Jul 13, 2017
18
2
28
60
Odder, Denmark
www.egmont-hs.dk
please check
- that all nodes in the cluster have their clocks synchronized with NTP and agree on the current time ;)
- the timestamp of /etc/pve/authkey*, it should be within the last 24h
-- if it is further in the past your cluster is likely not quorate! check logs of corosync and pve-cluster and output of pvecm status
-- if it is in the future, one of your nodes likely had its clock set wrong, a simple touch /etc/pve/authkey.* should reset it and fix the problem
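For reference, the checks above can be scripted; a rough sketch (assuming GNU find/coreutils on the node, and using a 24h window matching the expected key age mentioned above):

```shell
# Flag any authkey file in /etc/pve whose mtime is older than 24h (1440 minutes)
stale=$(find /etc/pve -maxdepth 1 -name 'authkey*' -mmin +1440)
if [ -n "$stale" ]; then
    echo "possibly stale key material:"
    echo "$stale"
    pvecm status    # an old timestamp usually means the cluster lost quorum
fi
# If the timestamp is in the *future* instead, reset it as suggested above:
#   touch /etc/pve/authkey.*
```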
Hi
Time sync is set up and working, but the timestamp of /etc/pve/authkey* was too old!
The test cluster is not running 24/7, so maybe something went wrong when I updated it right after firing it up, after it had been shut down for some days.
Anyway, this fixed the problem, thanks a lot for your advice. I have now updated the production cluster and everything looks just right.
Out of pure interest: by what mechanism is this timestamp updated?
Danish greetings - Hans Otto
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
5,559
1,832
174
South Tyrol/Italy
shop.proxmox.com
Unless something has changed, the current HA does not provide what we need. It migrates workloads when the node goes down and then brings them back when the node comes up. That's fine if the node crashes or something, but not if we want to do maintenance on the node. We need a way to manually tell HA to migrate the VMs to other nodes and not migrate them back until we say so. We want to be able to drain a node, do some work on it, test that everything is working properly, and then tell HA it can migrate VMs back. Draining a node is a pretty standard requirement for any infrastructure and has been raised here many, many times before.
Fencing and the shutdown-policy are two different things; the latter got added sometime in the 6.x series.
This thread already contained an answer to this:
I think what they were asking about, at least that's what I am asking for, is the possibility to essentially activate the shutdown migrate policy without shutting down.
then
That can indeed be implemented now without all too much trouble (knocks wood), and also independently of the new CRS, which will just provide better target nodes for balancing.
And finally:
That would be very useful! Is it already on the radar, or is there even a feature request in Bugzilla, or should someone (me) add one?
There's one for the general non-HA thing, but I don't think there's one for enabling the already-existing shutdown mechanism at runtime, so yes, please add an enhancement request and maybe mention this thread, to avoid a colleague directly merging it as a duplicate of the non-HA one.
 

t.lamprecht

Proxmox Staff Member
Staff member
And I have absolutely NO 3rd-party software on this system. I've installed some Debian packages like deborphan, members, and ethtool, but I can't imagine them interfering with this.
Yeah, that should not do anything; I just mentioned it to be sure, as some users mess around with the frontend/backend of PVE and then wonder why things break.
This is a cluster test-system that I have for exactly this kind of thing, and it's configured in precisely the same way as the production-system.
I'll be glad to do some digging around to solve this. And it might have something to do with my setup. Especially if nobody else is experiencing this.
Hmm, definitely odd. Anything in either the syslog/journal or the browser's web-dev console at the time of the login attempt?
Maybe open a new thread and mention my username @t.lamprecht so I can check this out more closely.

EDIT: Oh, I only saw your later reply now; it seems to be fixed already.
 

Hans Otto Lunde

Active Member
Yeah, that should not do anything; I just mentioned it to be sure, as some users mess around with the frontend/backend of PVE and then wonder why things break.

Hmm, definitely odd. Anything in either the syslog/journal or the browser's web-dev console at the time of the login attempt?
Maybe open a new thread and mention my username @t.lamprecht so I can check this out more closely.

EDIT: Oh, I only saw your later reply now; it seems to be fixed already.
Yes, it was the timestamp on /etc/pve/authkey.pub that was the cause of the problem.
But I appreciate your answer and help.
Have a nice day!
 
May 11, 2022
19
3
3
Hi Thomas,

Unless something has changed, the current HA does not provide what we need. It migrates workloads when the node goes down and then brings them back when the node comes up. That's fine if the node crashes or something, but not if we want to do maintenance on the node. We need a way to manually tell HA to migrate the VMs to other nodes and not migrate them back until we say so. We want to be able to drain a node, do some work on it, test that everything is working properly, and then tell HA it can migrate VMs back. Draining a node is a pretty standard requirement for any infrastructure and has been raised here many, many times before.

I'm currently doing what you want. Here's how:
  1. Enable VM replication and wait for the Sync to be completed
  2. Click "Migrate" on the VM and move it to another host. Because of the upfront replication, the migration takes only a couple of seconds.
Works like a charm (unless you migrate from AMD to Intel... this always fails for me and requires a VM reboot afterwards).
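For anyone wanting to script this drain-by-hand approach, it could look roughly like the following (a sketch only; the VM ID 100, job ID 100-0, the node name, and the 15-minute schedule are placeholders, and the commands run on the source node; depending on storage you may also need `--with-local-disks`):

```shell
# 1. Set up replication of VM 100 to the target node and trigger a sync now
pvesr create-local-job 100-0 targetnode --schedule '*/15'
pvesr run --id 100-0

# 2. Migrate; the upfront replication keeps the actual transfer short
qm migrate 100 targetnode --online
```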

But I get your point - this could be something in the HA-section as a setting.
 
Fencing and the shutdown-policy are two different things; the latter got added sometime in the 6.x series.
This thread already contained an answer to this:

then

And finally:

And none of what you've quoted provides what we've all been asking for, for years. We want a way to manually trigger the bulk migration that HA does on shutdown. And we want it not to migrate back until we tell it to, even if the node is rebooted. It's been a standard feature in VMware forever.

Looking at Bugzilla, you even said in 2019 it was on your todo list, Tom. So why has this obvious requirement never made it into a release?

https://bugzilla.proxmox.com/show_bug.cgi?id=2181

Thomas Lamprecht 2019-04-16 17:20:53 CEST Yes, something like this is on my TODO, i.e., requesting a maintenance mode for a node (initially only a single node per cluster may be maintained), which then moves VMs that can be live-migrated and suspends those that cannot (possibly configurable to a certain degree), and locks the VM against new incoming migrations, backups, storage replications (?) ... which should be visible in the API/WebUI. It'll need a bit, though.
 

t.lamprecht

Proxmox Staff Member
Staff member
And none of what you've quoted provides what we've all been asking for, for years. We want a way to manually trigger the bulk migration that HA does on shutdown. And we want it not to migrate back until we tell it to, even if the node is rebooted. It's been a standard feature in VMware forever.
In the quotes I explicitly said that this can be implemented now without too much trouble, and that a separate Bugzilla enhancement request should be opened for it, w.r.t. limiting it to the HA stack. Also, please don't speak for everyone with your wants and needs; different setups/users have different ones.
Looking at Bugzilla, you even said in 2019 it was on your todo list, Tom. So why has this obvious requirement never made it into a release?
Who is Tom? There's a lot of work to be done, and a lot was done, e.g., a fully new product was developed in 2019 and 2020 (PBS). Just because something is obvious to you doesn't mean it's the most obvious and most important thing to everyone. As said, it can be done now relatively easily. I really don't understand what your goal with this conversation is. I already quoted the relevant parts; if you had actually read them and wanted to move that feature forward, you could have opened the separate request for tracking it already, instead of posting the same things over and over again here.
 

karypid

New Member
Mar 7, 2021
11
4
3
45
EDIT: adding the initcall_blacklist=sysfb_init kernel parameter fixed things for me! Everything now works as before. I did not have to uncheck the "All Functions" box like others have reported. The giveaway, if you have two VMs like me, is that the one with the issue will be the one logging kernel messages when the host starts.
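For anyone else hitting this, the parameter needs to go on the kernel command line. On a GRUB-booted PVE host that could look like this (a sketch; if you boot via systemd-boot, e.g. on ZFS/UEFI installs, edit /etc/kernel/cmdline and run `proxmox-boot-tool refresh` instead):

```shell
# Append initcall_blacklist=sysfb_init to GRUB's default kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 initcall_blacklist=sysfb_init"/' /etc/default/grub

update-grub   # regenerate grub.cfg, then reboot the host
```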

--------------------------------------------------------------------------------------------------------------------------

I have a system with two GPUs being passed through to two VMs (one Windows and one Linux).

After upgrading to 7.3 I can no longer get proper display output from the Windows VM:

Code:
Nov 25 18:47:40 pve kernel: vfio-pci 0000:0b:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Nov 25 18:47:40 pve kernel: vfio-pci 0000:0b:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Nov 25 18:47:40 pve kernel: vfio-pci 0000:0b:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
Nov 25 18:47:40 pve kernel: vfio-pci 0000:0b:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
Nov 25 18:47:40 pve kernel: vfio-pci 0000:0b:00.1: enabling device (0000 -> 0002)
Nov 25 18:47:41 pve pvedaemon[3172]: <root@pam> end task UPID:pve:00001261:00002080:63810DCA:qmstart:102:root@pam: OK
Nov 25 18:47:42 pve chronyd[2820]: Selected source 85.199.214.98 (2.debian.pool.ntp.org)
Nov 25 18:47:55 pve pvedaemon[3172]: <root@pam> update VM 101: -hostpci0 0000:0e:00,pcie=1
Nov 25 18:48:04 pve kernel: usb 7-2.4: reset full-speed USB device number 3 using xhci_hcd
Nov 25 18:48:04 pve kernel: usb 7-2.3: reset full-speed USB device number 4 using xhci_hcd
Nov 25 18:49:04 pve pvedaemon[38124]: start VM 101: UPID:pve:000094EC:0000421E:63810E20:qmstart:101:root@pam:
Nov 25 18:49:04 pve pvedaemon[3174]: <root@pam> starting task UPID:pve:000094EC:0000421E:63810E20:qmstart:101:root@pam:
Nov 25 18:49:04 pve kernel: vfio-pci 0000:0e:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
Nov 25 18:49:04 pve kernel: vfio-pci 0000:0e:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=io+mem:owns=none
Nov 25 18:49:04 pve kernel: vfio-pci 0000:0e:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
Nov 25 18:49:04 pve systemd[1]: Started 101.scope.
Nov 25 18:49:04 pve systemd-udevd[38127]: Using default interface naming scheme 'v247'.
Nov 25 18:49:04 pve systemd-udevd[38127]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 25 18:49:04 pve kernel: device tap101i0 entered promiscuous mode
Nov 25 18:49:04 pve systemd-udevd[38127]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 25 18:49:04 pve systemd-udevd[38128]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 25 18:49:04 pve systemd-udevd[38128]: Using default interface naming scheme 'v247'.
Nov 25 18:49:04 pve systemd-udevd[38127]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Nov 25 18:49:04 pve kernel: vmbr0: port 2(fwpr101p0) entered blocking state
Nov 25 18:49:04 pve kernel: vmbr0: port 2(fwpr101p0) entered disabled state
Nov 25 18:49:04 pve kernel: device fwpr101p0 entered promiscuous mode
Nov 25 18:49:04 pve kernel: vmbr0: port 2(fwpr101p0) entered blocking state
Nov 25 18:49:04 pve kernel: vmbr0: port 2(fwpr101p0) entered forwarding state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 1(fwln101i0) entered disabled state
Nov 25 18:49:04 pve kernel: device fwln101i0 entered promiscuous mode
Nov 25 18:49:04 pve kernel: fwbr101i0: port 1(fwln101i0) entered blocking state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 1(fwln101i0) entered forwarding state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 2(tap101i0) entered disabled state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 2(tap101i0) entered blocking state
Nov 25 18:49:04 pve kernel: fwbr101i0: port 2(tap101i0) entered forwarding state
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.0: enabling device (0002 -> 0003)
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.0: vfio_ecap_init: hiding ecap 0x26@0x410
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.0: vfio_ecap_init: hiding ecap 0x27@0x440
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.0: BAR 0: can't reserve [mem 0xffc0000000-0xffcfffffff 64bit pref]
Nov 25 18:49:05 pve kernel: vfio-pci 0000:0e:00.1: enabling device (0000 -> 0002)
Nov 25 18:49:07 pve pvedaemon[3174]: <root@pam> end task UPID:pve:000094EC:0000421E:63810E20:qmstart:101:root@pam: OK
Nov 25 18:49:08 pve QEMU[38209]: kvm: vfio_region_write(0000:0e:00.0:region0+0xfc00000, 0x76444d41,4) failed: Device or resource busy
Nov 25 18:49:08 pve kernel: vfio-pci 0000:0e:00.0: BAR 0: can't reserve [mem 0xffc0000000-0xffcfffffff 64bit pref]  


Is it possible to downgrade to 7.2?
 

s.Oliver

Active Member
Nov 22, 2017
17
0
26
Germany
LXC is provided at version 5.0.0 with this release of PVE.

It seems to have a problem with bind mounts, because since the 28th of July 2022 there is a 5.0.1 release which fixes this (and much more; only the big bullet points here):
  • Fixed a mount issue resulting in container startup failure when host bind mounts were used
  • Various meson packaging fixes, especially around libcap detection
Could this fix be included really soon?

Thx.
 

t.lamprecht

Proxmox Staff Member
Staff member
LXC is provided at version 5.0.0 with this release of PVE.

It seems to have a problem with bind mounts, because since the 28th of July 2022 there is a 5.0.1 release which fixes this (and much more; only the big bullet points here):
  • Fixed a mount issue resulting in container startup failure when host bind mounts were used
  • Various meson packaging fixes, especially around libcap detection
Could this fix be included really soon?

Thx.
Are you actually running into such issues or is this just speculation?

Please note that one of our devs is an upstream maintainer and even authored most of the mentioned fixes,
which were backported to our LXC 5 package before upstream released 5.0.1:
https://git.proxmox.com/?p=lxc.git;a=commitdiff;h=5aef5f9a3c8472b7051f668a6aba5ef2df5778c2
https://git.proxmox.com/?p=lxc.git;a=commitdiff;h=35af27de3b8fb000421d1741817c96dc6cd912ad

If you indeed got problems please open a new thread.
 

Aradalf

New Member
Nov 26, 2022
3
0
1
Hello! First post here :)
One thing I noticed after the update to PVE 7.3 (I am currently on 7.3-3) is that the web UI does not preserve the view I selected. To be clearer, I always use the Server view with the list of VMs and containers open. Before the update to 7.3, the UI automatically selected the server (so I could see the dashboard upon login) and left the list open, whereas after the update the server is not selected and the list is closed. Is this an intended change? I am using Firefox 107 on Linux, if that helps. Also, let me know if this is better discussed in a separate thread. Thanks!
 

t.lamprecht

Proxmox Staff Member
Staff member
Also, let me know if this is better discussed in a separate thread.
May need some more back and forth, so probably a good idea (mention my username @t.lamprecht in there to ping me about it).

The following would be interesting to know in that new thread: what's the hash fragment (everything after the #) of the URL now, and what happens if you delete it and hit enter? Also, any messages in the JS web-dev console (CTRL + SHIFT + J)?
 
See https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#ha_manager_crs

Basically it's the foundation for that. For now, it's limited to the actions where the HA stack already had to find a new node (recovering fenced services, the migrate shutdown-policy, and HA group changes), and it uses the static load (configured CPUs and memory, with memory carrying much more weight). We're actively working on extending that, but we found the current version already a big improvement for HA, and releasing in smaller steps always makes sense.
I tried it! Works pretty nicely. I hope this can be improved more and more in the near future.
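To illustrate the static-load selection described in the quote above (purely a sketch, not the actual CRS implementation; the node data and the 4x memory weight are invented assumptions):

```shell
# Pick the node with the lowest weighted static load.
# Input lines: "<node> <configured_cpus> <configured_mem_gib>" (made-up numbers).
printf '%s\n' \
  'node1 16 64' \
  'node2 8 32' \
  'node3 32 256' |
awk '{
  score = $2 + 4 * $3          # hypothetical weighting: memory counts 4x CPU
  if (NR == 1 || score < min) { min = score; best = $1 }
} END { print best }'          # prints "node2" (lowest combined load)
```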
 

oz1cw7yymn

Member
Feb 13, 2019
78
11
13
There's one for the general non-HA thing, but I don't think there's one for enabling the already-existing shutdown mechanism at runtime, so yes, please add an enhancement request and maybe mention this thread, to avoid a colleague directly merging it as a duplicate of the non-HA one.
Sorry - took a bit longer than expected.
https://bugzilla.proxmox.com/show_bug.cgi?id=4371

But it does seem quite close to 2181, at least going by the last comment (comment 15).
 

s.Oliver

Active Member
Are you actually running into such issues or is this just speculation?

Please note that one of our devs is an upstream maintainer and even authored most of the mentioned fixes,
which were backported to our LXC 5 package before upstream released 5.0.1:
https://git.proxmox.com/?p=lxc.git;a=commitdiff;h=5aef5f9a3c8472b7051f668a6aba5ef2df5778c2
https://git.proxmox.com/?p=lxc.git;a=commitdiff;h=35af27de3b8fb000421d1741817c96dc6cd912ad

If you indeed got problems please open a new thread.

Before doing any updates/upgrades from version to version, my routine is to check for major version bumps of components, and then to verify that they don't break anything (for example, this saved me from the passthrough problem that hampered some who updated from v7.1 to v7.2).

If I read your post correctly, I guess it's not going to be a problem.

But I don't understand why you had to backport fixes to v5.0.0 instead of just using 5.0.1 (it had been released 4 months earlier). Do you have such a long feature freeze for the different components?

Anyway, thanks for your reply/comment, and for making PVE so great.
(I do support it by having quite a few subscriptions running.)
 

t.lamprecht

Proxmox Staff Member
Staff member
But I don't understand why you had to backport fixes to v5.0.0 instead of just using 5.0.1 (it had been released 4 months earlier). Do you have such a long feature freeze for the different components?
As said, we released the fixed LXC 5 version before 5.0.1 got tagged upstream (we contributed the fixes, after all), and re-packaging for the sole change of a version-number bump, with no other changes, just produces churn, so we avoid that in such cases as it has no practical effect. The upstream version comparison should not be taken at face value for distribution packages, especially ones we tightly integrate and thus fully control anyway (kernel, QEMU, LXC, Ceph), as those always contain backports of fixes that we or our (enterprise) users ran into. Just check our git repos if you want to know which fixes are applied.
 

s.Oliver

Active Member
As said, we released the fixed LXC 5 version before 5.0.1 got tagged upstream (we contributed the fixes, after all), and re-packaging for the sole change of a version-number bump, with no other changes, just produces churn, so we avoid that in such cases as it has no practical effect. The upstream version comparison should not be taken at face value for distribution packages, especially ones we tightly integrate and thus fully control anyway (kernel, QEMU, LXC, Ceph), as those always contain backports of fixes that we or our (enterprise) users ran into. Just check our git repos if you want to know which fixes are applied.
Thanks for this explanation.
It's beyond my head and my timeline to dive deep into git and compare/verify stuff.

But I can do a quick comparison of version numbers and release notes to find roadblocks (I guess others will do that too and will have the same problem of not knowing about backported fixes).

Maybe you could point out those major backported fixes in the release notes of PVE?
 
