Proxmox VE 8.0 released!

I would recommend
- create and test backups
- update existing installation to 8.x in-place using the upgrade documentation
 
I would recommend
- create and test backups
- update existing installation to 8.x in-place using the upgrade documentation

Thank you @fabian for the feedback,
Could you explain why you suggest the in-place upgrade?


As for backups, shall I use a Proxmox Backup Server, e.g. set up the spare laptop for it?
Or something external, e.g. Clonezilla?
 
The in-place upgrade is a lot less hassle than temporarily switching to a two-node cluster and back.

You should definitely think about how you want to set up and manage backups in general if you are currently not backing up your guests!
 
The in-place upgrade is a lot less hassle than temporarily switching to a two-node cluster and back.

You should definitely think about how you want to set up and manage backups in general if you are currently not backing up your guests!
Thank you for the feedback
 
Hello, thank you very much for your response. :)

As far as I can remember, when I installed Proxmox months ago, I followed one of the thousands of blog posts that explain how to remove the subscription popup on login. Do you think this broke my update?


Thank you very much.
No, the script you used won't have broken the upgrade.
Most of the community are not paying subscribers, and some use the scripts to remove the nagging.
If the script were breaking upgrades in some way, we'd be seeing far more widespread reports about it.
 
No, the script you used won't have broken the upgrade.
Most of the community are not paying subscribers, and some use the scripts to remove the nagging.
If the script were breaking upgrades in some way, we'd be seeing far more widespread reports about it.
Just FYI, over the years, we had quite a few reports of people running into weird issues and after trying to debug them it turned out to be caused exactly by such third-party tools/scripts messing with the software. That's very frustrating and wastes developer time.
 
Just FYI, over the years, we had quite a few reports of people running into weird issues and after trying to debug them it turned out to be caused exactly by such third-party tools/scripts messing with the software. That's very frustrating and wastes developer time.
Of course, but there's a generalization there that all scripts are the same.

The scripts you refer to may have been for something else entirely.
The one that generates the note (not an error) he saw is used by a script that most of the community uses, and it's just a post-apt hook to boot.
If it were causing issues in that respect, we'd be seeing far more widespread reports, as it's a very common script in this community.

To reframe it: should it be something that gets flagged up during diagnosis requests on here? Absolutely. But we should also make sure we aren't jumping to a conclusion about the cause without evidence. ;)
 
Hey @t.lamprecht
Sorry to bug you xD

I don't want to rush anything, and I think you're probably getting burned out these days with all the PVE 8 release troubles (or non-troubles, because everything looks really great to me).

Anyway, I just wanted to ask if you could give me a tip on when you're going to release a new kernel increment or a newer opt-in kernel.
I know this isn't high on the priority list at all, and I'm actually very happy, since 6.2.16-3 made my Arc A380 work at all.
I'm just asking because I still have some bugs that are very likely fixed in even newer releases xD

I have time and don't want to rush, but asking is free xD

Thanks for all your work!
Cheers!
 
Anyway, I just wanted to ask if you could give me a tip on when you're going to release a new kernel increment or a newer opt-in kernel.
For a new major kernel release, like 6.3: no, as written today in another thread.
For a new point release: yes, but nothing on the immediate horizon. We release a new point release roughly every two to three weeks for the current stable release, sometimes faster if something comes up (a regression or a bigger security issue) and sometimes slower. If 6.2.16-3 currently works well for one of your setups, and the others are roughly similar in HW, then I don't see anything holding an update back.
 
For a new major kernel release, like 6.3: no, as written today in another thread.
For a new point release: yes, but nothing on the immediate horizon. We release a new point release roughly every two to three weeks for the current stable release, sometimes faster if something comes up (a regression or a bigger security issue) and sometimes slower. If 6.2.16-3 currently works well for one of your setups, and the others are roughly similar in HW, then I don't see anything holding an update back.
A new point release could actually be enough. Thanks for the information!
Waiting till Q3 seems okay to me too, since the DG2 basically works and Plex + Jellyfin aren't that important anyway. It's more of a nice-to-have with hw-accel.
What I mean is, the DG2 actually works pretty well, but at some random point, or after a day, Jellyfin and Plex suddenly switch to CPU rendering (actually Plex switches; Jellyfin is a bit dumb there, because it just stops working xD). However, that's unimportant and unrelated to anything Proxmox-related anyway.

But I saw that you opened up the option for us to share a git commit that you could merge/adopt, so maybe I'll find out at some point which exact commit fixes the firmware-version reading from the GPU and share it for a chance at a merge.

However, the easiest way for me to find that commit is to put the card into a test host, find out which kernel fixes the issue, and then check the changes.
I could theoretically do that via passthrough, but I'm not sure whether passthrough changes the card's behaviour somehow, since it gets initialized later, etc. And my issue doesn't happen every time, which is very frustrating to debug xD

However, thanks, Lamprecht!
That's enough info for me :-)
 
Slight problem running my VM after upgrading to 8.0.3. The upgrade seemed to go well, but my VM won't start with its original settings. I get the error:

Cannot start VM with passed-through RNG device: '/dev/hwrng' exists, but '/sys/devices/virtual/misc/hw_random/rng_current' is set to 'none'. Ensure that a compatible hardware-RNG is attached to the host

I am running a 5950x. If I change VirtIO RNG to /dev/urandom, it starts OK. Just wondering what is wrong with using /dev/hwrng? It used to work fine.
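(For reference, the same change can be made from the CLI as well; a minimal sketch, assuming a hypothetical VMID 100 and guarding the `qm` call so it only does anything on an actual PVE host:)

```shell
# Switch the VirtIO RNG source of a VM from the command line.
# VMID 100 is a placeholder; `qm` only exists on a Proxmox VE host,
# so print a note elsewhere instead of failing.
set_rng_source() {
    vmid=$1
    src=$2
    if command -v qm >/dev/null 2>&1; then
        # Writes an rng0: line like the one set via the GUI
        qm set "$vmid" --rng0 "source=$src,max_bytes=1024,period=1000"
    else
        echo "qm not available - run this on the Proxmox VE host"
    fi
}

set_rng_source 100 /dev/urandom
```

Both the GUI change and `qm set` end up as the same `rng0:` line in the VM config.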
 
Slight problem running my VM after upgrading to 8.0.3. The upgrade seemed to go well, but my VM won't start with its original settings. I get the error:

Cannot start VM with passed-through RNG device: '/dev/hwrng' exists, but '/sys/devices/virtual/misc/hw_random/rng_current' is set to 'none'. Ensure that a compatible hardware-RNG is attached to the host

I am running a 5950x. If I change VirtIO RNG to /dev/urandom, it starts OK. Just wondering what is wrong with using /dev/hwrng? It used to work fine.
Hey, I just checked on my 5800x:
- But I checked only with i440, not q35; q35 would have required creating a new VM here...

However, I can pass through /dev/hwrng without any issues, even to multiple VMs at the same time.

Code:
cat /sys/devices/virtual/misc/hw_random/rng_current
Shows:
Code:
tpm-rng-0

I'm not sure what tpm-rng means, but I have a physical LPC TPM 2.0 module on my board.

Sorry, that's not a real confirmation of your bug, but since I have a similar CPU, I thought I'd post; maybe it helps in some way.
I'm using an X570d4i-2t as the mobo, if that's interesting.

Cheers

Edit: tested with q35-8.0, no issue either.

Edit2:
I tested further, doing:
1: cat /dev/random
2: cat /dev/urandom
3: cat /dev/hwrng

1. Works! Outputs a lot of crap that I need to abort.
2. Works! Outputs a lot of crap...
3. Outputs nothing... which means to me that it doesn't work...
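The open-ended `cat /dev/hwrng` in step 3 blocks forever when no backend is wired up; a bounded read (sketch) makes the check fail fast instead:

```shell
# Read 16 bytes with a 2-second timeout instead of an open-ended cat,
# so a dead RNG backend reports failure instead of hanging the shell.
check_rng() {
    dev=$1
    if timeout 2 dd if="$dev" of=/dev/null bs=16 count=1 2>/dev/null; then
        echo "$dev: produced data"
    else
        echo "$dev: no data (blocked, empty, or missing)"
    fi
}

check_rng /dev/urandom    # always produces data on Linux
```

On a host with the symptom above, `check_rng /dev/hwrng` would hit the timeout and report no data.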

Cheers
 
Hi,
Slight problem running my VM after upgrading to 8.0.3. The upgrade seemed to go well, but my VM won't start with its original settings. I get the error:

Cannot start VM with passed-through RNG device: '/dev/hwrng' exists, but '/sys/devices/virtual/misc/hw_random/rng_current' is set to 'none'. Ensure that a compatible hardware-RNG is attached to the host

I am running a 5950x. If I change VirtIO RNG to /dev/urandom, it starts OK. Just wondering what is wrong with using /dev/hwrng? It used to work fine.
Sounds like no current RNG device is selected on the host. What do
Code:
cat /sys/devices/virtual/misc/hw_random/rng_current
cat /sys/devices/virtual/misc/hw_random/rng_available
show?
 
Hi Fiona,

This is the output I get from those 2 commands (on 5950x):
Code:
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
none
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available

root@pve1:~#

Note that the last command seems to output a blank line.

I also have a 3900x (not yet upgraded) running 7.4.15. If I run the same commands on that I get:
Code:
root@pve2:~# cat /sys/devices/virtual/misc/hw_random/rng_current
tpm-rng-0
root@pve2:~# cat /sys/devices/virtual/misc/hw_random/rng_available
tpm-rng-0 
root@pve2:~#

Is there anything else I can do to diagnose this problem? Thanks for the help,

Jonathan
 
Hi Fiona,

This is the output I get from those 2 commands (on 5950x):
Code:
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
none
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available

root@pve1:~#

Note that the last command seems to output a blank line.
What kernel were you running before the upgrade? You can try booting an older kernel and compare the output.
 
Please help a new proxmox user.
Can I have a cluster with VE 7.4 and VE 8?
I have not done any clustering before, but I read the available documentation and it seems doable.
Why am I asking this? Because I thought of the following, given that uptime is critical for me.


I have a VE 7.4 host that runs a dozen machines and I want to upgrade to 8.

Let's create a cluster with a spare laptop I have available (it has a 1 TB NVMe and a 1 TB external SSD, so more space than I really need).
This new laptop will have VE 8; then I migrate all the CTs/VMs to this laptop.
If everything goes well and without issues,

shut down the old NUC with 7.4, format it, and install the new version 8
(the NUC has a 512 GB NVMe and a 512 GB SSD).

Join the cluster again and migrate all the machines back from the laptop to the NUC.
If everything goes well and without issues, remove the laptop
and have a cluster of only one VE.
(In the near future I will buy a 2nd NUC and have peace of mind.)

Am I missing something?
Is there a smarter, faster, more efficient way?

Any help greatly appreciated
The short way: back up your CTs and VMs with vzdump, shut them down, and do an in-place upgrade of your NUC.
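A sketch of that short way, with a hypothetical VMID 100 and the PVE-only tool guarded (flags are assumptions; check `man vzdump` for your version):

```shell
# Back up one guest with vzdump before the in-place upgrade.
# VMID 100 is a placeholder; vzdump only exists on a Proxmox host,
# so print a note when run anywhere else.
backup_guest() {
    vmid=$1
    if command -v vzdump >/dev/null 2>&1; then
        vzdump "$vmid" --mode snapshot --compress zstd
    else
        echo "vzdump not found - run this on the PVE host itself"
    fi
}

backup_guest 100

# After all guests are backed up and shut down, the upgrade itself
# follows the official guide: run the pve7to8 checklist, switch the
# apt repositories to bookworm, then apt update && apt dist-upgrade.
```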
 
What kernel were you running before the upgrade? You can try booting an older kernel and compare the output.
So if I boot with the old kernel (5.15.108-1-pve), the output of the above commands is:
Code:
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
tpm-rng-0
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available
tpm-rng-0
and I can start the VM without problems using rng0: max_bytes=1024,period=1000,source=/dev/hwrng
 
What kernel were you running before the upgrade? You can try booting an older kernel and compare the output.
Wait, can we still boot into 5.15.x after the upgrade? I thought it wouldn't work for dependency reasons.
 
So if I boot with the old kernel (5.15.108-1-pve), the output of the above commands is:
Code:
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
tpm-rng-0
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available
tpm-rng-0
and I can start the VM without problems using rng0: max_bytes=1024,period=1000,source=/dev/hwrng
Might be: https://git.kernel.org/pub/scm/linu.../?id=f1324bbc4011ed8aef3f4552210fc429bcd616da
Are you running the latest BIOS?
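(A quick way to check the firmware version from the running host; `dmidecode` may be absent or need root, so this sketch falls back to the sysfs DMI nodes:)

```shell
# Print the installed BIOS/UEFI version, preferring dmidecode and
# falling back to sysfs when it is missing or not runnable.
bios_version() {
    if command -v dmidecode >/dev/null 2>&1; then
        dmidecode -s bios-version 2>/dev/null && return
    fi
    cat /sys/class/dmi/id/bios_version 2>/dev/null || echo "DMI info unavailable"
}

bios_version
```

Compare the output against the latest firmware listed on the board vendor's support page.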
 
Wait, can we still boot into 5.15.x after the upgrade? I thought it wouldn't work for dependency reasons.
As long as you have it installed, yes. But it's not recommended for long-term production use, and there won't be any 5.15 packages for Proxmox VE 8, AFAIK.
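For a one-off boot of the old kernel, `proxmox-boot-tool kernel pin` with `--next-boot` can be used on hosts managed by proxmox-boot-tool; a guarded sketch:

```shell
# Boot the given kernel version exactly once, then revert to the
# default. proxmox-boot-tool only exists on PVE hosts, so fall back
# to a hint for plain-GRUB setups.
pin_once() {
    ver=$1
    if command -v proxmox-boot-tool >/dev/null 2>&1; then
        proxmox-boot-tool kernel pin "$ver" --next-boot
    else
        echo "proxmox-boot-tool not present - on plain GRUB, pick the kernel in the boot menu instead"
    fi
}

pin_once 5.15.108-1-pve
```

`proxmox-boot-tool kernel list` shows which kernels are available to pin.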
 