I would recommend:
- create and test backups
- update the existing installation to 8.x in-place using the upgrade documentation
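For reference, the two recommended steps can be sketched roughly as follows. This is a hedged outline, not the authoritative procedure: the VMID (100) and storage name ("backup") are placeholders, and the official pve7to8 upgrade guide should be followed for the details:

```shell
# 1. Back up each guest first and verify the archives, e.g. VM 100
#    to a storage named "backup" (placeholder):
vzdump 100 --storage backup --mode snapshot

# 2. Run the upgrade checklist script shipped with Proxmox VE 7:
pve7to8 --full

# 3. Switch the APT repositories from bullseye to bookworm, then upgrade
#    (adjust the repo files to match your actual setup):
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/pve-enterprise.list
apt update && apt dist-upgrade
```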
> the in-place upgrade is a lot less hassle than temporarily switching to a two-node cluster and back.

Thank you for the feedback!
You should definitely think about how you want to set up and manage backups in general if you are currently not backing up your guests!
> Hello, thank you very much for your response.
> As far as I can remember, when I installed Proxmox months ago, I followed one of the thousands of blog posts that explain how to remove the subscription popup on login. Do you think that this broke my upgrade?
> Thank you very much.

No, the script you used won't have broken the upgrade.
> No, the script you used won't have broken the upgrade.

Just FYI: over the years, we had quite a few reports of people running into weird issues, and after trying to debug them it turned out they were caused exactly by such third-party tools/scripts messing with the software. That's very frustrating and wastes developer time.
Most of the community are not paying subscribers, and some use the scripts to remove the nagging.
If the script were breaking the upgrade in some way, we'd be seeing far more widespread reports about it.
> Just FYI: over the years, we had quite a few reports of people running into weird issues, and after trying to debug them it turned out they were caused exactly by such third-party tools/scripts messing with the software. That's very frustrating and wastes developer time.

Of course, but there's a generalization there that all scripts are the same.
> Anyway, just wanted to ask if you could give me a tip regarding when you're going to release a new kernel increment or a newer opt-in kernel.

For a new major kernel release, like 6.3, then no, as written today in another thread.
> For a new major kernel release, like 6.3, then no, as written today in another thread.

A new point release could actually be enough. Thanks for the information!
For a new point release, then yes, but nothing on the immediate horizon. We release a new point release roughly every two to three weeks for the current stable release, sometimes faster if something came up (a regression or a bigger security issue) and sometimes slower. If the 6.2.16-3 kernel currently works well for one of your setups, and the others are roughly similar in hardware, then I don't see anything holding an update back.
> Slight problem running my VM after upgrading to 8.0.3. The upgrade seemed to go well, but my VM won't start with its original settings. I get the error:
> Cannot start VM with passed-through RNG device: '/dev/hwrng' exists, but '/sys/devices/virtual/misc/hw_random/rng_current' is set to 'none'. Ensure that a compatible hardware-RNG is attached to the host
> I am running a 5950X. If I change VirtIO RNG to /dev/urandom, it starts OK. Just wondering what is wrong with using /dev/hwrng? It used to work fine.

Hey, I just checked on my 5800X:

Code:
cat /sys/devices/virtual/misc/hw_random/rng_current
tpm-rng-0
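As a side note, the workaround of switching the VirtIO RNG source to /dev/urandom can also be applied from the CLI rather than the GUI; the VMID (100) below is a made-up example:

```shell
# Hypothetical VMID 100: point the guest's VirtIO RNG at /dev/urandom
# instead of the host's /dev/hwrng.
qm set 100 --rng0 source=/dev/urandom,max_bytes=1024,period=1000

# The resulting line in /etc/pve/qemu-server/100.conf should look like:
# rng0: max_bytes=1024,period=1000,source=/dev/urandom
```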
> Slight problem running my VM after upgrading to 8.0.3. The upgrade seemed to go well, but my VM won't start with its original settings. I get the error:
> Cannot start VM with passed-through RNG device: '/dev/hwrng' exists, but '/sys/devices/virtual/misc/hw_random/rng_current' is set to 'none'. Ensure that a compatible hardware-RNG is attached to the host
> I am running a 5950X. If I change VirtIO RNG to /dev/urandom, it starts OK. Just wondering what is wrong with using /dev/hwrng? It used to work fine.

Sounds like no current RNG device is selected on the host. What does

Code:
cat /sys/devices/virtual/misc/hw_random/rng_current
cat /sys/devices/virtual/misc/hw_random/rng_available

output?
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
none
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available
root@pve1:~#
root@pve2:~# cat /sys/devices/virtual/misc/hw_random/rng_current
tpm-rng-0
root@pve2:~# cat /sys/devices/virtual/misc/hw_random/rng_available
tpm-rng-0
root@pve2:~#
> Hi Fiona,
> This is the output I get from those two commands (on the 5950X):
> Code:
> root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
> none
> root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available
> root@pve1:~#
> Note that the last command seems to output a blank line.

What kernel were you running before the upgrade? You can try booting an older kernel and compare the output.
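That blank line is the telling part: an empty rng_available means the kernel has registered no hardware RNG driver at all, so passing /dev/hwrng into a VM fails with the error quoted earlier. A small generic check (standard shell, nothing Proxmox-specific):

```shell
# If rng_available is empty (or missing), no kernel hwrng driver is
# registered and rng0 with source=/dev/hwrng cannot work.
RNG_AVAIL=/sys/devices/virtual/misc/hw_random/rng_available
if [ -s "$RNG_AVAIL" ]; then
    echo "hardware RNG available: $(cat /sys/devices/virtual/misc/hw_random/rng_current)"
else
    echo "no hardware RNG registered; use source=/dev/urandom instead"
fi
```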
> Please help a new Proxmox user.
> Can I have a cluster with VE 7.4 and VE 8?
> I have not done any cluster before, but I read the available documentation and it seems doable.
> Why am I asking this? Because I thought of the following, given that uptime is critical for me.
> I have a VE 7.4 that runs a dozen machines and I want to upgrade to 8.
> Let's create a cluster with a spare laptop I have available (it has a 1 TB NVMe and a 1 TB external SSD, so more space than I really need).
> This new laptop will have VE 8; then migrate all the CTs/VMs to this laptop.
> If everything goes well and without issues:
> shut down the old NUC with the 7.4, format it, install the new version 8
> (the NUC has a 512 GB NVMe and a 512 GB SSD),
> join the cluster again and migrate back all the machines from the laptop to the NUC.
> If everything goes well and without issues, remove the laptop
> and have a cluster of only one VE.
> (In the near future I will buy a second NUC and have peace of mind.)
> Am I missing something?
> Is there a smarter, faster, more efficient way?
> Any help greatly appreciated.

The short way: back up your CTs and VMs with vzdump, shut them down, and do an in-place upgrade of your NUC.
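For what it's worth, the cluster dance described above maps to only a few commands. This is a hedged sketch, not a recommendation: the cluster name, IP, target node name, and VMID are made-up examples, and a two-node cluster has quorum implications worth reading up on first:

```shell
# On the temporary laptop (fresh install):
pvecm create upgrade-cluster        # "upgrade-cluster" is an example name

# On the existing NUC, join using the laptop's IP (example address):
pvecm add 192.168.1.50

# Migrate a guest (example VMID 100) to the laptop, and later back again:
qm migrate 100 laptop --online

# Verify quorum before and after each step:
pvecm status
```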
> What kernel were you running before the upgrade? You can try booting an older kernel and compare the output.

So if I boot with the old kernel (5.15.108-1-pve), the output of the above commands is:

Code:
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
tpm-rng-0
root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available
tpm-rng-0

and I can start the VM without problem using

Code:
rng0: max_bytes=1024,period=1000,source=/dev/hwrng
> What kernel were you running before the upgrade? You can try booting an older kernel and compare the output.

Wait, can we still boot into 5.15.x after the upgrade? I thought it wouldn't work for dependency reasons.
> So if I boot with the old kernel (5.15.108-1-pve), the output of the above commands is:
> Code:
> root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_current
> tpm-rng-0
> root@pve1:~# cat /sys/devices/virtual/misc/hw_random/rng_available
> tpm-rng-0
> and I can start the VM without problem using
> Code:
> rng0: max_bytes=1024,period=1000,source=/dev/hwrng

Might be: https://git.kernel.org/pub/scm/linu.../?id=f1324bbc4011ed8aef3f4552210fc429bcd616da
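To compare the two kernels as suggested, something like the following (standard commands, nothing Proxmox-specific) shows which kernel is booted and whether a TPM-backed RNG got registered; exact dmesg wording will vary by kernel version:

```shell
# Which kernel is currently booted?
uname -r

# Which hwrng drivers did this kernel register, if any?
cat /sys/devices/virtual/misc/hw_random/rng_available

# Any TPM/hwrng-related kernel messages (needs root):
dmesg | grep -i 'tpm\|hwrng'
```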
> Wait, can we still boot into 5.15.x after the upgrade? I thought it wouldn't work for dependency reasons.

As long as you have it installed, yes. But it's not recommended for long-term production use, and there won't be any packages for 5.15 for Proxmox VE 8, AFAIK.