Proxmox VE 9.0 BETA released!

On setups with a modified chrony.conf, the upgrade from 8 to 9 also raises a question about that file. Maybe add this to the list on the wiki; otherwise a manually set timeserver gets lost, which could lead to errors in Corosync or Ceph.
We normally focus on those configs where the upgrade prompts even if the admin did not make any local changes.

I still added a hint for now; the section is not that crowded yet, so it doesn't really hurt. As noted in the hint, it would be best to move your local sources definitions into, e.g., a local.sources file inside /etc/chrony/sources.d/. That way, updates to the default config from the Debian package won't interfere with them on future upgrades.
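Roughly like this (a sketch; the server names are placeholders, and it assumes the Debian default chrony.conf keeps its sourcedir directive for /etc/chrony/sources.d):

    # /etc/chrony/sources.d/local.sources -- example only, replace with your own servers
    server ntp1.example.com iburst
    server ntp2.example.com iburst

After creating the file, chronyc reload sources (or restarting the chrony service) should make chronyd pick it up.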
 
Hi,
I was able to migrate to pve9 by following the instructions, but I had to disable the repository from the GUI.

Also, the approach described at the following URL, which worked on pve8 with PCI passthrough for an Alder Lake iGPU, did not work on pve9:

https://github.com/qemu/qemu/blob/master/docs/igd-assign.txt
Could you please open a separate thread and mention @fiona and @dcsapak there? In the new thread, please provide the output of pveversion -v and qm config <ID> (replacing <ID> with the ID of the VM), as well as the exact error message and an excerpt from the system logs/journal from around the time the issue occurs.
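Something along these lines should cover it (a sketch; adjust the journal excerpt to the time of the error):

    pveversion -v
    qm config <ID>        # replace <ID> with the ID of the affected VM
    journalctl -b -e      # jump to the end of the current boot's journal, copy the lines around the error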
 
Good that you were able to fix this! I ran a quick upgrade test with an overprovisioned LVM-thin pool, both with and without running the migration script pre-upgrade, and didn't see this issue. Could it be that the custom thin_check_options had already been set in /etc/lvm/lvm.conf before the upgrade, got lost during the upgrade (because lvm.conf was overwritten with the config from the package), and that is why you had to set it again post-upgrade?

Hard to tell, I can't remember if I had set that option before. It might be the case; I did overwrite the lvm.conf by answering "yes" to the question during the upgrade.
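For anyone comparing notes: the value in question lives in the global section of /etc/lvm/lvm.conf and can be inspected with lvmconfig, roughly like this (a sketch; the options shown are only the upstream defaults, not a recommendation):

    # print the currently active setting
    lvmconfig global/thin_check_options

    # /etc/lvm/lvm.conf (excerpt)
    global {
        thin_check_options = [ "-q", "--clear-needs-check-flag" ]
    }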
 
Possible issue: after using the NIC-naming tool, no IP communication inside VMs and CTs via vmbr0 since the upgrade to pve9 (test cluster).

I made a new post, because it seems to be too much to post here: https://forum.proxmox.com/threads/pve-9-beta-different-network-errors-since-upgrade.168729/

Edit: this only happened on a test cluster where vmbr0 sits on a bond0 that additionally has one of its ports offline. It did not happen on a single node I upgraded before. Details in the post.
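In case it helps others with similar symptoms, the bond and bridge state can be checked with something like this (a sketch):

    ip -br link show               # quick overview of all link states
    cat /proc/net/bonding/bond0    # per-slave status of the bond
    bridge link show               # ports attached to the bridge and their state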
 