Proxmox Virtual Environment 9.0 released!

I was not expecting to run into someone mentioning an in-production DEC Alpha today.
Awesome. :)
Going way off topic ...

It's not actual Alpha hardware; it's an Alpha emulator. It's used to run a Symbolics Lisp Machine emulator (VLM) that I use to build and test installation kits for the VLM. The Alpha emulator is x86-64 but I'm running it in an Arm Debian VM using Apple's Rosetta 2 support on a Mac mini (M4 Pro).
 
Hi All

Thank you, Proxmox team, for the really good job! I have tested 9.0 in my lab and so far I have experienced only one issue:
1. The node was automatically switched to a GRUB-based bootloader instead of keeping the systemd-boot that was previously installed from the ISO:
Code:
root@test-pve-01:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
6658-75FF is configured with: grub (versions: 6.14.8-2-pve)
6658-B0CB is configured with: grub (versions: 6.14.8-2-pve)

I suspect that I made a mistake by updating the system via Aptitude, and that is why the problem occurred, but I am not sure. Could someone confirm the behavior when using apt-get update -> apt-get upgrade -> apt-get dist-upgrade?

How do I revert to systemd-boot, step by step, without causing problems, and how can I prevent this in future upgrades?
Code:
root@test-pve-01:~# blkid | grep vfat
/dev/nvme0n1p2: UUID="6658-B0CB" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7f18c198-aea5-4d7e-a46a-885f6d50b94c"
/dev/nvme1n1p2: UUID="6658-75FF" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7fc45257-c62f-4f6e-af3b-24e22a1ca23c"
 
Upgraded my Proxmox 8.4 to 9.0.6 today.
pve8to9 --full found three issues (rough fixes sketched below):


1. LVM/LVM-thin storage has guest volumes with autoactivation enabled
2. The systemd-boot meta-package changes the bootloader configuration automatically and should be uninstalled
3. No intel-microcode updates installed
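
For reference, the fixes for those three boiled down to roughly the following for me, going by the pve8to9 hints and the upgrade guide (the VG/LV name is just an example; repeat the lvchange for every guest volume it lists):

Code:
# 1. disable autoactivation on guest LVM volumes (example VG/LV name)
lvchange --setautoactivation n pve/vm-100-disk-0

# 2. drop the systemd-boot meta-package (systemd-boot-efi / systemd-boot-tools stay installed)
apt remove systemd-boot

# 3. install Intel microcode (needs the non-free-firmware component in the Debian sources)
apt install intel-microcode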

After fixing those, I fought the apt sources a bit... but resolved it without too much hassle.
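In case it helps anyone else, the PVE repository entry ended up looking roughly like this in the new deb822 format (no-subscription shown; the file name and the enterprise variant may differ, so double-check against the upgrade guide):

Code:
# /etc/apt/sources.list.d/proxmox.sources
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg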

Dist-upgraded and rebooted...

The system came back up without any issues.


Thank you Proxmox team :)
 
The new version doesn't return graph images when using /api2/json/nodes/{node}/qemu/{vmid}/rrd?timeframe=day&ds=cpu:
Code:
"image":null
 
I just updated 3 of my 4 homelab Proxmox hosts from 8.4.x to 9.0.6. All of the hosts had a user that was given Administrator-level privileges. One of those permissions was the ability to use the "drive_add" command in the VM monitor. In Proxmox 8 this command succeeded for the admin user, but now it states that it is a root-only command. Is there any way to resolve this?
 
Hi, as a staff member, can you answer my question about the production readiness of the feature "VM snapshots on thick-provisioned LVM storages with snapshots as volume chains"?
AFAIU it is considered a tech preview. It is marked as such in the GUI when you create a new storage. Why do we mark it as tech-preview? Because it is a new and major feature that has the potential for edge-cases that are not yet handled well. By releasing it to the public, we hope to get feedback on the rough edges and edge-cases so we can iron them out. Once we feel confident enough, the tech-preview label will be removed.

I hope that answers your question.
 
AFAIU it is considered a tech preview. It is marked as such in the GUI when you create a new storage. Why do we mark it as tech-preview? Because it is a new and major feature that has the potential for edge-cases that are not yet handled well. By releasing it to the public, we hope to get feedback on the rough edges and edge-cases so we can iron them out. Once we feel confident enough, the tech-preview label will be removed.

I hope that answers your question.
Yes, thank you.
 
I'm definitely stuck on "Welcome to GRUB" and then it drops into the system BIOS.
And nothing helps... ASRock N100M motherboard.

LVM-thin, ext4 on nvme0n1 (nvme0n1p2: vfat FAT32, E4A4-1A40, mounted at /boot/efi).

I have UEFI boot with secure boot disabled.
I uninstalled systemd-boot per the pve8to9 recommendation.

I ran pve8to9 --full with 0 red lines,
and pbs3to4 as well (PBS is running on the same hardware).

The upgrade passed without red lines too.
The network NIC configuration was updated before the upgrade.

I already tried this:
# Fix UEFI boot problems
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v
apt install --reinstall grub-efi-amd64

And this:
pve-efiboot-tool init /dev/nvme0n1p2 and pve-efiboot-tool refresh did the job and both systems boot normal again.

I really need help.

For now I can boot only via the PVE ISO -> Advanced -> Rescue Boot.


Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
E4A4-1A40 is configured with: grub (versions: 6.14.8-2-pve, 6.8.12-13-pve)

BootCurrent: 000C
Timeout: 1 seconds
BootOrder: 0005,000A,000C,000B,0001,0000,0002
Boot0000* Windows Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)57494e444f5753000100000088000000780000004200430044004f0042004a004500430054003d007b00390064006500610038003600320063002d0035006300640064002d0034006500370030002d0061006300630031002d006600330032006200330034003400640034003700390035007d00000061000100000010000000040000007fff0400
dp: 01 04 14 00 e7 75 e2 99 a0 75 37 4b a2 e6 c5 38 5e 6c 00 cb / 7f ff 04 00
data: 57 49 4e 44 4f 57 53 00 01 00 00 00 88 00 00 00 78 00 00 00 42 00 43 00 44 00 4f 00 42 00 4a 00 45 00 43 00 54 00 3d 00 7b 00 39 00 64 00 65 00 61 00 38 00 36 00 32 00 63 00 2d 00 35 00 63 00 64 00 64 00 2d 00 34 00 65 00 37 00 30 00 2d 00 61 00 63 00 63 00 31 00 2d 00 66 00 33 00 32 00 62 00 33 00 34 00 34 00 64 00 34 00 37 00 39 00 35 00 7d 00 00 00 61 00 01 00 00 00 10 00 00 00 04 00 00 00 7f ff 04 00
Boot0001* Linux Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
dp: 01 04 14 00 e7 75 e2 99 a0 75 37 4b a2 e6 c5 38 5e 6c 00 cb / 7f ff 04 00
Boot0002* Linux Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
dp: 01 04 14 00 e7 75 e2 99 a0 75 37 4b a2 e6 c5 38 5e 6c 00 cb / 7f ff 04 00
Boot0005* proxmox HD(2,GPT,13309de0-0119-477d-a7ea-11fdcb2b9f42,0x800,0x200000)/File(\EFI\proxmox\shimx64.efi)
dp: 04 01 2a 00 02 00 00 00 00 08 00 00 00 00 00 00 00 00 20 00 00 00 00 00 e0 9d 30 13 19 01 7d 47 a7 ea 11 fd cb 2b 9f 42 02 02 / 04 04 36 00 5c 00 45 00 46 00 49 00 5c 00 70 00 72 00 6f 00 78 00 6d 00 6f 00 78 00 5c 00 73 00 68 00 69 00 6d 00 78 00 36 00 34 00 2e 00 65 00 66 00 69 00 00 00 / 7f ff 04 00
Boot000A* UEFI OS HD(2,GPT,13309de0-0119-477d-a7ea-11fdcb2b9f42,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)0000424f
dp: 04 01 2a 00 02 00 00 00 00 08 00 00 00 00 00 00 00 00 20 00 00 00 00 00 e0 9d 30 13 19 01 7d 47 a7 ea 11 fd cb 2b 9f 42 02 02 / 04 04 30 00 5c 00 45 00 46 00 49 00 5c 00 42 00 4f 00 4f 00 54 00 5c 00 42 00 4f 00 4f 00 54 00 58 00 36 00 34 00 2e 00 45 00 46 00 49 00 00 00 / 7f ff 04 00
data: 00 00 42 4f
Boot000B UEFI: KingstonDataTraveler SE9PMAP, Partition 1 PciRoot(0x0)/Pci(0x14,0x0)/USB(3,0)/HD(1,MBR,0x5a48244c,0x800,0x1d49800)0000424f
dp: 02 01 0c 00 d0 41 03 0a 00 00 00 00 / 01 01 06 00 00 14 / 03 05 06 00 03 00 / 04 01 2a 00 01 00 00 00 00 08 00 00 00 00 00 00 00 98 d4 01 00 00 00 00 4c 24 48 5a 00 00 00 00 00 00 00 00 00 00 00 00 01 01 / 7f ff 04 00
data: 00 00 42 4f
Boot000C* UEFI: USB DISK 3.0 PMAP, Partition 2 PciRoot(0x0)/Pci(0x14,0x0)/USB(12,0)/USB(2,0)/HD(2,GPT,a8a266fa-939e-4be0-9e57-288c1d8d73ef,0x274,0x4000)0000424f
dp: 02 01 0c 00 d0 41 03 0a 00 00 00 00 / 01 01 06 00 00 14 / 03 05 06 00 0c 00 / 03 05 06 00 02 00 / 04 01 2a 00 02 00 00 00 74 02 00 00 00 00 00 00 00 40 00 00 00 00 00 00 fa 66 a2 a8 9e 93 e0 4b 9e 57 28 8c 1d 8d 73 ef 02 02 / 7f ff 04 00
data: 00 00 42 4f


UPDATED:
It seems gfngfn256's workaround finally helped me.
 
And something new now.
I'm using disk passthrough to my Unraid VM:

Code:
scsi1: /dev/disk/by-id/ata-WDC_WD100EFAX-68LHPN0_2YK4SE9D,backup=0,discard=on,iothread=1,replicate=0,serial=2YK4SE9D,size=9314G

On PVE 8.4, Unraid saw it as "QEMU_HARDDISK_2YK4SE9D",
but on PVE 9 it is now shown as "0QEMU_QEMU_HARDDISK_drive-scsi1" without the serial, so Unraid disabled it and won't start the array anymore...

I had to replace it with a mapped PCIe drive; it works fine now.
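For anyone hitting the same thing, checking from inside a Linux-based guest shows what QEMU actually presents for the disk, e.g.:

Code:
# model and serial as seen by the guest (replace sdX with the passed-through disk)
lsblk -o NAME,MODEL,SERIAL
udevadm info --query=property /dev/sdX | grep -iE 'ID_MODEL|ID_SERIAL'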
 
It says "technology preview" for the snapshots on thick-provisioned LVM storages with snapshots as volume chains.

Is it risky to use it in production?
When will the stable version be released?
 
After updating to Proxmox VE 9, virtual machines frequently stop with internal errors, about 4 times a day at most.
I had the same problem with Proxmox VE 8, but after a patch the problem did not occur for several months.

Does anyone else have the same problem?
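In case it helps narrow things down, this is roughly what I can collect the next time it happens (100 is just an example VMID):

Code:
qm status 100 --verbose
journalctl -b --since "1 hour ago" | grep -iE 'qemu|kvm'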

 
Greetings and thanks for the update. It was very stable and painless.

I wanted to ask a question about HA group migration. In my case, we have a multi-node cluster, and at this point, all of them have already been updated. I've checked each one with the "pve8to9" script, and it tells me everything is OK. However, when I try to edit or add affinity rules, it gives me an error indicating that the HA groups have not yet been migrated:

View attachment 89780

Is there a way to update the groups manually?
Or would I have to delete them and create them from scratch?

I also wanted to confirm that nothing goes wrong when deleting an HA group at this point.

Regards

update:
I found the following logs that appear to be related to some old nodes not being completely removed:

Aug 22 16:30:09 r640-84 pve-ha-crm[4183]: start ha group migration...
Aug 22 16:30:09 r640-84 pve-ha-crm[4183]: ha groups migration: node 'r630-79' is in state 'maintenance'
Aug 22 16:30:09 r640-84 pve-ha-crm[4183]: abort ha groups migration: node 'r630-79' is not online
Aug 22 16:30:09 r640-84 pve-ha-crm[4183]: ha groups migration failed
Aug 22 16:30:09 r640-84 pve-ha-crm[4183]: retry ha groups migration in 6 rounds (~ 60 seconds)

update2:
Well, after reviewing the errors in detail, I was able to find a solution: stopping all "pve-ha-crm" services on the nodes, deleting the "/etc/pve/ha/manager_status" file, and restarting the service on all nodes.
After this, in Datacenter Status -> HA, the list of nodes appears clean, without any old or deleted nodes.
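In commands, roughly (the stop/start has to be done on every node; /etc/pve is the shared pmxcfs, so deleting the file once is enough):

Code:
systemctl stop pve-ha-crm          # on all nodes
rm /etc/pve/ha/manager_status      # once, on any node
systemctl start pve-ha-crm         # on all nodes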


I had this same issue after upgrading to PVE 9, and I needed the new HA groups. I removed a bunch of old nodes, but for some reason one stuck around in the HA status, just like yours.

Thanks for the fix.
 
Hi All

Thank you, Proxmox team, for the really good job! I have tested 9.0 in my lab and so far I have experienced only one issue:
1. The node was automatically switched to a GRUB-based bootloader instead of keeping the systemd-boot that was previously installed from the ISO:
Code:
root@test-pve-01:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
6658-75FF is configured with: grub (versions: 6.14.8-2-pve)
6658-B0CB is configured with: grub (versions: 6.14.8-2-pve)

I suspect that I made a mistake by updating the system via Aptitude, and that is why the problem occurred, but I am not sure. Could someone confirm the behavior when using apt-get update -> apt-get upgrade -> apt-get dist-upgrade?

How do I revert to systemd-boot, step by step, without causing problems, and how can I prevent this in future upgrades?
Code:
root@test-pve-01:~# blkid | grep vfat
/dev/nvme0n1p2: UUID="6658-B0CB" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7f18c198-aea5-4d7e-a46a-885f6d50b94c"
/dev/nvme1n1p2: UUID="6658-75FF" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7fc45257-c62f-4f6e-af3b-24e22a1ca23c"

I have tried to revert to systemd-boot via the commands below, using the guide on the wiki:
Code:
proxmox-boot-tool format /dev/nvme0n1p2 --force
proxmox-boot-tool format /dev/nvme1n1p2 --force
proxmox-boot-tool clean
proxmox-boot-tool init /dev/nvme0n1p2
proxmox-boot-tool init /dev/nvme1n1p2
proxmox-boot-tool refresh

It is working, but only with secure boot disabled. With secure boot enabled, only the GRUB bootloader seems to work (I checked twice). Is the bootloader replacement during the 8 -> 9 upgrade intended to preserve secure boot? Is this expected behavior?
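For reference, if I decide to keep secure boot enabled, I assume going back would just be re-initialising both ESPs in GRUB mode (the optional "grub" argument to init, as far as I understand the help output), roughly:

Code:
proxmox-boot-tool init /dev/nvme0n1p2 grub
proxmox-boot-tool init /dev/nvme1n1p2 grub
proxmox-boot-tool refresh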

I can add that I have all the necessary packages for both:
Code:
i   grub-common                                                                                                    - GRand Unified Bootloader (common files)                                                                                 
i   grub-efi-amd64                                                                                                 - GRand Unified Bootloader, version 2 (EFI-AMD64 version)                                                                 
i A grub-efi-amd64-bin                                                                                             - GRand Unified Bootloader, version 2 (EFI-AMD64 modules)                                                                 
i   grub-efi-amd64-signed                                                                                          - GRand Unified Bootloader, version 2 (amd64 UEFI signed by Debian)                                                       
i A grub-efi-amd64-unsigned                                                                                        - GRand Unified Bootloader, version 2 (EFI-AMD64 images)                                                                 
i   grub-pc-bin                                                                                                    - GRand Unified Bootloader, version 2 (PC/BIOS modules)                                                                   
i   grub2-common                                                                                                   - GRand Unified Bootloader (common files for version 2)                                                                   
i   proxmox-grub                                                                                                   - Empty package to ensure Proxmox Grub packages are installed 
 

i   systemd-boot-efi-amd64-signed                                                                                  - Tools to manage UEFI firmware updates (signed)                                                                         
i   systemd-boot-efi-amd64-signed-template                                                                         - Template for signed systemd-boot-efi package (amd64)                                                                   
i A systemd-boot-tools                                                                                             - simple UEFI boot manager - tools                                                                                       
i   proxmox-default-kernel                                                                                         - Default Proxmox Kernel Image                                                                                           
i A proxmox-kernel-6.14                                                                                            - Latest Proxmox Kernel Image                                                                                             
i A proxmox-kernel-6.14.11-1-pve-signed                                                                            - Proxmox Kernel Image (signed)                                                                                           
i A proxmox-kernel-6.14.8-2-pve-signed                                                                             - Proxmox Kernel Image (signed)                                                                                           
i   proxmox-kernel-helper                                                                                          - Function for various kernel maintenance tasks.
 
Thanks for the great effort in releasing PVE 9.

But... ZFS is still not able to store backup files; we must manually add the "is_mountpoint" option to storage.cfg. Will you add this option to the GUI?
You could also add the ZFS dataset as a directory storage inside the GUI. I wouldn't do this though: your backups shouldn't be in the same place as your actual data, but on another server or an external storage disk.
 
You could also add the ZFS dataset as a directory storage inside the GUI. I wouldn't do this though: your backups shouldn't be in the same place as your actual data, but on another server or an external storage disk.
I am not clear about this:
  1. Should I use a dataset, or just create a new folder inside the mounted pool?
  2. When do we need the is_mountpoint option?
NB: The backup feature in this case is only for faster migration purposes; of course, a backup server is also used.
 
  1. Should I use a dataset, or just create a new folder inside the mounted pool?
  2. When do we need the is_mountpoint option?
If you set "is_mountpoint", it should be a dataset; otherwise the path would not be a mountpoint.
Using a dedicated dataset is what I would do. Having that separation gives you some benefits; for example, should you ever want to use the send/recv feature, it will be easy to send the backups alone if they are in their own dataset.
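Roughly, assuming a pool called "tank" (the storage ID, dataset name and path are just examples):

Code:
# dedicated dataset for backups, registered as a directory storage
zfs create tank/pvebackups
pvesm add dir zfs-backups --path /tank/pvebackups --content backup --is_mountpoint yes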