Proxmox VE 9.0 BETA released!

On setups with a modified chrony.conf, it also brings up a question on the upgrade from 8 to 9. Maybe add this to the list on the wiki, otherwise the manually set time servers get lost, and errors could then happen in Corosync or Ceph.
We normally focus on those configs that the upgrade asks about even if the admin did not make any local changes.

I still added a hint for now; the section is not that crowded yet, so it doesn't really hurt. As noted in the hint, it'd be best to move your local source definitions into, e.g., a local.sources file inside /etc/chrony/sources.d/; that way, future updates to the default config from the Debian package won't interfere with them.
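For illustration, such a drop-in file could look roughly like this (a minimal sketch; the server names are placeholders for your own NTP servers):
Code:
# /etc/chrony/sources.d/local.sources
server ntp1.example.com iburst
server ntp2.example.com iburst

After creating the file, chronyc reload sources (or restarting chrony) should pick up the new definitions without touching the packaged /etc/chrony/chrony.conf.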
 
Hi,
I was able to migrate to PVE 9 by following the instructions, but I had to disable the repository from the GUI.

Also, the approach from the following URL, which worked on PVE 8 with PCI passthrough for an Alder Lake iGPU, did not work on PVE 9:

https://github.com/qemu/qemu/blob/master/docs/igd-assign.txt
Could you please open a separate thread and mention @fiona and @dcsapak there? In the new thread, please provide the output of pveversion -v and qm config <ID>, replacing <ID> with the ID of the VM, as well as the exact error message and an excerpt from the system logs/journal from around the time the issue occurs.
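For reference, collecting that information could look like this (the VM ID 100 and the time window are just examples):
Code:
pveversion -v
qm config 100                       # replace 100 with your VM ID
journalctl --since "1 hour ago"     # adjust the window to when the issue occurred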
 
Good that you were able to fix this! I ran a quick upgrade test with an overprovisioned LVM-thin pool, both with and without running the migration script pre-upgrade, and didn't see this issue. Could it be that the custom thin_check_options had already been set in /etc/lvm/lvm.conf before the upgrade, then got lost during the upgrade (because lvm.conf was overwritten with the config from the package), and that is why you had to set it again post-upgrade?
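For context, that setting lives in the global section of /etc/lvm/lvm.conf; a sketch of what a customized entry might look like (the exact flags depend on the setup, --skip-mappings is just one example of a common addition):
Code:
global {
    # default is [ "-q", "--clear-needs-check-flag" ]; custom flags are added to this list
    thin_check_options = [ "-q", "--clear-needs-check-flag", "--skip-mappings" ]
}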

Hard to tell, I can't remember if I had set that option before. Might be the case; I did overwrite lvm.conf by confirming the upgrade question with "yes".
 
Possible issue: after using the NIC-naming tool, no IP communication inside VMs and CTs via vmbr0 since the upgrade to PVE 9 (test cluster).

I made a new post, because it seems to be too much to post here: https://forum.proxmox.com/threads/pve-9-beta-different-network-errors-since-upgrade.168729/

Edit: this only happened on a test cluster where vmbr0 sits on a bond0 that additionally has one of its ports offline. It did not happen on a single node I upgraded before. Details in the post.
 
Hi all,
after upgrading to Proxmox 9, there seems to be an issue with VM cloning on ZFS over iSCSI. Here is the log while trying to clone VM 100 (on the same host [pve1]):


Code:
create full clone of drive efidisk0 (local-zfs:vm-100-disk-0)
create full clone of drive tpmstate0 (local-zfs:vm-100-disk-1)
transferred 0.0 B of 4.0 MiB (0.00%)
transferred 2.0 MiB of 4.0 MiB (50.00%)
transferred 4.0 MiB of 4.0 MiB (100.00%)
transferred 4.0 MiB of 4.0 MiB (100.00%)
create full clone of drive virtio0 (san-zfs:vm-100-disk-0)
TASK ERROR: clone failed: type object 'MappedLUN' has no attribute 'MAX_LUN'

On the SAN side (Debian 13 - ZFS 2.3.2), a new LUN (vm-101-disk-0) is created, but remains in an inconsistent state:

Code:
root@san1 ~ # zfs destroy -f VMs/vm-101-disk-0
cannot destroy 'VMs/vm-101-disk-0': dataset is busy

At this point, even using fuser, lsof, etc., there are no processes using the ZVOL, but it can't be deleted until the SAN is completely rebooted.
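(For what it's worth: the kernel LIO target, rather than a userspace process, can be what still holds the zvol open, which would explain why fuser/lsof show nothing. A hedged sketch of releasing that reference first, with the object names taken from the output above:)
Code:
# remove the leftover block backstore that still references the zvol, then destroy it
targetcli /backstores/block delete VMs-vm-101-disk-0
zfs destroy VMs/vm-101-disk-0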

The problem doesn't occur if I do a backup and then a restore of the same VM.

Even the migration between pve1 and pve2 seems to have problems:
Code:
2025-07-22 13:32:29 use dedicated network address for sending migration traffic (10.10.10.11)
2025-07-22 13:32:29 starting migration of VM 101 to node 'pve2' (10.10.10.11)
2025-07-22 13:32:29 found local disk 'local-zfs:vm-101-disk-0' (attached)
2025-07-22 13:32:29 found generated disk 'local-zfs:vm-101-disk-1' (in current VM config)
2025-07-22 13:32:29 copying local disk images
2025-07-22 13:32:30 full send of rpool/data/vm-101-disk-1@__migration__ estimated size is 45.0K
2025-07-22 13:32:30 total estimated size is 45.0K
2025-07-22 13:32:30 TIME        SENT   SNAPSHOT rpool/data/vm-101-disk-1@__migration__
2025-07-22 13:32:30 successfully imported 'local-zfs:vm-101-disk-1'
2025-07-22 13:32:30 volume 'local-zfs:vm-101-disk-1' is 'local-zfs:vm-101-disk-1' on the target
2025-07-22 13:32:30 starting VM 101 on remote node 'pve2'
2025-07-22 13:32:32 volume 'local-zfs:vm-101-disk-0' is 'local-zfs:vm-101-disk-0' on the target
2025-07-22 13:32:33 start remote tunnel
2025-07-22 13:32:33 ssh tunnel ver 1
2025-07-22 13:32:33 starting storage migration
2025-07-22 13:32:33 efidisk0: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-efidisk0
drive mirror is starting for drive-efidisk0
mirror-efidisk0: transferred 0.0 B of 528.0 KiB (0.00%) in 0s
mirror-efidisk0: transferred 528.0 KiB of 528.0 KiB (100.00%) in 1s, ready
all 'mirror' jobs are ready
2025-07-22 13:32:34 switching mirror jobs to actively synced mode
mirror-efidisk0: switching to actively synced mode
mirror-efidisk0: successfully switched to actively synced mode
2025-07-22 13:32:35 starting online/live migration on unix:/run/qemu-server/101.migrate
2025-07-22 13:32:35 set migration capabilities
2025-07-22 13:32:35 migration downtime limit: 100 ms
2025-07-22 13:32:35 migration cachesize: 2.0 GiB
2025-07-22 13:32:35 set migration parameters
2025-07-22 13:32:35 start migrate command to unix:/run/qemu-server/101.migrate
2025-07-22 13:32:36 migration active, transferred 351.4 MiB of 16.0 GiB VM-state, 3.3 GiB/s
2025-07-22 13:32:37 migration active, transferred 912.3 MiB of 16.0 GiB VM-state, 1.1 GiB/s
2025-07-22 13:32:38 migration active, transferred 1.7 GiB of 16.0 GiB VM-state, 1.1 GiB/s
2025-07-22 13:32:39 migration active, transferred 2.6 GiB of 16.0 GiB VM-state, 946.7 MiB/s
2025-07-22 13:32:40 migration active, transferred 3.5 GiB of 16.0 GiB VM-state, 924.1 MiB/s
2025-07-22 13:32:41 migration active, transferred 4.4 GiB of 16.0 GiB VM-state, 888.4 MiB/s
2025-07-22 13:32:42 migration active, transferred 5.3 GiB of 16.0 GiB VM-state, 922.4 MiB/s
2025-07-22 13:32:43 migration active, transferred 6.2 GiB of 16.0 GiB VM-state, 929.7 MiB/s
2025-07-22 13:32:44 migration active, transferred 7.1 GiB of 16.0 GiB VM-state, 926.5 MiB/s
2025-07-22 13:32:45 migration active, transferred 8.0 GiB of 16.0 GiB VM-state, 951.1 MiB/s
2025-07-22 13:32:47 ERROR: online migrate failure - unable to parse migration status 'device' - aborting
2025-07-22 13:32:47 aborting phase 2 - cleanup resources
2025-07-22 13:32:47 migrate_cancel
mirror-efidisk0: Cancelling block job
mirror-efidisk0: Done.
2025-07-22 13:33:20 tunnel still running - terminating now with SIGTERM
2025-07-22 13:33:21 ERROR: migration finished with problems (duration 00:00:52)
TASK ERROR: migration problems

I can't understand what the message "type object 'MappedLUN' has no attribute 'MAX_LUN'" means, or how to remove a hanging ZVOL without rebooting the SAN.

Even creating a second VM on pve2 returns the same error:

Code:
TASK ERROR: unable to create VM 200 - type object 'MappedLUN' has no attribute 'MAX_LUN'

Update #1:
If on the SAN (Debian 13) I remove targetcli-fb v2.1.53-1.2 and manually compile targetcli-fb v3.0.1, I can also create VMs on pve2 (ID 300), but when I try to start one I get the error:

Code:
TASK ERROR: Could not find lu_name for zvol vm-300-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 113.

Obviously on the SAN side, the LUN was created correctly:

Code:
targetcli

targetcli shell version 3.0.1
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 7]
  | | o- VMs-vm-100-disk-0 ......................................... [/dev/zvol/VMs/vm-100-disk-0 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-100-disk-1 ......................................... [/dev/zvol/VMs/vm-100-disk-1 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-100-disk-2 ......................................... [/dev/zvol/VMs/vm-100-disk-2 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-101-disk-0 ........................................... [/dev/zvol/VMs/vm-101-disk-0 (32.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-200-disk-0 ......................................... [/dev/zvol/VMs/vm-200-disk-0 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-200-disk-1 ......................................... [/dev/zvol//VMs/vm-200-disk-1 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-300-disk-0 ........................................... [/dev/zvol/VMs/vm-300-disk-0 (32.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.1993-08.org.debian:01:926ae4a3339 ............................................................................. [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 2]
  |     | o- iqn.1993-08.org.debian:01:2cc4e73792e2 ............................................................... [Mapped LUNs: 2]
  |     | | o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-101-disk-0 (rw)]
  |     | | o- mapped_lun1 ..................................................................... [lun1 block/VMs-vm-300-disk-0 (rw)]
  |     | o- iqn.1993-08.org.debian:01:adaad49a50 ................................................................. [Mapped LUNs: 2]
  |     |   o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-101-disk-0 (rw)]
  |     |   o- mapped_lun1 ..................................................................... [lun1 block/VMs-vm-300-disk-0 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 2]
  |     | o- lun0 ...................................... [block/VMs-vm-101-disk-0 (/dev/zvol/VMs/vm-101-disk-0) (default_tg_pt_gp)]
  |     | o- lun1 ...................................... [block/VMs-vm-300-disk-0 (/dev/zvol/VMs/vm-300-disk-0) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]
  o- xen-pvscsi ....................................................................................................... [Targets: 0]
/>

Here is the view of the pool:

Code:
zfs list

NAME                USED  AVAIL  REFER  MOUNTPOINT
VMs                 272G  4.81T    96K  /VMs
VMs/vm-101-disk-0  34.0G  4.82T  23.1G  -
VMs/vm-300-disk-0  34.0G  4.85T    56K  -

Here is the storage.cfg view (being shared, it is identical on both nodes):

Code:
cat /etc/pve/storage.cfg

dir: local
path /var/lib/vz
content vztmpl,iso,backup

zfspool: local-zfs
pool rpool/data
content images,rootdir
sparse 1

zfs: san-zfs
blocksize 4k
iscsiprovider LIO
pool VMs
portal 10.10.10.12
target iqn.1993-08.org.debian:01:926ae4a3339
content images
lio_tpg tpg1
nodes pve1,pve2
nowritecache 1
sparse 0
zfs-base-path /dev/zvol

Update #2:
I reinstalled everything from scratch and created the first VM (100). Then I tried to create a second VM (101), which failed with the usual error "TASK ERROR: unable to create VM 101 - type object 'MappedLUN' has no attribute 'MAX_LUN'". I noticed, however, that when the VM creation fails, it does not even clean up the ZFS/targetcli configuration correctly:

Code:
san1 ~ # targetcli

targetcli shell version 2.1.53
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 2]
  | | o- VMs-vm-100-disk-0 ............................................ [/dev/zvol/VMs/vm-100-disk-0 (32.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-101-disk-0 ............................................ [/dev/zvol/VMs/vm-101-disk-0 (32.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.1993-08.org.debian:01:926ae4a3339 ............................................................................. [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 2]
  |     | o- iqn.1993-08.org.debian:01:2cc4e73792e2 ............................................................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-100-disk-0 (rw)]
  |     | o- iqn.1993-08.org.debian:01:adaad49a50 ................................................................. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-100-disk-0 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 2]
  |     | o- lun0 ....................................... [block/VMs-vm-100-disk-0 (/dev/zvol/VMs/vm-100-disk-0) (default_tg_pt_gp)]
  |     | o- lun1 ....................................... [block/VMs-vm-101-disk-0 (/dev/zvol/VMs/vm-101-disk-0) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]
  o- xen-pvscsi ....................................................................................................... [Targets: 0]
/> exit

ZVOL View:
Code:
root@san1 ~ # zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
VMs                68.0G  5.01T    96K  /VMs
VMs/vm-100-disk-0  34.0G  5.04T    56K  -
VMs/vm-101-disk-0  34.0G  5.04T    56K  -
 
Hi,
TASK ERROR: clone failed: type object 'MappedLUN' has no attribute 'MAX_LUN'
seems to be a Python error so likely propagated from a tool spawned by our (Perl-based) stack or from the server side. Please share the relevant part of the storage configuration. What tooling do you use on the server side? Are there any errors in the system logs/journal on the client or server side?
 
Hi,

seems to be a Python error so likely propagated from a tool spawned by our (Perl-based) stack or from the server side. Please share the relevant part of the storage configuration. What tooling do you use on the server side? Are there any errors in the system logs/journal on the client or server side?
Thanks for the reply!

I've updated the original post so we have all the information in one place. Since it's a test machine, I can run all the tests (including destructive ones) needed to help narrow down/solve the problem.
 
I saw that something changed with VM.Monitor in 9.0, but I have yet to find anything that states how to correct it to successfully install 9.0. How do we go about fixing this alert when starting every LXC?

Code:
user config - ignore invalid privilege 'VM.Monitor'
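
(In case it helps to narrow this down: that message usually means a custom role in /etc/pve/user.cfg still lists the VM.Monitor privilege, which the new version apparently no longer knows. A hedged sketch of finding and cleaning it up, where the role name and the remaining privilege list are just examples:)
Code:
# find which custom role still references the removed privilege
grep VM.Monitor /etc/pve/user.cfg
# then redefine that role without it (role name and privilege list are illustrative)
pveum role modify MyCustomRole --privs "VM.Audit,VM.PowerMgmt"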
 
Quick question: is it recommended to run a zpool upgrade when moving from 8 -> 9? I know it's been cautioned against in the past...
 
Quick question: is it recommended to run a zpool upgrade when moving from 8 -> 9? I know it's been cautioned against in the past...
The issue here is: once you run `zpool upgrade` on your pool, you might not be able to import it with an older kernel (ZFS version). OTOH, if you don't need one of the new features (from a quick look, fast_dedup and raidz_expansion), there's little gain in upgrading the pool IIRC.
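For the record, you can see what an upgrade would actually enable before committing to it, e.g. (the pool name rpool is just an example):
Code:
# list pools that have features not yet enabled (read-only, changes nothing)
zpool upgrade
# show the per-feature state of a specific pool
zpool get all rpool | grep feature@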
 
Hi,
Thanks for the reply!

I've updated the original post so we have all the information in one place. Since it's a test machine, I can run all the tests (including destructive ones) needed to help narrow down/solve the problem.
just wanted to let you know that I'm able to reproduce the issue here and will investigate further.

The error message appears when setting up the LUN mapping with the following commands:
Code:
[I] root@pve9a3 ~# /usr/bin/targetcli /backstores/block create name=VMs-vm-100-disk-3 dev=/dev/zvol/VMs/vm-100-disk-3
Created block storage object VMs-vm-100-disk-3 using /dev/zvol/VMs/vm-100-disk-3.
[I] root@pve9a3 ~# /usr/bin/targetcli /backstores/block/VMs-vm-100-disk-3 set attribute emulate_tpu=1
Parameter emulate_tpu is now '1'.
[I] root@pve9a3 ~# /usr/bin/targetcli /iscsi/iqn.2003-01.org.linux-iscsi.pve9a3.x8664:sn.009f96a1d8d0/tpg1/luns/ create /backstores/block/VMs-vm-100-disk-3
Created LUN 3.
type object 'MappedLUN' has no attribute 'MAX_LUN'
So apparently it creates the LUN correctly, but still prints an error, which we then catch and abort on. I still haven't checked where the original error comes from and whether it can be avoided or ignored; those are the next steps.

EDIT: Will check out whether the following can be backported: https://github.com/open-iscsi/targetcli-fb/commit/4424eba4ba9f5c66c0cd8691fd4aad87ee19640f
 
The update worked perfectly. But TOTP no longer works for my users and I had to deactivate it with “pveum user tfa delete”.

Can anyone reproduce this?
 
The update worked perfectly. But TOTP no longer works for my users and I had to deactivate it with “pveum user tfa delete”.

Can anyone reproduce this?
This could stem from the server time being out of sync. What NTP daemon are you using?
 
This could stem from the server time being out of sync. What NTP daemon are you using?
I use chrony as a daemon.

I see the following errors in the log:

user=root@pam msg=failed to begin webauthn context instantiation: The configuration was invalid
 
Hi,

just wanted to let you know that I'm able to reproduce the issue here and will investigate further.

The error message appears when setting up the LUN mapping with the following commands:
Code:
[I] root@pve9a3 ~# /usr/bin/targetcli /backstores/block create name=VMs-vm-100-disk-3 dev=/dev/zvol/VMs/vm-100-disk-3
Created block storage object VMs-vm-100-disk-3 using /dev/zvol/VMs/vm-100-disk-3.
[I] root@pve9a3 ~# /usr/bin/targetcli /backstores/block/VMs-vm-100-disk-3 set attribute emulate_tpu=1
Parameter emulate_tpu is now '1'.
[I] root@pve9a3 ~# /usr/bin/targetcli /iscsi/iqn.2003-01.org.linux-iscsi.pve9a3.x8664:sn.009f96a1d8d0/tpg1/luns/ create /backstores/block/VMs-vm-100-disk-3
Created LUN 3.
type object 'MappedLUN' has no attribute 'MAX_LUN'
So apparently it creates the LUN correctly, but still prints an error, which we then catch and abort on. I still haven't checked where the original error comes from and whether it can be avoided or ignored; those are the next steps.

EDIT: Will check out whether the following can be backported: https://github.com/open-iscsi/targetcli-fb/commit/4424eba4ba9f5c66c0cd8691fd4aad87ee19640f
Hi Fiona,
this is great news; at least we're not talking about something esoteric that happens randomly on random machines.

Regarding backporting, at this point I'm not sure if it's worth it or if it's better to adapt Proxmox 9 to handle the new standard. Many major distributions have already adopted a version higher than the one that removes "MappedLUN.MAX_LUN".

Some examples:
Debian 12 "Bookworm" (stable): 1:2.1.53-1.1
Debian "testing" and "unstable" ("trixie" and "sid"): 1:2.1.53-1.2
Fedora Rawhide: 2.1.58-5.fc43
Fedora 42: 2.1.58-4.fc42
Fedora 41: 2.1.58-3.fc41
openSUSE Tumbleweed: 3.0.1
openSUSE Leap 15.3: 2.1.54
Arch (AUR): 3.0.1-1
etc. etc.

Therefore, using any of these distributions (or higher) as a backend would be impossible.
 
I use chrony as a daemon.

What does chronyc tracking output?

I see the following errors in the log:

user=root@pam msg=failed to begin webauthn context instantiation: The configuration was invalid
Hmm, that would point towards the WebAuthn second factor, not TOTP though.

FWIW, I just tried both WebAuthn based TFA and TOTP based TFA on a PVE 9 system, and both worked here OK, so might be something setup specific.
 
What does chronyc tracking output?


Hmm, that would point towards the WebAuthn second factor, not TOTP though.

FWIW, I just tried both WebAuthn based TFA and TOTP based TFA on a PVE 9 system, and both worked here OK, so might be something setup specific.

I was able to solve the problem.

I found the following old KB article:

https://forum.proxmox.com/threads/unable-to-login-with-webauthn.107873/


I still had an entry (WebAuthn) in the /etc/pve/datacenter.cfg file.

I had probably added it in an earlier version because of my nginx proxy.
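For anyone hitting the same thing: the leftover entry in /etc/pve/datacenter.cfg would be a single line roughly of this form (hostnames are placeholders):
Code:
webauthn: rp=pve.example.com,origin=https://pve.example.com:8006,id=pve.example.com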

After I deleted the line, everything worked perfectly.



Thank you very much for your help.
 
Regarding backporting, at this point I'm not sure if it's worth it or if it's better to adapt Proxmox 9 to handle the new standard. Many major distributions have already adopted a version higher than the one that removes "MappedLUN.MAX_LUN".
Backporting on our side is actually not even an option, as the fix would apply to the server side, not to Proxmox VE. So we need to work around the error on our side, i.e. the version of targetcli-fb in Debian 13 throws that error, and Debian 13 is/will be very popular. I'll prepare a patch for that.

EDIT: for now, I created a merge request adding the fix for Trixie in Debian Salsa: https://salsa.debian.org/linux-blocks-team/targetcli-fb/-/merge_requests/12
We can hope that this still makes it into the final Debian 13 release; if it does not, we'll go for the workaround on our side.
 