Proxmox VE 9.0 BETA released!

On setups with a modified chrony.conf, the upgrade from 8 to 9 also brings up a question about that file. Maybe add this to the list on the wiki; otherwise manually configured time servers get lost, and that could cause errors in corosync or Ceph.
We normally focus on those config files where the upgrade asks about changes even if the admin did not make any local modifications.

I still added a hint for now; the section is not that crowded yet, so it doesn't really hurt. As noted in the hint, it'd be best to move your local sources definitions into, e.g., a local.sources file inside /etc/chrony/sources.d/. That way, future updates to the default config from the Debian package won't interfere with them.
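For illustration, such a drop-in could look roughly like this (the server names are just placeholders, use your own time servers); after creating the file, chronyc reload sources or a restart of chrony should pick it up:

Code:
# /etc/chrony/sources.d/local.sources -- example servers, replace with your own
server ntp1.example.com iburst
server ntp2.example.com iburst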
 
Hi,
I was able to migrate to PVE 9 by following the instructions, but had to disable the repository from the GUI.

Also, the approach from the following document, which worked on PVE 8 for PCI passthrough of the Alder Lake iGPU, did not work on PVE 9:

https://github.com/qemu/qemu/blob/master/docs/igd-assign.txt
Could you please open a separate thread and mention @fiona and @dcsapak there? In the new thread, please provide the output of pveversion -v and qm config <ID> (replacing <ID> with the ID of the VM), as well as the exact error message and an excerpt from the system logs/journal from around the time the issue occurs.
 
Good that you were able to fix this! I ran a quick upgrade test with an overprovisioned LVM-thin pool, both with and without running the migration script pre-upgrade, and didn't see this issue. Could it be that the custom thin_check_options had already been set in /etc/lvm/lvm.conf before the upgrade, then got lost during the upgrade (because lvm.conf was overwritten with the config from the package), and that is why you had to set it again post-upgrade?

Hard to tell, I can't remember if I had set that option before. Might be the case, as I overwrote lvm.conf by answering the upgrade question with "yes".
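For reference, such an entry usually sits in the global section of /etc/lvm/lvm.conf; the flags below are only an example of a common workaround for large/overprovisioned thin pools, not necessarily what I had set before:

Code:
# /etc/lvm/lvm.conf (excerpt, example values)
global {
    thin_check_options = [ "-q", "--clear-needs-check-flag", "--skip-mappings" ]
}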
 
Possible issue: after using the NIC-naming tool, no IP communication inside VMs and CTs via vmbr0 since the upgrade to PVE 9 (test cluster).

I made a new post, because it seems to be too much to post here: https://forum.proxmox.com/threads/pve-9-beta-different-network-errors-since-upgrade.168729/

Edit: this only happened on a test cluster where vmbr0 uses a bond0 that additionally has one of its ports offline. It did not happen on a single node I upgraded before. Details in the linked post.
 
Hi all,
After upgrading to Proxmox VE 9, there seems to be an issue with VM cloning on ZFS over iSCSI. Here is the log while trying to clone VM 100 (on the same host, pve1):


Code:
create full clone of drive efidisk0 (local-zfs:vm-100-disk-0)
create full clone of drive tpmstate0 (local-zfs:vm-100-disk-1)
transferred 0.0 B of 4.0 MiB (0.00%)
transferred 2.0 MiB of 4.0 MiB (50.00%)
transferred 4.0 MiB of 4.0 MiB (100.00%)
transferred 4.0 MiB of 4.0 MiB (100.00%)
create full clone of drive virtio0 (san-zfs:vm-100-disk-0)
TASK ERROR: clone failed: type object 'MappedLUN' has no attribute 'MAX_LUN'

On the SAN side (Debian 13 - ZFS 2.3.2), a new LUN (vm-101-disk-0) is created, but remains in an inconsistent state:

Code:
root@san1 ~ # zfs destroy -f VMs/vm-101-disk-0
cannot destroy 'VMs/vm-101-disk-0': dataset is busy

At this point, fuser, lsof, etc. show no processes using the ZVOL, but it cannot be destroyed until the SAN is completely rebooted.

The problem doesn't occur if I do a backup and then a restore of the same VM.

Even the migration between pve1 and pve2 seems to have problems:
Code:
2025-07-22 13:32:29 use dedicated network address for sending migration traffic (10.10.10.11)
2025-07-22 13:32:29 starting migration of VM 101 to node 'pve2' (10.10.10.11)
2025-07-22 13:32:29 found local disk 'local-zfs:vm-101-disk-0' (attached)
2025-07-22 13:32:29 found generated disk 'local-zfs:vm-101-disk-1' (in current VM config)
2025-07-22 13:32:29 copying local disk images
2025-07-22 13:32:30 full send of rpool/data/vm-101-disk-1@__migration__ estimated size is 45.0K
2025-07-22 13:32:30 total estimated size is 45.0K
2025-07-22 13:32:30 TIME        SENT   SNAPSHOT rpool/data/vm-101-disk-1@__migration__
2025-07-22 13:32:30 successfully imported 'local-zfs:vm-101-disk-1'
2025-07-22 13:32:30 volume 'local-zfs:vm-101-disk-1' is 'local-zfs:vm-101-disk-1' on the target
2025-07-22 13:32:30 starting VM 101 on remote node 'pve2'
2025-07-22 13:32:32 volume 'local-zfs:vm-101-disk-0' is 'local-zfs:vm-101-disk-0' on the target
2025-07-22 13:32:33 start remote tunnel
2025-07-22 13:32:33 ssh tunnel ver 1
2025-07-22 13:32:33 starting storage migration
2025-07-22 13:32:33 efidisk0: start migration to nbd:unix:/run/qemu-server/101_nbd.migrate:exportname=drive-efidisk0
drive mirror is starting for drive-efidisk0
mirror-efidisk0: transferred 0.0 B of 528.0 KiB (0.00%) in 0s
mirror-efidisk0: transferred 528.0 KiB of 528.0 KiB (100.00%) in 1s, ready
all 'mirror' jobs are ready
2025-07-22 13:32:34 switching mirror jobs to actively synced mode
mirror-efidisk0: switching to actively synced mode
mirror-efidisk0: successfully switched to actively synced mode
2025-07-22 13:32:35 starting online/live migration on unix:/run/qemu-server/101.migrate
2025-07-22 13:32:35 set migration capabilities
2025-07-22 13:32:35 migration downtime limit: 100 ms
2025-07-22 13:32:35 migration cachesize: 2.0 GiB
2025-07-22 13:32:35 set migration parameters
2025-07-22 13:32:35 start migrate command to unix:/run/qemu-server/101.migrate
2025-07-22 13:32:36 migration active, transferred 351.4 MiB of 16.0 GiB VM-state, 3.3 GiB/s
2025-07-22 13:32:37 migration active, transferred 912.3 MiB of 16.0 GiB VM-state, 1.1 GiB/s
2025-07-22 13:32:38 migration active, transferred 1.7 GiB of 16.0 GiB VM-state, 1.1 GiB/s
2025-07-22 13:32:39 migration active, transferred 2.6 GiB of 16.0 GiB VM-state, 946.7 MiB/s
2025-07-22 13:32:40 migration active, transferred 3.5 GiB of 16.0 GiB VM-state, 924.1 MiB/s
2025-07-22 13:32:41 migration active, transferred 4.4 GiB of 16.0 GiB VM-state, 888.4 MiB/s
2025-07-22 13:32:42 migration active, transferred 5.3 GiB of 16.0 GiB VM-state, 922.4 MiB/s
2025-07-22 13:32:43 migration active, transferred 6.2 GiB of 16.0 GiB VM-state, 929.7 MiB/s
2025-07-22 13:32:44 migration active, transferred 7.1 GiB of 16.0 GiB VM-state, 926.5 MiB/s
2025-07-22 13:32:45 migration active, transferred 8.0 GiB of 16.0 GiB VM-state, 951.1 MiB/s
2025-07-22 13:32:47 ERROR: online migrate failure - unable to parse migration status 'device' - aborting
2025-07-22 13:32:47 aborting phase 2 - cleanup resources
2025-07-22 13:32:47 migrate_cancel
mirror-efidisk0: Cancelling block job
mirror-efidisk0: Done.
2025-07-22 13:33:20 tunnel still running - terminating now with SIGTERM
2025-07-22 13:33:21 ERROR: migration finished with problems (duration 00:00:52)
TASK ERROR: migration problems

I can't understand what the message "type object 'MappedLUN' has no attribute 'MAX_LUN'" means, nor how to remove a hanging ZVOL without rebooting the SAN.

Even creating a second VM on pve2 returns the same error:

Code:
TASK ERROR: unable to create VM 200 - type object 'MappedLUN' has no attribute 'MAX_LUN'

Update #1:
If I remove targetcli-fb v2.5.3-1.2 on the SAN (Debian 13) and manually compile targetcli-fb v3.0.1, I can also create VMs on pve2 (ID 300), but when I try to start one I get the following error (a rough sketch of the build steps is below, after the error output):

Code:
TASK ERROR: Could not find lu_name for zvol vm-300-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 113.
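For anyone who wants to reproduce the targetcli-fb swap: building it from source goes roughly like this (sketch only, not a verified recipe; the repository URL is the upstream one, and depending on versions a matching rtslib-fb may need updating as well):

Code:
# rough sketch only
apt remove targetcli-fb
git clone https://github.com/open-iscsi/targetcli-fb.git
cd targetcli-fb && pip3 install . --break-system-packages   # or install into a virtualenv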

On the SAN side, though, the LUN was created correctly:

Code:
targetcli

targetcli shell version 3.0.1
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 7]
  | | o- VMs-vm-100-disk-0 ......................................... [/dev/zvol/VMs/vm-100-disk-0 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-100-disk-1 ......................................... [/dev/zvol/VMs/vm-100-disk-1 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-100-disk-2 ......................................... [/dev/zvol/VMs/vm-100-disk-2 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-101-disk-0 ........................................... [/dev/zvol/VMs/vm-101-disk-0 (32.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-200-disk-0 ......................................... [/dev/zvol/VMs/vm-200-disk-0 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-200-disk-1 ......................................... [/dev/zvol//VMs/vm-200-disk-1 (32.0GiB) write-thru deactivated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-300-disk-0 ........................................... [/dev/zvol/VMs/vm-300-disk-0 (32.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.1993-08.org.debian:01:926ae4a3339 ............................................................................. [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 2]
  |     | o- iqn.1993-08.org.debian:01:2cc4e73792e2 ............................................................... [Mapped LUNs: 2]
  |     | | o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-101-disk-0 (rw)]
  |     | | o- mapped_lun1 ..................................................................... [lun1 block/VMs-vm-300-disk-0 (rw)]
  |     | o- iqn.1993-08.org.debian:01:adaad49a50 ................................................................. [Mapped LUNs: 2]
  |     |   o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-101-disk-0 (rw)]
  |     |   o- mapped_lun1 ..................................................................... [lun1 block/VMs-vm-300-disk-0 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 2]
  |     | o- lun0 ...................................... [block/VMs-vm-101-disk-0 (/dev/zvol/VMs/vm-101-disk-0) (default_tg_pt_gp)]
  |     | o- lun1 ...................................... [block/VMs-vm-300-disk-0 (/dev/zvol/VMs/vm-300-disk-0) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]
  o- xen-pvscsi ....................................................................................................... [Targets: 0]
/>

Here is the view of the pool:

Code:
zfs list

NAME                USED  AVAIL  REFER  MOUNTPOINT
VMs                 272G  4.81T    96K  /VMs
VMs/vm-101-disk-0  34.0G  4.82T  23.1G  -
VMs/vm-300-disk-0  34.0G  4.85T    56K  -

Here is the storage.cfg view (since it is shared, it is identical on both nodes):

Code:
cat /etc/pve/storage.cfg

dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    sparse 1

zfs: san-zfs
    blocksize 4k
    iscsiprovider LIO
    pool VMs
    portal 10.10.10.12
    target iqn.1993-08.org.debian:01:926ae4a3339
    content images
    lio_tpg tpg1
    nodes pve1,pve2
    nowritecache 1
    sparse 0
    zfs-base-path /dev/zvol

Update #2:
I reinstalled everything from scratch and created the first VM (100). Then I tried to create a second VM (101), which failed with the usual error "TASK ERROR: unable to create VM 101 - type object 'MappedLUN' has no attribute 'MAX_LUN'". I noticed, however, that when the VM creation fails, the ZFS/targetcli configuration is not even cleaned up correctly:

Code:
san1 ~ # targetcli

targetcli shell version 2.1.53
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 2]
  | | o- VMs-vm-100-disk-0 ............................................ [/dev/zvol/VMs/vm-100-disk-0 (32.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- VMs-vm-101-disk-0 ............................................ [/dev/zvol/VMs/vm-101-disk-0 (32.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.1993-08.org.debian:01:926ae4a3339 ............................................................................. [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 2]
  |     | o- iqn.1993-08.org.debian:01:2cc4e73792e2 ............................................................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-100-disk-0 (rw)]
  |     | o- iqn.1993-08.org.debian:01:adaad49a50 ................................................................. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ..................................................................... [lun0 block/VMs-vm-100-disk-0 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 2]
  |     | o- lun0 ....................................... [block/VMs-vm-100-disk-0 (/dev/zvol/VMs/vm-100-disk-0) (default_tg_pt_gp)]
  |     | o- lun1 ....................................... [block/VMs-vm-101-disk-0 (/dev/zvol/VMs/vm-101-disk-0) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]
  o- xen-pvscsi ....................................................................................................... [Targets: 0]
/> exit

ZVOL View:
Code:
root@san1 ~ # zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
VMs                68.0G  5.01T    96K  /VMs
VMs/vm-100-disk-0  34.0G  5.04T    56K  -
VMs/vm-101-disk-0  34.0G  5.04T    56K  -
 
Hi,
TASK ERROR: clone failed: type object 'MappedLUN' has no attribute 'MAX_LUN'
This seems to be a Python error, so it is likely propagated from a tool spawned by our (Perl-based) stack or comes from the server side. Please share the relevant part of the storage configuration. What tooling do you use on the server side? Are there any errors in the system logs/journal on the client or server side?
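Assuming the error originates from targetcli/rtslib-fb on the server side, one quick check would be whether the installed rtslib-fb still exposes that class attribute at all (the module and attribute names below are taken from the error message, so this is only a diagnostic guess):

Code:
python3 -c "from rtslib_fb import MappedLUN; print(MappedLUN.MAX_LUN)"

If that raises the same AttributeError, the installed rtslib-fb simply no longer provides MAX_LUN. Regarding the ZVOL that stays busy: a block backstore keeps the device open inside the kernel, which is why fuser/lsof show nothing. Deleting the leftover backstore (and any LUN entries still pointing at it) before the zfs destroy might therefore avoid the reboot, e.g. (untested, names taken from your listing):

Code:
targetcli /backstores/block delete VMs-vm-101-disk-0
zfs destroy VMs/vm-101-disk-0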
 
Hi,

Thanks for the reply!

I've updated the original post so we have all the information in one place. Since it's a test machine, I can run all the tests (including destructive ones) needed to help narrow down/solve the problem.
 