Proxmox Virtual Environment 9.0 released!

Hi,

Please share the full task log and the output of pveversion -v. Is there anything in the system journal around the time the issue happens?
Hi,
full task log:
create full clone of drive (esx4:ha-datacenter/esx4-local/RMS.Ubnt.Srv/RMS.Ubnt.Srv.vmdk)
transferred 0.0 B of 128.0 GiB (0.00%)
transferred 1.3 GiB of 128.0 GiB (1.00%)
transferred 2.6 GiB of 128.0 GiB (2.00%)
transferred 3.8 GiB of 128.0 GiB (3.00%)
transferred 5.1 GiB of 128.0 GiB (4.00%)
transferred 6.4 GiB of 128.0 GiB (5.00%)
transferred 7.7 GiB of 128.0 GiB (6.01%)
transferred 9.0 GiB of 128.0 GiB (7.01%)
qemu-img: error while reading at byte 10301208576: Input/output error
TASK ERROR: unable to create VM 101 - cannot import from 'esx4:ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /run/pve/import/esxi/esx4/mnt/ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk zeroinit:/var/lib/pve/local-btrfs/images/101/vm-101-disk-0/disk.raw' failed: exit code 1

pveversion -v:
root@proxmox-gl-01:~# pveversion -v
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: 9.0.6 (running version: 9.0.6/49c767b70aeb6648)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8: 6.8.12-14
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.9
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.6
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-1
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.10
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.5.2
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.19
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1

and the log from journalctl:

Sep 10 09:42:40 proxmox-gl-01 esxi-folder-fus[1836]: proxmox-gl-01 esxi-folder-fuse[1836]: error handling request: cached read failed: error reading a body from connection

Caused by:
Connection reset by peer (os error 104)
Sep 10 09:42:40 proxmox-gl-01 pvedaemon[240542]: VM 101 creating disks failed
Sep 10 09:42:40 proxmox-gl-01 pvedaemon[240542]: unable to create VM 101 - cannot import from 'esx4:ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /run/pve/import/esxi/esx4/mnt/ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk zeroinit:/var/lib/pve/local-btrfs/images/101/vm-101-disk-0/disk.raw' failed: exit code 1
Sep 10 09:42:40 proxmox-gl-01 pvedaemon[87462]: <root@pam> end task UPID:proxmox-gl-01:0003AB9E:007973B9:68C12B0A:qmcreate:101:root@pam: unable to create VM 101 - cannot import from 'esx4:ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk' - copy failed: command '/usr/bin/qemu-img convert -p -n -f vmdk -O raw /run/pve/import/esxi/esx4/mnt/ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk zeroinit:/var/lib/pve/local-btrfs/images/101/vm-101-disk-0/disk.raw' failed: exit code 1
 
Sep 10 09:42:40 proxmox-gl-01 esxi-folder-fus[1836]: proxmox-gl-01 esxi-folder-fuse[1836]: error handling request: cached read failed: error reading a body from connection
Connection reset by peer (os error 104)
As an initial guess, it seems like there is an issue related to network communication.

Does it always fail at the same offset, i.e. around 9.0 GiB?
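If you want to narrow it down, one idea (just a sketch, with paths taken from your task log) is to re-read the region around the failing offset directly from the FUSE mount, to see whether the I/O error is reproducible outside of qemu-img:

Code:
# byte 10301208576 is roughly 9.6 GiB in; read a ~50 MiB window around it
dd if=/run/pve/import/esxi/esx4/mnt/ha-datacenter/esx4-local/Ubnt.Srv/Ubnt.Srv.vmdk of=/dev/null bs=1M skip=9800 count=50 status=progress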
 
Hi,
I have a cluster of 5 nodes with ocfs2 filesystems. Today I tried to upgrade the first node to 9.0, but the node started without the ocfs2 filesystems attached and with many lines like this in the logs:

Code:
pvestatd[2832]: unable to activate storage 'lun1' - directory is expected to be a mount point but is not mounted: '/mnt/lun1'

mount -a shows this error:
Code:
Error "mount.ocfs2: Invalid name for a cluster while trying to join the group"

I found what seems to be my issue reported on the Oracle forum here. The solution is to manually restart the o2cb service and remount the filesystems:

Code:
systemctl restart o2cb.service

I can add this to cron temporarily, but it's a rather lame solution. I would like to fix this before upgrading the rest of the nodes. Please help.

===UPDATE===
Problem solved (enabling the services got it working).
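For anyone else hitting this: the services in question are presumably the two units shipped by ocfs2-tools, so enabling them at boot should look roughly like this (a sketch; verify the unit names on your system):

Code:
systemctl enable --now o2cb.service ocfs2.service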
 
As an initial guess, it seems like there is an issue related to network communication.

Does it always fail at the same offset, i.e. around 9.0 GiB?
Maybe you're right.
It fails at a random moment.
I created a test VM with only a 20 GB HDD and the migration was successful.
 
Here is my Proxmox 9 reinstall story. I had Proxmox 8 and then upgraded it to 9, but devices were not getting passed through to virtual machines correctly (possibly due to something IOMMU related), so I am reinstalling 9 per a suggestion from someone on how to fix it.

Using a CMP51 and a Windows VM, I created a Ventoy thumb drive and then configured it for Proxmox storage. I then used the download-from-link feature to download ISOs for PVE 9 and 8.

BLUF:
I booted from the thumb drive and tried each installer. The graphical installer works for version 8 but not for 9. I was able to install version 9 using the GRUB command-line installer; I ran into issues, but was eventually able to get it to boot.

DETAILS:
When I rebooted into Proxmox, I got the following error:
error could not insert polyval_clmulni and no device with valid iso found

So, I rebooted the machine again and selected ventoy - grub2 mode - proxmox terminal install

After the install and reboot, on the blue Proxmox boot screen, hit the 'c' key for the command line.

GNU GRUB version 2.12-9+pmx2
At the grub> prompt:

1. Search for the kernel file (vmlinuz)

search -f /boot/vmlinuz-*-pve

This will print the device (like (hd0,gpt2)) where the kernel lives.

2. Search for the initrd file

search -f /boot/initrd.img-*-pve

3. Set that as root

set root=(hd0,gpt2)   # replace with the device you found; for me, both searches returned lvm/pve-root

4. Find out your exact kernel version

ls /boot/

5. Boot manually (without rdinit=/vtoy/vtoy). Be sure to replace the version numbers below with your output from ls /boot/ (the full sequence is recapped after step c).

Step a.
# These next two command lines each stay on one line; any wrap you see is a space, not a return. Very important.

linux /boot/vmlinuz-6.8.12-3-pve root=/dev/mapper/pve-root ro quiet

Step b. Again, make sure you use the correct version from your output.

initrd /boot/initrd.img-6.8.12-3-pve

Step c.
boot
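For reference, the full sequence at the grub> prompt ends up looking like this (device and versions are from my system; substitute your own):

Code:
set root=(hd0,gpt2)
linux /boot/vmlinuz-6.8.12-3-pve root=/dev/mapper/pve-root ro quiet
initrd /boot/initrd.img-6.8.12-3-pve
boot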

Making these changes permanent:

Once inside Proxmox:

Remove the bad rdinit=/vtoy/vtoy from /etc/default/grub.d/installer.cfg

nano /etc/default/grub.d/installer.cfg
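If you prefer a one-liner over editing by hand, something like this should work (an untested sketch; it assumes the option appears verbatim in the file, so back the file up first):

Code:
cp /etc/default/grub.d/installer.cfg /etc/default/grub.d/installer.cfg.bak
# remove the ' rdinit=/vtoy/vtoy' fragment from the kernel command line
sed -i 's| rdinit=/vtoy/vtoy||' /etc/default/grub.d/installer.cfg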

Rebuild grub with:

grub-mkconfig -o /boot/grub/grub.cfg

reboot

This worked for me (but not yet with ZFS), and I hope it helps someone else.
 
I found what seems to be my issue reported on the Oracle forum here. The solution is to manually restart the o2cb service and remount the filesystems:

Code:
systemctl restart o2cb.service

I can add this to cron temporarily, but it's a rather lame solution. I would like to fix this before upgrading the rest of the nodes. Please help.

===UPDATE===
Problem solved (enabling the services got it working).

I seem to be hitting the same issue as described in this issue.

I already enabled `o2cb.service`, but it didn't help in my case. Did you enable other services as well? Thanks for sharing.
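One more thing that may be worth checking, since the original mount error complains about an invalid cluster name: that the cluster name in /etc/ocfs2/cluster.conf matches the one o2cb is configured to join. A sketch, assuming the standard Debian config locations:

Code:
# cluster name as defined in the ocfs2 config
grep -A1 '^cluster:' /etc/ocfs2/cluster.conf
# cluster name o2cb joins at boot
grep O2CB_BOOTCLUSTER /etc/default/o2cb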
 
I read this:
Code:
WARN: systemd-boot package installed on legacy-boot system is not necessary, consider removing it
INFO: Check for dkms modules...

I then ran this:
Code:
root@pve5:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
DBC0-B6F8 is configured with: grub (versions: 6.8.12-14-pve, 6.8.12-15-pve)
root@pve5:~#

Code:
apt remove systemd-boot
update-grub
Is this operation OK or not?
 
I have one question: did anybody notice a problem with PVE 9 and VMs running kernel 4.4? I still need to find evidence, but it seems there is some problem with qemu-guest-agent, the 4.4 kernel, and PVE 9 where these machines lock up and just become unresponsive.
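For now I'm checking from the host whether the agent in a given VM still responds (replace <vmid> with the VM's ID):

Code:
qm agent <vmid> ping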
 
I finally logged into the mobile interface for VE 9. I'd like to give massive kudos to the folks working on that. It is a really great improvement over the VE 8 mobile interface. Thanks!
 
Is the overall consensus that 9.0 is reasonably solid? Is 9.1 coming soon? I normally wouldn't move to an x.0 release, but after needing a new chassis for my newest Proxmox machine, it's an opportunity to do some software updates.
 
Hi,

pve8to9 suggests removing the `systemd-boot` package, but my root is on a ZFS mirror. The BIOS is set to "UEFI and Legacy Boot" and apparently legacy boot is used.

Code:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
B7A1-2FB3 is configured with: grub (versions: 6.5.13-6-pve, 6.8.12-11-pve, 6.8.12-13-pve, 6.8.12-8-pve)
B7A3-6204 is configured with: grub (versions: 6.5.13-6-pve, 6.8.12-11-pve, 6.8.12-13-pve, 6.8.12-8-pve)

What's the problem with legacy boot and root on ZFS?
Since no one answered: the problem I am aware of is that GRUB doesn't support all ZFS feature flags, so you have to be careful which features you enable on the boot pool.

For the systemd-boot package, I had a quick look for you; the advice is basically that if you didn't install it manually for a specific reason, it is probably safe to remove.
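If you want to see what is actually enabled on your pool, something like this lists the feature flags (rpool is the installer's default pool name; adjust if yours differs):

Code:
zpool get all rpool | grep feature@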

 
Since no one answered: the problem I am aware of is that GRUB doesn't support all ZFS feature flags, so you have to be careful which features you enable on the boot pool.
Note that this was one of the major reasons to use proxmox-boot-tool with a dedicated vFAT partition for GRUB to load the kernel and initrd from, not only for EFI (where it's necessary anyway) but also for legacy BIOS boots.
FAT is a much simpler FS that won't get any new features added, so its support in GRUB is very mature.

That's done since the Proxmox VE 6.4-1 ISO (released in April 2021) and there's documentation for how to switch over to that boot method, e.g. if you set up a ZFS on root installation using legacy BIOS with an ISO older than the 6.4 one:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
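For reference, the switch-over described there boils down to roughly the following per disk (/dev/sdX2 is a placeholder for the 512 MB ESP partition; follow the wiki for the full procedure, including creating that partition if it does not exist yet):

Code:
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
proxmox-boot-tool refresh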

systemd-boot can basically always be removed, and the pve8to9 upgrade checklist tool should correctly flag the rare cases where systemd-boot-efi and systemd-boot-tools really are required.
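A quick way to double-check that a node really is legacy-booted before removing the package (the EFI directory in sysfs only exists on UEFI boots):

Code:
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"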
 
I updated one node in our cluster from 8 to 9 last Friday, and there was a problem with booting after the upgrade. I needed to reinstall the EFI boot files.

To fix it:
1) boot from PVE install image and use grub option "Rescue boot"
2) mount grub EFI partition
Code:
mount /dev/<device>p2 /boot/efi/
3) reinstall grub packages (may not be necessary)
Code:
apt-get install --reinstall grub-efi-amd64-bin grub-efi-amd64-signed grub-efi-amd64-unsigned grub-efi-amd64
4) reinstall grub into EFI partition
Code:
grub-install /dev/<device>p2

Nothing special, but be prepared for this possibility before you upgrade.
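Before rebooting, it may also be worth verifying the result, for example:

Code:
# list the firmware boot entries to confirm the GRUB entry is back
efibootmgr -v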
 
Hello!

I'm getting the following error when trying to install PVE 9:
command 'unsquashfs -f -dest /target -i /cdrom/pve-base.squashfs' failed with exit code 1 at /usr/share/perl5/Proxmox/Install.pm line 1050.

Right after going through the Summary

What should I do?
 

Attachments: Imagem do WhatsApp de 2025-09-24.jpg (172.1 KB)
I'm getting the following error when trying to install PVE 9:
command 'unsquashfs -f -dest /target -i /cdrom/pve-base.squashfs' failed with exit code 1 at /usr/share/perl5/Proxmox/Install.pm line 1050.

Right after going through the Summary

What should I do?
This looks like something went wrong with the installation medium.

Verify that the ISO you downloaded is correct, and that it is copied correctly to the install medium: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#installation_prepare_media
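For the first point, comparing the SHA-256 checksum of the downloaded ISO against the one published on the download page is usually enough, for example (the filename is a placeholder):

Code:
sha256sum proxmox-ve_9.0-1.iso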
 
I upgraded just fine, but my system would not boot correctly under the new kernel 6.14.8-2-pve. Recovery mode showed many errors regarding PCI, so I thought that `pci_aspm=off` was missing for the new kernel, but it was there; despite that, it would not boot. Kernel 6.8.12-13-pve booted just fine, so I pinned it.
Resolved! I added the option `libata.force=nolpm` and it booted correctly under the 6.14.11-3-pve kernel :)
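In case it helps someone else: to make such an option permanent on a GRUB-booted system, add it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerate the config, roughly:

Code:
# e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=nolpm"
nano /etc/default/grub
update-grub   # or: proxmox-boot-tool refresh, on systems managed by proxmox-boot-tool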
 