Proxmox Virtual Environment 9.0 released!

Hi,
3. Re-enable the storage in Proxmox

Finally, clear the “disabled” flag so Proxmox will use it:
Code:
pvesm set local-lvm --disable 0
systemctl restart pvedaemon pveproxy pvestatd
This flips local-lvm back on in /etc/pve/storage.cfg and reloads the storage daemons.
None of these daemons is a storage daemon. You should always use reload-or-restart rather than restart for these services, but it is not even necessary to do so for applying a storage configuration change in the first place.
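In other words, the storage configuration change alone is enough; a minimal sketch of the corrected sequence (the pvesm status call is only added as a check):
Code:
pvesm set local-lvm --disable 0                            # applied immediately via /etc/pve/storage.cfg
pvesm status                                               # verify the storage shows up as active again
systemctl reload-or-restart pvedaemon pveproxy pvestatd    # optional; not required for the config change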
 
Hi,
Code:
xavier@Mac Downloads % dd bs=1M conv=fdatasync if=proxmox-ve_9.0-1.iso of=/dev/disk4
dd: unknown conversion fdatasync
Not all Unix implementations of dd support conv=fdatasync. You can try running the command without it and then issuing an explicit sync afterwards.
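On macOS that could look like the following (a sketch based on the command above; if bs=1M is rejected by BSD dd, try lowercase bs=1m, and double-check the disk identifier before writing):
Code:
dd bs=1M if=proxmox-ve_9.0-1.iso of=/dev/disk4
sync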
 
Hi,
I just upgraded to 9.0.3 and it's fine except for one troublesome Fedora VM.

Sometimes it refuses to start with "TASK ERROR: timeout waiting on systemd", and sometimes it appears to start, but when I try to open the console I get "error: failed to run vncproxy". I can't SSH into the VM, none of the containers on the VM start, and it's pinned at 100% memory usage.

The problem VM has a PCIe device passed through, but so does another perfectly functional VM on the node; regardless, I tried not passing through any devices and that didn't help. I've toggled memory ballooning, changed the display type, and restored from a backup, but so far no success.
please open a separate thread pinging @fiona and @dcsapak, providing the VM configuration via qm config <ID> (replacing <ID> with the actual number). Is there anything in the system logs/journal of the host around the time the issue occurs?
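For example (a sketch; 123 is a placeholder for the actual VM ID and the time range is just an example):
Code:
qm config 123
journalctl --since "1 hour ago" > vm123-host-journal.txt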
 
Hi,

I'm not aware of changes in this area. The fallback is for the memory consumption of the whole process cgroup associated with the VM on the host, so that can be more than the total assigned memory for the guest:
https://git.proxmox.com/?p=qemu-ser...c094a357bc937ed92708c07a2908289ab1580e3#l2711
@juliokele to follow up here: yes, there were some changes in how we gather and calculate the "host view" of the guest's memory usage. See this commit which introduced the change: https://git.proxmox.com/?p=qemu-server.git;a=commitdiff;h=b14ae0d9a5ed527b1eb547c9b5c5073142841954
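For anyone who wants to see what that "host view" corresponds to, it is the memory accounting of the VM's cgroup on the host. A sketch, where the exact path is an assumption based on the usual cgroup v2 layout for QEMU VMs and 123 is a placeholder VM ID:
Code:
cat /sys/fs/cgroup/qemu.slice/123.scope/memory.current   # memory usage of the whole VM process cgroup, in bytes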
 
Hi,
Is there any news/fix for this issue? In short, after the update NFS storage mounts don't work anymore; see my previous messages.
please open a new thread and feel free to ping @fiona there, providing the exact commands or configuration you are using for the storage, the exact error messages, as well as excerpts from the system logs/journal. Is network communication with the NFS server fine? Can you list the shares manually via a CLI command such as rpcinfo or showmount (depending on the NFS version)? See here for what such a command looks like: https://git.proxmox.com/?p=pve-stor...b603dc2e4e4d3b47ce0d29114159c692;hb=HEAD#l183
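For example, something along these lines (a sketch; 192.0.2.10 is a placeholder for the NFS server address, and which command applies depends on the NFS version):
Code:
showmount -e 192.0.2.10    # list the exported shares
rpcinfo -p 192.0.2.10      # check that the NFS-related RPC services are reachable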
 
thanks! sounds like your system is good now. could you maybe post the full /var/log/apt/term.log for the hanging upgrade? maybe that gives us some pointers on what went wrong there...
term.log attached.

The sequence that hung is the second-to-last one in the log.
 


Hi,
Excuse me, how can I obtain dhclient on the newly installed PVE 9.0? When I configured the network for DHCP, it told me “([Errno 2] No such file or directory: ‘/sbin/dhclient’)”. I found that the package is `isc-dhcp-client`, but it couldn't be found when I tried to install it. I found that `/sbin/dhcpcd` exists, and running `apt search dhclient` returns dhcpcd-base.
you need isc-dhcp-client:
Code:
[I] root@pve9a1 ~# apt-file search /sbin/dhclient
isc-dhcp-client: /usr/sbin/dhclient       
isc-dhcp-client: /usr/sbin/dhclient-script
isc-dhcp-client-ddns: /usr/sbin/dhclient
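If apt cannot find the package at all, that usually points at the repository configuration rather than the package itself; with the Debian trixie repositories set up it installs normally (a minimal sketch; apt-file is only needed to reproduce the search above):
Code:
apt update
apt install isc-dhcp-client
apt install apt-file && apt-file update && apt-file search /sbin/dhclient   # optional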
 
Code:
pveversion -v
command not found

ifreload -avd
error: main exception: 'RawConfigParser' object has no attribute 'readfp'

Looks like no more PVE :)

Possibly easier to just reinstall everything?
Ok, network fixed by editing the ifupdown2 source (replacing readfp with read_file).

Found that I didn't change the PVE repo source to trixie :(
Changed it, updated, upgraded, dist-upgraded. But still no GUI and no VMs.
 
I can't see any problems. I connected an NFS share at 16:55 as a test.
That's weird. Anyway, I restored a backup to a local disk on one of the nodes, tried to start it, and got this surprise:


TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.

Which is weird, as it was enabled in the BIOS and nothing in the BIOS was changed.

FYI, it's a 32GB Intel N100 system, which worked absolutely fine on the previous version 8.4.

So far, I'm regretting the upgrade.
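In case it helps others hitting the same error, a quick way to check from the host shell whether hardware virtualization is still visible (standard Linux commands, nothing Proxmox-specific):
Code:
grep -cE 'vmx|svm' /proc/cpuinfo   # should be > 0 when VT-x/AMD-V is exposed
lsmod | grep kvm                   # kvm_intel (or kvm_amd) should be loaded
dmesg | grep -i kvm                # look for hints like "disabled by bios"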
 
Hello,

Is anyone else experiencing this issue or bug where migrating a VM that uses LVM-thin on iSCSI fails, the disk disappears, and the VM cannot be restored from a backup?

migration error:
Code:
2025-08-06 08:27:44 use dedicated network address for sending migration traffic ()
2025-08-06 08:27:44 starting migration of VM 122 to node 'ITY-G1013-SCAV2' ()
2025-08-06 08:27:44 found local disk 'LVM-Thin-Test:vm-122-disk-0' (attached)
2025-08-06 08:27:44 copying local disk images
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tmeta which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tdata which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 volume vgs_pve/vm-122-disk-0 already exists - importing with a different name
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tmeta which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tdata which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tmeta which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tdata which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 WARNING: Thin pool vgs_pve/lvmthin_data has unexpected transaction id 3, expecting 9.
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tmeta which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 WARNING: Device mismatch detected for vgs_pve/lvmthin_data_tdata which is accessing /dev/sda instead of /dev/mapper/mpathb.
2025-08-06 08:27:45 Thin pool vgs_pve-lvmthin_data-tpool (252:7) transaction_id is 3, while expected 9.
2025-08-06 08:27:45 Failed to suspend vgs_pve/lvmthin_data with queued messages.
2025-08-06 08:27:45 lvremove 'vgs_pve/vm-122-disk-0' error: Failed to update thin pool vgs_pve/lvmthin_data.
2025-08-06 08:27:45 lvcreate 'vgs_pve/vm-122-disk-1' error: Cannot create new thin volume, free space in thin pool vgs_pve/lvmthin_data reached threshold.
2025-08-06 08:27:45 command 'dd 'if=/dev/vgs_pve/vm-122-disk-0' 'bs=64k' 'status=progress'' failed: got signal 13
2025-08-06 08:27:45 ERROR: storage migration for 'LVM-Thin-Test:vm-122-disk-0' to storage 'LVM-Thin-Test' failed - command 'set -o pipefail && pvesm export LVM-Thin-Test:vm-122-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ITY-G1013-SCAV2' -o 'UserKnownHostsFile=/etc/pve/nodes/ITY-G1013-SCAV2/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@ -- pvesm import LVM-Thin-Test:vm-122-disk-0 raw+size - -with-snapshots 0 -allow-rename 1' failed: exit code 5
2025-08-06 08:27:45 aborting phase 1 - cleanup resources
2025-08-06 08:27:45 ERROR: migration aborted (duration 00:00:01): storage migration for 'LVM-Thin-Test:vm-122-disk-0' to storage 'LVM-Thin-Test' failed - command 'set -o pipefail && pvesm export LVM-Thin-Test:vm-122-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=ITY-G1013-SCAV2' -o 'UserKnownHostsFile=/etc/pve/nodes/ITY-G1013-SCAV2/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@ -- pvesm import LVM-Thin-Test:vm-122-disk-0 raw+size - -with-snapshots 0 -allow-rename 1' failed: exit code 5
TASK ERROR: migration aborted

restore:
Code:
restore vma archive: zstd -q -d -c /mnt/pve/NFS/dump/vzdump-qemu-104-2025_07_28-10_15_04.vma.zst | vma extract -v -r /var/tmp/vzdumptmp423970.fifo - /var/tmp/vzdumptmp423970
CFG: size: 786 name: qemu-server.conf
DEV: dev_id=1 size: 540672 devname: drive-efidisk0
DEV: dev_id=2 size: 343597383680 devname: drive-sata0
DEV: dev_id=3 size: 4194304 devname: drive-tpmstate0-backup
CTIME: Mon Jul 28 10:15:05 2025
Rounding up size to full physical extent 4.00 MiB
device-mapper: message ioctl on (252:8) failed: Device or resource busy
Failed to process message "delete 3".
no lock found trying to remove 'create' lock
error before or during data restore, some or all disks were not completely restored. VM 104 state is NOT cleaned up.
TASK ERROR: command 'set -o pipefail && zstd -q -d -c /mnt/pve/NFS/dump/vzdump-qemu-104-2025_07_28-10_15_04.vma.zst | vma extract -v -r /var/tmp/vzdumptmp423970.fifo - /var/tmp/vzdumptmp423970' failed: lvcreate 'vgs_pve/vm-104-disk-0' error: Failed to suspend vgs_pve/lvmthin_data with queued messages.


I have tried to fix it, but I keep getting the same error or a slightly different one, and still no luck.


I would appreciate any advice or help.
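In case it helps with debugging: a hedged diagnostic sketch based on the "Device mismatch" warnings above (standard LVM/multipath commands; whether duplicate device paths are really the root cause here is only an assumption):
Code:
multipath -ll                                        # list multipath maps (e.g. mpathb)
pvs -a -o pv_name,vg_name                            # see whether the VG is visible on both /dev/sda and the mpath device
grep -nE '^\s*(global_)?filter' /etc/lvm/lvm.conf    # check which devices LVM is allowed to scan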


 
Hi,

you need isc-dhcp-client:
Code:
[I] root@pve9a1 ~# apt-file search /sbin/dhclient
isc-dhcp-client: /usr/sbin/dhclient      
isc-dhcp-client: /usr/sbin/dhclient-script
isc-dhcp-client-ddns: /usr/sbin/dhclient
My PVE network environment was not configured properly, which caused the issue. I followed an online tutorial to configure it, and the istores virtual machine had working network access, which led me to mistakenly assume that the host itself had a network connection. This was a rookie mistake on my part. Thank you for your help.
 
Hi,

please open a new thread and feel free to ping @fiona there, providing the exact commands or configuration you are using for the storage, the exact error messages, as well as excerpts from the system logs/journal. Is network communication with the NFS server fine? Can you list the shares manually via a CLI command such as rpcinfo or showmount (depending on the NFS version)? See here for what such a command looks like: https://git.proxmox.com/?p=pve-stor...b603dc2e4e4d3b47ce0d29114159c692;hb=HEAD#l183
Thanks, just opened a new thread.
 
Updated two test servers to 9.0 with no issues. Updated my main server, which we use for internet, email, etc., and it booted back up but complained of an error activating pve/data. Had to run lvconvert --repair pve/data to get it back up and running. Not sure if it had anything to do with it, but I did leave the VMs running on this server during the upgrade, mainly because we have our router VM on it and it had to be left running; the other VMs could have been shut down.

A little scary for a few minutes, but so far no issues after the repair and reboot. None of the VMs reported any issues with their volumes during restart.
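For reference, the repair boiled down to this (lvconvert --repair as mentioned above; the lvs calls are only standard before/after checks):
Code:
lvs -a pve                    # inspect the thin pool and its _tmeta/_tdata volumes
lvconvert --repair pve/data   # rebuild the thin pool metadata
lvs -a pve                    # confirm the pool activates cleanly again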
 
I'm trying to debug it, but I'm experiencing a very strange network issue. I have a 2-node cluster: one production node upgraded from 8.4.8 to 9.0.3, and the other still on the latest 8.4.8.

After the 8-to-9 upgrade completed, I started getting timeouts in the web UI between the two clustered nodes, roughly 45 minutes after the upgrade, by which point I had already moved a bunch of VMs back to the node.

After about 5 minutes the issue resolves itself, the two nodes can communicate in the web UI again, and I can start a new migration. (This has happened 2-3 times recently and has always resolved itself.)
 
Ok, network fixed by editing the ifupdown2 source (replacing readfp with read_file).

Found that I didn't change the PVE repo source to trixie :(
Changed it, updated, upgraded, dist-upgraded. But still no GUI and no VMs.
I believe I have the same issue. Where did you edit the ifupdown2 file to make the replacement?

Thanks.
 
I believe I have the same issue. Where did you edit the ifupdown2 file to make the replacement?
The main issue here is probably that the sources for the Debian repositories got updated to trixie, but there are still bookworm PVE repositories left. Please make sure that every repository is correctly configured and then dist-upgrade.
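A minimal sketch for checking this, assuming the default repository file locations:
Code:
grep -r . /etc/apt/sources.list /etc/apt/sources.list.d/   # every entry should point at trixie, none at bookworm
apt update
apt dist-upgrade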
 
Would love to do that, but because of this issue I cannot connect to run the dist-upgrade command. Networking is hard down until I can get the interface to show up, and actually connect to the network. I can edit the sources.list but still have no network connectivity. I need to fix the issue long enough to be able to run the dist-upgrade again.
 
Would love to do that, but because of this issue I cannot connect to run the dist-upgrade command. Networking is hard down until I can get the interface to show up, and actually connect to the network. I can edit the sources.list but still have no network connectivity. I need to fix the issue long enough to be able to run the dist-upgrade again.

You could try manually configuring your interface in the meantime:

Code:
ip a a <cidr> dev <network_device>
ip link set up <network_device>
ip r a default via <gateway>
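For example, with purely hypothetical values (replace them with your actual address, NIC name and gateway):
Code:
ip a a 192.168.1.10/24 dev eno1
ip link set up eno1
ip r a default via 192.168.1.1
This only lasts until the next reboot or ifreload, but it should be enough to reach the repositories and run the dist-upgrade.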