Proxmox VE 8.0 released!

What about the "Remove all USB controllers from VM" option?

I checked PVE 8, and it is still hardcoded:
Code:
-device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2

See this old post:
https://forum.proxmox.com/threads/usb-emulation-no-option-to-disable.122625/
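For reference, a quick way to check what actually gets passed to QEMU is to dump the generated command line for a VM (the VM ID 100 below is just an example):

Code:
# Print the QEMU command line Proxmox generates for VM 100, one argument
# per line, and filter for USB-related devices.
qm showcmd 100 --pretty | grep -i usb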
I'd consider this a bug in the guest OS if it causes the kind of increased CPU load you describe in the post you linked from a few months ago.

But anyhow, it might make sense to add an option that allows disabling USB support completely for a VM.
Feel free to open an enhancement request over at our Bugzilla instance: https://bugzilla.proxmox.com/ so that we can keep track of this.
 
Is anyone having issues with snapshots after upgrading to 8.0.3? Full Ceph, 7 nodes; I'm getting this on every VM at the moment. Everything else is stable, but I do have one host offline for maintenance.

TASK ERROR: VM 314 qmp command 'blockdev-snapshot-internal-sync' failed - Failed to create snapshot 'cbv' on device 'drive-scsi0': Operation not supported
 
Yeah, if you install in BIOS mode then UEFI boot won't be set up, and switching later cannot work without some manual interaction. For example, if the UEFI interface isn't available, we cannot register a boot entry in the EFI variables.

Why don't you also install while booted in UEFI mode?
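A minimal way to double-check which mode a host was actually booted in (standard sysfs path, nothing Proxmox-specific):

Code:
# If /sys/firmware/efi exists, the running system was booted via UEFI,
# otherwise via legacy BIOS.
if [ -d /sys/firmware/efi ]; then echo "UEFI boot"; else echo "legacy BIOS boot"; fi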
@t.lamprecht, I just want to test whether the switch works, as I have other servers that use legacy BIOS; if this works, I may switch them to UEFI.
 
Hi,
Is anyone having issues with snapshots after upgrading to 8.0.3? Full Ceph, 7 nodes; I'm getting this on every VM at the moment. Everything else is stable, but I do have one host offline for maintenance.

TASK ERROR: VM 314 qmp command 'blockdev-snapshot-internal-sync' failed - Failed to create snapshot 'cbv' on device 'drive-scsi0': Operation not supported
please share the VM configuration (qm config <ID> --current) and the output of pveversion -v. I assume you don't use krbd in the storage configuration, but was that changed recently?

Do you also see a message in the system log/journal like
Code:
QEMU[55791]: failed to create snap: Operation not supported
?
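If you want to check the krbd flag yourself, it lives in the storage configuration; a quick way to look at the relevant entry (assuming a default /etc/pve/storage.cfg and an RBD-type storage):

Code:
# Show RBD storage entries from the cluster-wide storage config;
# a "krbd 1" line means the kernel RBD client is used instead of librbd.
grep -A 8 '^rbd:' /etc/pve/storage.cfg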
 
Hi,

please share the VM configuration (qm config <ID> --current) and the output of pveversion -v. I assume you don't use krbd in the storage configuration, but was that changed recently?

Do you also see a message in the system log/journal like
Code:
QEMU[55791]: failed to create snap: Operation not supported
?

Hi Fiona,

Here are the outputs as requested:

Code:
root@PMHOST7:~# qm config 100 --current
agent: 1
bios: ovmf
boot: order=ide0;ide2;net0
cores: 1
efidisk0: VM_Storage_CephFS:vm-100-disk-1,efitype=4m,pre-enrolled-keys=1,size=528K
ide0: VM_Storage_CephFS:vm-100-disk-0,cache=writeback,size=64G
ide2: none,media=cdrom
machine: pc-q35-6.1
memory: 6192
meta: creation-qemu=6.1.1,ctime=1648079459
name: CUS-AD01
net0: virtio=5E:5B:56:01:99:A6,bridge=vmbr0,firewall=1,tag=50
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=b971621a-9cd0-41c0-a4cd-badb1f9d85ca
sockets: 2
tpmstate0: VM_Storage_CephFS:vm-100-disk-2,size=4M,version=v2.0
vga: virtio
vmgenid: 4a56b214-954c-469b-8ff5-977efc5ef30d

pveversion -v (note: we are working on getting kernel 6.2 to boot; we've been working through lock-ups during USB bring-up)

Code:
proxmox-ve: 8.0.1 (running kernel: 5.15.108-1-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.3
pve-kernel-5.15: 7.4-4
pve-kernel-6.2.16-4-pve: 6.2.16-5
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.60-2-pve: 5.15.60-2
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph: 17.2.6-pve1+3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.6
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.4
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.1-1
proxmox-backup-file-restore: 3.0.1-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.2
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

We do see the error in the syslog as well:

Code:
2023-08-06T19:26:27.965333-07:00 PMHOST7 pvedaemon[3484137]: <root@pam> snapshot VM 100: sdf
2023-08-06T19:26:27.965694-07:00 PMHOST7 pvedaemon[2471180]: <root@pam> starting task UPID:pMHOST7:003529E9:0837754C:64D05653:qmsnapshot:100:root@pam:
2023-08-06T19:26:34.253864-07:00 PMHOST7 QEMU[278234]: failed to create snap: Operation not supported
2023-08-06T19:26:34.254373-07:00 PMHOST7 pvedaemon[3484137]: VM 100 qmp command failed - VM 100 qmp command 'blockdev-snapshot-internal-sync' failed - Failed to create snapshot 'sdf' on device 'drive-ide0': Operation n>
2023-08-06T19:26:34.256944-07:00 PMHOST7 pvedaemon[3484137]: snapshot create failed: starting cleanup
2023-08-06T19:26:34.591399-07:00 PMHOST7 pvedaemon[3484137]: VM 100 qmp command 'blockdev-snapshot-internal-sync' failed - Failed to create snapshot 'sdf' on device 'drive-ide0': Operation not supported
2023-08-06T19:26:34.598850-07:00 PMHOST7 pvedaemon[2471180]: <root@pam> end task UPID:pMHOST7:003529E9:0837754C:64D05653:qmsnapshot:100:root@pam: VM 100 qmp command 'blockdev-snapshot-internal-sync' failed - Failed to
 
Code:
root@PMHOST7:~# qm config 100 --current
agent: 1
bios: ovmf
boot: order=ide0;ide2;net0
cores: 1
efidisk0: VM_Storage_CephFS:vm-100-disk-1,efitype=4m,pre-enrolled-keys=1,size=528K
ide0: VM_Storage_CephFS:vm-100-disk-0,cache=writeback,size=64G
ide2: none,media=cdrom
machine: pc-q35-6.1
memory: 6192
meta: creation-qemu=6.1.1,ctime=1648079459
name: CUS-AD01
net0: virtio=5E:5B:56:01:99:A6,bridge=vmbr0,firewall=1,tag=50
numa: 0
onboot: 1
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=b971621a-9cd0-41c0-a4cd-badb1f9d85ca
sockets: 2
tpmstate0: VM_Storage_CephFS:vm-100-disk-2,size=4M,version=v2.0
vga: virtio
vmgenid: 4a56b214-954c-469b-8ff5-977efc5ef30d
The storage is called VM_Storage_CephFS, but I guess it's actually an RBD storage? Otherwise, there would need to be a .raw suffix in the disk names.

pveversion -v (note: we are working on getting kernel 6.2 to boot; we've been working through lock-ups during USB bring-up)

proxmox-ve: 8.0.1 (running kernel: 5.15.108-1-pve)
Hmm, probably not related, because the snapshot is attempted via librbd, not krbd.

Code:
2023-08-06T19:26:34.253864-07:00 PMHOST7 QEMU[278234]: failed to create snap: Operation not supported
That means the error code comes from librbd, which QEMU uses to interact with the image. QEMU issues the request and librbd reports that it's not supported; the question is why.

Can you share the output of rbd -p <pool> info vm-100-disk-0 and rbd -p <pool> snap ls vm-100-disk-0?

Can you take a snapshot with rbd -p <pool> snap create vm-100-disk-0@test-snap? You can remove it again with rbd -p <pool> snap remove vm-100-disk-0@test-snap.
 
The storage is called VM_Storage_CephFS, but I guess it's actually an RBD storage? Otherwise, there would need to be a .raw suffix in the disk names.


Hmm, probably not related, because the snapshot is attempted via librbd, not krbd.


That means the error code comes from librbd, which QEMU uses to interact with the image. QEMU issues the request and librbd reports that it's not supported; the question is why.

Can you share the output of rbd -p <pool> info vm-100-disk-0 and rbd -p <pool> snap ls vm-100-disk-0?

Can you take a snapshot with rbd -p <pool> snap create vm-100-disk-0@test-snap? You can remove it again with rbd -p <pool> snap remove vm-100-disk-0@test-snap.

Hi Fiona, yeah, it's RBD; we just named it that way for convenience.

Odd, I'm getting the following error:

root@PMHOST7:~# rbd -p VM_Storage_CephFS info vm-100-disk-0
rbd: error opening pool 'VM_Storage_CephFS': (2) No such file or directory

We did upgrade to Ceph 17 a while ago, but otherwise it's been running fine.
 
root@PMHOST7:~# rbd -p VM_Storage_CephFS info vm-100-disk-0
rbd: error opening pool 'VM_Storage_CephFS': (2) No such file or directory
Does the pool have a different name? Check it in Datacenter->Storage or in the /etc/pve/storage.cfg file.
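For context, the storage ID shown in the GUI and the actual Ceph pool name are separate fields in /etc/pve/storage.cfg; an RBD entry looks roughly like the following (values here are only illustrative):

Code:
rbd: VM_Storage_CephFS
        content images
        krbd 0
        pool <actual-ceph-pool-name>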
 
Does the pool have a different name? Check it in Datacenter->Storage or in the /etc/pve/storage.cfg file.
Sorry, I had a brain fart and was going by the displayed storage name rather than the pool name. Here are the results:

Code:
root@PMHOST7:/etc/pve# rbd -p NVME_CephFS_data info vm-100-disk-0
rbd image 'vm-100-disk-0':
        size 64 GiB in 16384 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 175bce238da0
        block_name_prefix: rbd_data.175bce238da0
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Wed Mar 23 16:51:01 2022
        access_timestamp: Wed Aug  9 02:01:00 2023
        modify_timestamp: Wed Aug  9 02:01:10 2023

Code:
root@PMHOST7:/etc/pve# rbd -p NVME_CephFS_data snap ls vm-100-disk-0
root@PMHOST7:/etc/pve#

Code:
root@PMHOST7:/etc/pve# rbd -p NVME_CephFS_data snap create vm-100-disk-0@test-snap
Creating snap: 10% complete...2023-08-09T02:04:20.644-0700 7f24be4536c0 -1 librbd::SnapshotCreateRequest: failed to allocate snapshot id: (95) Operation not supported
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (95) Operation not supported
root@PMHOST7:/etc/pve#
 
@JCNED @simmon

EDIT: oh, it seems it's also linked in the other thread @simmon mentioned. I re-found it by searching for "failed to allocate snapshot id: (95) Operation not supported".

I found: https://www.spinics.net/lists/ceph-users/msg76705.html
Same question for you: was the pool ever attached to a CephFS?

And please note https://www.spinics.net/lists/ceph-users/msg76719.html

Code:
>> This is not and has never been supported.
>
>
> Do you mean 1) using the same erasure coded pool for both rbd and cephfs, or  2) multiple cephfs filesystem using the same erasure coded pool via ceph.dir.layout.pool="ecpool_hdd"?

(1), using the same EC pool for both RBD and CephFS.

Code:
> > So I guess I should use a different ec datapool for rbd and for each of the cephfs filesystems in the future, correct?
>
> Definitely a different EC pool for RBD (i.e. don't mix with CephFS).
> Not sure about the _each_ of the filesystems bit -- Venky or Patrick can
> comment on whether sharing an EC pool between filesystems is OK.

That's true for CephFS too -- different pools for each ceph file
system is recommended.

You can use the `--allow-dangerous-metadata-overlay` option when
creating a ceph file system to reuse metadata and data pools if those
are already in use, however, it's only to be used during emergency
situations.
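One way to check whether a pool has been tagged for both uses is to look at its application metadata (a diagnostic sketch using the standard Ceph CLI; the pool name is taken from the output earlier in the thread):

Code:
# Lists the applications enabled on the pool; seeing both "rbd" and "cephfs"
# here would indicate the pool has been used for both.
ceph osd pool application get NVME_CephFS_data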
 
Are there any plans to release an installer that works with the Dell HBA330? Right now I have to install 7 and upgrade to 8 to get a system that boots.
 
Just stopping in to thank the Proxmox Dev team and all contributors for doing great work as always. Both homelab and production clusters at work upgraded to 8 following the provided instructions without any issues. I really appreciate the addition of an enterprise ceph repo.
 
Sure, no problem

  • Proxmox Shell
  • sudo nano /etc/apt/sources.list
  • Add non-free at the end of the first bookworm main contrib line
  • Ctrl+X, then save and exit
  • sudo apt update
  • sudo apt install dkms
  • sudo apt install r8168-dkms
  • reboot
  • ethtool -i <interface-name> (e.g. ethtool -i enp1s0) to check the loaded driver; it should show r8168

In case of a new, clean install from the Proxmox 8.0.2 ISO plus updates (without the kernel headers, the driver won't build); a consolidated command sketch follows after these steps:
  • Proxmox Shell
  • sudo nano /etc/apt/sources.list
  • Add non-free at the end of the first bookworm main contrib line
  • Ctrl+X, then save and exit
  • sudo apt update
  • sudo apt install dkms
  • sudo apt install pve-headers
  • sudo apt install r8168-dkms
  • reboot
  • ethtool -i <interface-name> (e.g. ethtool -i enp1s0) to check the loaded driver; it should show r8168
To revert to the built-in kernel driver:
sudo apt purge --auto-remove r8168-dkms
sudo apt purge --auto-remove dkms
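Taken together, the above boils down to roughly the following (a sketch assuming a clean Proxmox 8 install with the stock Debian bookworm repositories; commands run as root, so sudo is optional):

Code:
# /etc/apt/sources.list: the first bookworm line should end up looking like
#   deb http://ftp.debian.org/debian bookworm main contrib non-free
# then pull in DKMS, the Proxmox kernel headers and the Realtek driver:
apt update
apt install dkms pve-headers r8168-dkms
reboot
# after the reboot, confirm which driver the NIC uses (replace enp1s0):
ethtool -i enp1s0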
 
It appears the 6.2 kernel in Proxmox 8 is much more conservative than the opt-in 6.1 kernel from Proxmox 7 about keeping threads on the same CPU core/thread instead of moving them around. This quite easily causes individual CPU cores to hit their thermal limits. Is there some way to disable this behavior and get the kernel to migrate threads every few hundred milliseconds so that individual cores don't thermally throttle?
 
In case of a new, clean install from the Proxmox 8.0.2 ISO plus updates (without the kernel headers, the driver won't build):
  • Proxmox Shell
  • sudo nano /etc/apt/sources.list
  • Add non-free at the end of the first bookworm main contrib line
  • Ctrl+X, then save and exit
  • sudo apt update
  • sudo apt install dkms
  • sudo apt install pve-headers
  • sudo apt install r8168-dkms
  • reboot
  • ethtool -i <interface-name> (e.g. ethtool -i enp1s0) to check the loaded driver; it should show r8168
To revert to the built-in kernel driver:
sudo apt purge --auto-remove r8168-dkms
sudo apt purge --auto-remove dkms

I am currently using the r8168-dkms kernel driver. Does anyone know whether the driver is rebuilt for the new kernel whenever I do a kernel update? Kernel 6.2.16-8 with its headers is waiting in my update pipeline.
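One way to see which kernels the module has actually been built for is the standard DKMS status command (just a quick check, nothing Proxmox-specific):

Code:
# Lists every DKMS module together with the kernel versions it is built/installed for.
dkms status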

Also: Doesn't it make sense to add "non-free" to the security source as well?
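For reference, adding non-free to all three Debian lines would look roughly like this (an illustrative /etc/apt/sources.list for bookworm; the exact mirror URLs may differ on your install):

Code:
deb http://ftp.debian.org/debian bookworm main contrib non-free
deb http://ftp.debian.org/debian bookworm-updates main contrib non-free
deb http://security.debian.org/debian-security bookworm-security main contrib non-free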
 
