Proxmox VE 8.1 released!

With the old 8K default, if you used a raidz1/2/3 to store VMs, you always had to increase the volblocksize.
With the new 16K default, you only have to increase it on a raidz1/2/3 once you use more than 3 disks. ;)

The great blog article is gone, but there is still Matt Ahrens' table breaking down capacity loss by volblocksize, raidz type, and number of disks: https://docs.google.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/

Keep in mind that the table uses "block size in sectors" and not "volblocksize". So you have to multiply the "block size in sectors" by 512B for ashift=9, by 4K for ashift=12, by 8K for ashift=13, and so on, to get the corresponding volblocksize.
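For example, a quick shell calculation (the ashift and sector count below are just illustrative values, not taken from the table):
Code:
# "block size in sectors" from the table, times the sector size implied by ashift
ashift=12       # 4K sectors
sectors=4       # example row value from the table
echo $(( sectors * (1 << ashift) ))   # prints 16384, i.e. a 16K volblocksize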
Thanks!

Is that table only for RAIDZ* pools? I'm storing my VM disks in an all SSD pool containing two mirror vdevs. I read pretty early on to avoid using Z1/2/3 for VM storage if I could so as to avoid potential performance complications. And I don't have enough VMs that using a mirror pool feels wasteful.
 
Hi, I'm having issues with the notification system. "package-updates" is enabled with a match severity of "info", but I am not getting email notifications. The notification target is SMTP, and notifications for backup jobs work.
 
The biggest problem I currently see is that you can't restore such a config. At least that was the case last time I checked.
Thanks for your thoughtful reply. Between you and @Dunuin, I'm feeling pretty confident about my setup for the moment. :)

You can restore a configuration like that, but it's annoying: you have to restore it from backup with the VM or CT powered off, and then move the virtual disks back to the correct storage before you turn it on.

Any solution I can think of to automate this is inconvenient, e.g. a per-VM script that does the restore, moves the storage, and then starts the VM.

I don't use live migration yet, but I suspect this approach would be incompatible with it.
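A rough sketch of what such a per-VM script could look like (the VM ID, backup archive, and storage names are placeholders, and the exact sequence is an assumption, not a tested recipe):
Code:
#!/bin/bash
# restore the VM (while it is powered off) from its backup archive onto a temporary storage
VMID=100
ARCHIVE=/mnt/backups/dump/vzdump-qemu-100.vma.zst
qmrestore "$ARCHIVE" "$VMID" --storage local-lvm
# move the virtual disk back to the intended storage, then start the VM
qm disk move "$VMID" scsi0 ssd-mirror --delete
qm start "$VMID"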
 
Yes, it got delayed a bit more than we initially expected, I'm afraid. That was partially due to holidays and a few bugs here and there, from upstream projects and on our side (e.g. some kernel HW issues like the aacraid one, so nothing all too big), for which some people wanted the fixes included in the ISO refresh.

But we finally made a cut (as there's always something coming up), and after QA reported no regressions, an updated ISO is now available for download via our CDN or as a torrent; see the respective links on our website.

Thank you very much. :)
 
Hi,
Do you have the notify setting in your /etc/pve/datacenter.cfg set properly? See `man 5 datacenter.cfg` for details.
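For example, a quick way to check what is currently configured (just a sketch; the available keys are version-dependent and documented in the man page):
Code:
# show any notification-related settings currently present
grep -i notify /etc/pve/datacenter.cfg
# the accepted keys and values are described here
man 5 datacenter.cfg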

EDIT: please don't double-post in the future, and continue the discussion in the other thread: https://forum.proxmox.com/threads/package-update-notifs-not-working.141182/
 
A couple of issues we have seen after installing 8.1.2 and updating from the repositories. These cluster nodes have shared drives set up from iSCSI targets. The iSCSI setup is using multipath:

root@px01:~# multipath -ll
mpath-hdd0 (360014050212ef8400006000000000000) dm-0 PETASAN,RBD
size=30T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
|- 14:0:0:0 sdc 8:32 active ready running
|- 15:0:0:0 sdd 8:48 active ready running
|- 17:0:0:0 sdf 8:80 active ready running



lsblk will show different results on the nodes of the cluster; many will show missing drives:
root@px01:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 372.6G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 371.6G 0 part
sdb 8:16 0 372.6G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 371.6G 0 part
sdc 8:32 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--252--disk--1 252:2 0 4M 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sdd 8:48 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--252--disk--1 252:2 0 4M 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sde 8:64 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--252--disk--1 252:2 0 4M 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sdf 8:80 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--252--disk--1 252:2 0 4M 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sdg 8:96 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
└─psan--ssd0--lvm-vm--318--disk--0 252:7 0 250G 0 lvm
sdh 8:112 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
└─psan--ssd0--lvm-vm--318--disk--0 252:7 0 250G 0 lvm
sdi 8:128 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
└─psan--ssd0--lvm-vm--318--disk--0 252:7 0 250G 0 lvm
sdj 8:144 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
└─psan--ssd0--lvm-vm--318--disk--0 252:7 0 250G 0 lvm


root@px02:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 372.6G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 371.6G 0 part
sdb 8:16 0 372.6G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 1G 0 part
└─sdb3 8:19 0 371.6G 0 part
sdc 8:32 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sdd 8:48 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sde 8:64 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sdf 8:80 0 30T 0 disk
└─mpath-hdd0 252:0 0 30T 0 mpath
├─psan--hhd0--vlm-vm--252--disk--0 252:1 0 1000G 0 lvm
├─psan--hhd0--vlm-vm--318--disk--0 252:3 0 250G 0 lvm
├─psan--hhd0--vlm-vm--241--disk--0 252:4 0 128G 0 lvm
└─psan--hhd0--vlm-vm--242--disk--0 252:5 0 128G 0 lvm
sdg 8:96 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
sdh 8:112 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
sdi 8:128 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath
sdj 8:144 0 8T 0 disk
└─mpath-ssd0 252:6 0 8T 0 mpath

even though the node will show the data drive in the GUI. Reboot the nodes with the missing data and it all shows up again.


Secondly, if you delete a VM and tell it to remove the disks in the same step from the GUI, you have to run:

dmsetup remove -f ???? to clean up the mess.
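If it helps, one way to find the name to pass in (the mapping name below is just an example taken from the lsblk output above):
Code:
# list the remaining device-mapper entries to spot the stale LV mapping
dmsetup ls --tree
# then remove the leftover mapping by its name
dmsetup remove -f psan--hhd0--vlm-vm--242--disk--0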




Lastly, when creating an iSCSI storage entry, some items are not replicated to all cluster nodes. This leaves the node that was used for the creation with a valid link, but the others are missing the entry in /etc/iscsi/nodes/, causing all other cluster servers to show the storage with a ? next to it. Copying the missing entries to the other servers of the cluster rectifies the issue.
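A rough sketch of that workaround, run from the node where the storage was created (the hostname and target directory are placeholders):
Code:
# copy the missing iSCSI node records to another cluster member
scp -r /etc/iscsi/nodes/<target-iqn> px02:/etc/iscsi/nodes/
# then have that node log in to the target(s)
ssh px02 iscsiadm -m node --loginall=automatic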
 
Is PVE also affected by this? https://www.phoronix.com/news/OpenZFS-Encrypt-Corrupt

A Phoronix reader wrote in today about an OpenZFS data corruption bug when employing native encryption and making use of send/recv support. Making use of zfs send on an encrypted dataset can cause one or more snapshots to report errors. OpenZFS data corruption issues in this area have apparently been known for years.
 
It depends on how you look at it. We do not support ZFS encryption, for the reason given in the Phoronix post (I only glanced at it briefly); see: https://bugzilla.proxmox.com/show_bug.cgi?id=2350#c25

But we did not explicitly patch encryption out of our ZFS packages, so you can manually set up encryption, and might then experience the corruption issues described in the Phoronix post.
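(For reference, manually enabling it on a dataset would look roughly like this; the dataset name and passphrase-based key are just an example:)
Code:
# create an encrypted dataset protected by a passphrase (example names)
zfs create -o encryption=on -o keyformat=passphrase rpool/data/secure
# confirm that the dataset actually uses encryption
zfs get encryption rpool/data/secure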

I hope this explains it!
 
So `zpool get all | grep encr` says yes on my PVE installation; is this something different?
 
You mean output like the following?
Code:
zpool get all | grep encr
rpool   feature@encryption             enabled                        local
Or do you see something else?
 
No, that's it. So am I misreading this? Doesn't that mean that local encryption is enabled?
 
The feature flag is enabled on the pool; this does not mean that it's actively used. See `man zpool-features`.

To see whether a particular dataset uses the feature, `zfs get encryption <datasetname>` should work.
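For example, to check every dataset on the root pool at once (pool name assumed to be rpool, as in the output above):
Code:
# list the encryption property recursively; "off" means native encryption
# is not in use for that dataset
zfs get -r -t filesystem,volume encryption rpool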
 
I don't want to create a new post here, but has anyone had problems with the Terraform provider https://registry.terraform.io/providers/Telmate/proxmox/latest/docs from Telmate?

There is a serious debate going on at https://github.com/Telmate/terraform-provider-proxmox/issues related to that provider.
Many people, including me, are complaining about the cloud-init drive disappearing. I also have a setup here https://github.com/sonic-networks/terraform/blob/master/proxmox/sonic/main.tf based on https://gist.github.com/aw/ce460c2100163c38734a83e09ac0439a, which describes how to add `cicustom`, and on this tutorial, which uses that attachment in its Terraform code: https://yetiops.net/posts/proxmox-terraform-cloudinit-saltstack-prometheus/#define-an-instance

That setup was working on Proxmox 7, but on Proxmox 8 it fails when creating an instance from a template: the cloud-init drive disappears.


And when the scsi0 disk is resized, the cloud-init drive vanishes and only an ide2 CD-ROM appears.



Telmate on GitHub is trying to adjust the provider, but it looks like the API for PVE 8 changed a bit, and that's why there is trouble with the current scripts. Why do developers always have to mess with the API :/
 
Hi,
Please share the exact API/CLI commands to reproduce the issue. Resizing the scsi0 disk doesn't make the ide3 drive disappear:
Code:
root@pve8a1 ~ # qm create 111222 --scsi0 lvmthin:2 --ide3 lvmthin:cloudinit
ide3: successfully created disk 'lvmthin:vm-111222-cloudinit,media=cdrom'
scsi0: successfully created disk 'lvmthin:vm-111222-disk-0,size=2G'
root@pve8a1 ~ # qm config 111222                                           
boot: order=scsi0;ide3
ide3: lvmthin:vm-111222-cloudinit,media=cdrom
meta: creation-qemu=8.1.5,ctime=1707896863
scsi0: lvmthin:vm-111222-disk-0,size=2G
smbios1: uuid=ea848807-10da-4715-82d7-9dba3c4f3f7a
vmgenid: 61cb4255-6d22-4b0e-875f-f6bafe544d24
root@pve8a1 ~ # qm disk resize 111222 scsi0 +1M
  Rounding size to boundary between physical extents: 2.00 GiB.
  Size of logical volume lvmthin/vm-111222-disk-0 changed from 2.00 GiB (512 extents) to 2.00 GiB (513 extents).
  Logical volume lvmthin/vm-111222-disk-0 successfully resized.
root@pve8a1 ~ # qm config 111222               
boot: order=scsi0;ide3
ide3: lvmthin:vm-111222-cloudinit,media=cdrom
meta: creation-qemu=8.1.5,ctime=1707896863
scsi0: lvmthin:vm-111222-disk-0,size=2049M
smbios1: uuid=ea848807-10da-4715-82d7-9dba3c4f3f7a
vmgenid: 61cb4255-6d22-4b0e-875f-f6bafe544d24
root@pve8a1 ~ # qm set 111222 --ide2 none,media=cdrom
update VM 111222: -ide2 none,media=cdrom
root@pve8a1 ~ # qm config 111222                     
boot: order=scsi0;ide3;ide2
ide2: none,media=cdrom
ide3: lvmthin:vm-111222-cloudinit,media=cdrom
meta: creation-qemu=8.1.5,ctime=1707896863
scsi0: lvmthin:vm-111222-disk-0,size=2049M
smbios1: uuid=ea848807-10da-4715-82d7-9dba3c4f3f7a
vmgenid: 61cb4255-6d22-4b0e-875f-f6bafe544d24
 
I will answer you here soon: https://forum.proxmox.com/threads/proxmox-with-terraform-and-ansible.132028/
 
Hey everybody,

is it possible to update a PVE environment from Bullseye/PVE 7 to Bookworm/PVE 8 via "Proxmox Offline Mirror"?
 
Yes, you just need to mirror the respective Proxmox and Debian repositories.

FWIW, to make the Debian ones quite a bit smaller you can skip a bit of stuff, like:
Code:
skip-packages 'linux-image-*'
skip-sections games
skip-sections debug

Once you have mirrored those repos, you can follow the how-to without any divergence, besides naturally pointing the repo entries at your offline mirrors when changing them from bullseye to bookworm.
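For example, the updated repository entries could end up looking roughly like this (the hostname and paths are placeholders for wherever your offline mirrors are served from):
Code:
# /etc/apt/sources.list (Debian) and /etc/apt/sources.list.d/pve.list (Proxmox)
deb http://mirror.example.local/debian bookworm main contrib
deb http://mirror.example.local/debian-security bookworm-security main contrib
deb http://mirror.example.local/pve bookworm pve-no-subscription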

Testing the whole procedure, e.g., in a test PVE setup inside a VM, would be recommended though.
 
Hi,

I can share my Terraform code with you.
 
