Proxmox VE 6.0 released!

Hi,

As I understand it, SIMD is used for fletcher4 checksums (maybe I am wrong).

cat /proc/spl/kstat/zfs/fletcher_4_bench | grep fastest
fastest avx512f avx2

You can change the checksum algorithm to sha256 (create a test dataset and see if fio gets better results with sha256).
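For example, a quick way to test this could look roughly like the following (just a sketch; the pool name "rpool", the dataset name, and the fio parameters are arbitrary and need adjusting to your setup):
Code:
# create a throwaway dataset using sha256 checksums (assumes the default pool name "rpool")
zfs create -o checksum=sha256 -o mountpoint=/mnt/fiotest rpool/fiotest
# run a simple sequential write test against it
fio --name=seqwrite --directory=/mnt/fiotest --rw=write --bs=1M --size=4G --ioengine=psync --end_fsync=1
# clean up afterwards
zfs destroy rpool/fiotest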

Good luck!
Thanks! The checksum algorithm doesn't seem to matter, but on my slower Xeon, when you do

cat /proc/spl/kstat/zfs/vdev_raidz_bench and look at the "scalar" (non-SIMD) row:

gen_p (RAID-Z) is 1.13 GB/s, gen_pq (RAID-Z2) is 290 MB/s, and gen_pqr (RAID-Z3) is only 132 MB/s. SIMD makes everything 5-7x faster.
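For reference, the numbers above come from something like this (a sketch; the grep just picks out the header plus the scalar and fastest rows, and the exact header layout may vary between ZFS versions):
Code:
# full per-implementation RAID-Z throughput table
cat /proc/spl/kstat/zfs/vdev_raidz_bench
# only the column header, the scalar (non-SIMD) row, and the fastest row
grep -E '^(implementation|scalar|fastest)' /proc/spl/kstat/zfs/vdev_raidz_bench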
 
401 Unauthorized
Ign:10 https://enterprise.proxmox.com/debian/pve buster/pve-enterprise Translation-en
Hit:11 http://download.proxmox.com/debian/ceph-luminous buster InRelease
Hit:12 http://download.proxmox.com/debian/corosync-3 stretch InRelease
Reading package lists...
W: The repository 'https://enterprise.proxmox.com/debian/pve buster Release' does not have a Release file.
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/pve-enterprise/binary-amd64/Packages 401 Unauthorized
E: Some index files failed to download. They have been ignored, or old ones used instead.
TASK ERROR: command 'apt-get update' failed: exit code 100

I'm getting this error while upgrading Proxmox 5.4 to 6.
 
@Mikepop, please open up a new thread and provide more information about your setup.
 
@Aniket : you need a subscription for the enterprise repository - see https://pve.proxmox.com/wiki/Package_Repositories
Either purchase one or use the no-subscription repository

401 Unauthorized [IP: 94.136.30.185 443]
Hit:7 http://download.proxmox.com/debian/corosync-3 stretch InRelease
Reading package lists...
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 94.136.30.185 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
TASK ERROR: command 'apt-get update' failed: exit code 100

I used the no-subscription repository, but the error is the same; I even updated the repositories.
 
You might have added the no-subscription repo, but the line above is a sure indicator that you still have the enterprise repo (or hostname) configured in one of these files (see the commands below):
* /etc/apt/sources.list
* /etc/apt/sources.list.d/*.list
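To find where it is still referenced, something like this should show the offending line(s):
Code:
# list any remaining references to the enterprise repo in the apt sources
grep -n 'enterprise.proxmox.com' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
# then comment out or remove the matching "deb https://enterprise.proxmox.com/..." entry and run apt-get update again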

Thank You!!
It worked...got the update.
 
The first can be done like this (for UEFI):
Code:
echo -n ' elevator=none' >> /etc/kernel/cmdline
pve-efiboot-tool refresh
Did this, but after reboot the scheduler is still mq-deadline:
Code:
root@j4205:~# for blk in /sys/block/s*; do echo -n "$blk: "; cat "$blk/queue/scheduler"; done
/sys/block/sda: [mq-deadline] none
/sys/block/sdb: [mq-deadline] none
/sys/block/sdc: [mq-deadline] none
/sys/block/sdd: [mq-deadline] none
 
huh, can you post your
Code:
cat /proc/cmdline
output (to see if the cmdline really got picked up)?

But we can go the udev way by doing:
Code:
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/42-io-sched.rules

This normally gets applied automatically by the udev device manager on startup; to apply it once now on the live system you can execute:
Code:
udevadm control --reload-rules && udevadm trigger
 
huh, can you post your
Code:
cat /proc/cmdline
Code:
root@j4205:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.0.15-1-pve\initrd.img-5.0.15-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs elevator=none

But we can go the udev way by doing:
Code:
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/42-io-sched.rules
This normally gets applied automatically by the udev device manager on startup; to apply it once now on the live system you can execute:
Code:
udevadm control --reload-rules && udevadm trigger
This did the trick. Should I remove "elevator=none" from "/etc/kernel/cmdline" and then run pve-efiboot-tool refresh?
 
Should I remove "elevator=none" from "/etc/kernel/cmdline" and then run pve-efiboot-tool refresh?

Yes, in my opinion it would probably be better to do this, so that only one mechanism is used to set the IO scheduler. Otherwise, if the meaning of the option changes (or multiqueue works again), it could have strange side effects, and it is easy to forget that it was done :)
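A sketch of how to do that (assuming "elevator=none" was appended exactly as shown earlier):
Code:
# drop the elevator=none option from the kernel command line again
sed -i 's/ elevator=none//' /etc/kernel/cmdline
# write the updated command line back to the ESPs
pve-efiboot-tool refresh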
 
Just in terms of this: for production clusters running 5.4 where we are trying to ensure no downtime, do you have a tested process for adding a 6.0 host to a 5.4 cluster (with ZFS pools) and 'live-migrating' VMs from a 5.4 host to a 6.0 host within that cluster?

Or is there no way to do this safely except for a backup/ZFS send/shutdown VM/final ZFS incremental send to new host/start VM?

There's live migration of VMs with local disks. In 5.4 it's only available through the CLI; in 6.0 it's now also exposed through the Web GUI.
See:
Code:
qm help migrate

A use of this would look like:
Code:
qm migrate <VMID> <TARGETNODE> --online --with-local-disks --targetstorage <TARGETSTORAGE>

"targetstorage" is optional, it'll default to the one of the source, if available.

With this you should be able to live-migrate with local disks, but I would still test it (you can test it from 5.4 -> 5.4 as a start, as not much changed in the backend regarding this). Create a test VM with similar properties to your real production ones and just try it out.
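For example, a test run could look like this (the VMID, node name, and storage name below are made up; adjust them to your setup):
Code:
# live-migrate test VM 9000 to node pve2, moving its local disks onto the storage "local-zfs"
qm migrate 9000 pve2 --online --with-local-disks --targetstorage local-zfs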
 
This isn't dependent on the hosts being members of the same cluster? I.e., can we do this between any two hosts we designate, whether they are in the same cluster or not?

No, it only works within a cluster, I'm afraid.
 
Testing Proxmox:
unfortunately 6.0-1 (new install) gets stuck on qm importdisk (or a move from the web GUI) to lvm-thin; the import is very slow or kills the server...
On 5.4, qm importdisk works normally.
 
@Aniket : you need a subscription for the enterprise repository - see https://pve.proxmox.com/wiki/Package_Repositories
Either purchase one or use the no-subscription repository

On a test system (we do not have a subscription) we have a pve.list file with:
Code:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

During the upgrade this file was created with these contents:
Code:
cat pve-enterprise.list.dpkg-dist

deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

So we just comment out that line.
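For the record, commenting it out can be done by hand or with something like this (assuming the entry ended up in /etc/apt/sources.list.d/pve-enterprise.list; the path may differ on your system):
Code:
# disable the enterprise repo entry that the upgrade dropped in
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update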


You are correct that one should use the correct sources list from PVE.

However, even with the correct list in place, a wrong list gets created.

It is just a minor thing to watch out for and fix.

PS:
thank you for the support!
 
I mean is it really down and out, i.e., gone forever? Then I'd just remove it before the upgrade.
The Ceph upgrade went through quite smoothly, although I haven't rebooted the nodes so far, and at the moment I'm getting the BlueFS spillover messages mentioned in another thread. I have to look deeper into whether this problem has something to do with my "low spec" Ceph cluster or something else.
Nevertheless, thanks for the instructions and help!
 
Hello,

I did a fresh install of proxmox-ve_6.0-1 with ZFS on root as a RAID1 (two SSDs). After installation, upon boot, I'm met with a "no bootable device available" message. UEFI boot is enabled in the system settings. After reading this thread I see suggestions to run 'pve-efiboot-tool refresh'. How am I supposed to do this if I can't boot into the system? I'm going to attempt to run it from a TTY before completing the install. Any better suggestions?
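For reference, what I plan to try is roughly this from a rescue shell (e.g. booting the Proxmox ISO in debug mode); this is only a sketch and assumes the default pool name "rpool":
Code:
# import the root pool under /mnt and prepare a chroot
zpool import -f -R /mnt rpool
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
# refresh the ESP boot entries from inside the installed system
chroot /mnt pve-efiboot-tool refresh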

Thank you to everyone who put this release together!
 
