Proxmox VE 6.0 released!

Michael Herf

New Member
May 31, 2019
Hi,

As I understand it, SIMD is used for fletcher4 checksums (maybe I am wrong).

cat /proc/spl/kstat/zfs/fletcher_4_bench | grep fastest
fastest avx512f avx2

You can change the checksum algo to sha256 (create a test dataset and see if fio gets better results with sha256).

Good luck!
Thanks! The checksum algorithm doesn't seem to matter, but on my slower Xeon, when you do

cat /proc/spl/kstat/zfs/vdev_raidz_bench and look at the "scalar" (non-SIMD) row:

gen_p (RAID-Z) is 1.13 GB/sec, gen_pq (RAID-Z2) is 290 MB/sec, and gen_pqr (RAID-Z3) is only 132 MB/sec. SIMD makes everything 5-7x faster.
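For anyone who wants to compare SIMD and scalar throughput on their own hardware, a quick way to pull out the relevant rows of both in-kernel benchmarks is something like this (the exact output format may vary between ZFS versions):
Code:
# show the column header, the scalar (non-SIMD) row and the chosen fastest implementation
grep -E 'implementation|scalar|fastest' /proc/spl/kstat/zfs/fletcher_4_bench
grep -E 'implementation|scalar|fastest' /proc/spl/kstat/zfs/vdev_raidz_bench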
 

Aniket

New Member
Jul 23, 2019
401 Unauthorized
Ign:10 https://enterprise.proxmox.com/debian/pve buster/pve-enterprise Translation-en
Hit:11 http://download.proxmox.com/debian/ceph-luminous buster InRelease
Hit:12 http://download.proxmox.com/debian/corosync-3 stretch InRelease
Reading package lists...
W: The repository 'https://enterprise.proxmox.com/debian/pve buster Release' does not have a Release file.
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/pve-enterprise/binary-amd64/Packages 401 Unauthorized
E: Some index files failed to download. They have been ignored, or old ones used instead.
TASK ERROR: command 'apt-get update' failed: exit code 100

I'm getting this error while upgrading Proxmox 5.4 to 6.
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
@Mikepop, please open up a new thread and provide more information about your setup.
 

Aniket

New Member
Jul 23, 2019
@Aniket : you need a subscription for the enterprise repository - see https://pve.proxmox.com/wiki/Package_Repositories
Either purchase one or use the no-subscription repository
401 Unauthorized [IP: 94.136.30.185 443]
Hit:7 http://download.proxmox.com/debian/corosync-3 stretch InRelease
Reading package lists...
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 94.136.30.185 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
TASK ERROR: command 'apt-get update' failed: exit code 100

I used the no-subscription repository, but the error is the same; I even updated the repository.
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
May 2, 2018

Aniket

New Member
Jul 23, 2019
you might have added the no-subscription repo, but the line above is a sure indicator that you still have the enterprise repo (or hostname) configured in one of these files:
* /etc/apt/sources.list
* /etc/apt/sources.list.d/*.list
Thank You!!
It worked...got the update.
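For anyone else who runs into this: a rough way to locate and disable a leftover enterprise entry looks like the following (the pve-enterprise.list file name is only the usual default; use whatever path the grep actually reports):
Code:
# list every apt source that still points at the enterprise repository
grep -rn enterprise.proxmox.com /etc/apt/sources.list /etc/apt/sources.list.d/
# comment out the offending line, then refresh the package lists
sed -i 's|^deb https://enterprise.proxmox.com|#&|' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update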
 

vanes

New Member
Nov 23, 2018
The first can be done like (for UEFI):
Code:
echo -n ' elevator=none' >> /etc/kernel/cmdline
pve-efiboot-tool refresh
Did this, but after reboot the scheduler is still mq-deadline:
Code:
root@j4205:~# for blk in /sys/block/s*; do echo -n "$blk: "; cat "$blk/queue/scheduler"; done
/sys/block/sda: [mq-deadline] none
/sys/block/sdb: [mq-deadline] none
/sys/block/sdc: [mq-deadline] none
/sys/block/sdd: [mq-deadline] none
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
huh, can you post your
Code:
cat /proc/cmdline
output (to see if the cmdline really got picked up)?

But we can go the udev way by doing:
Code:
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/42-io-sched.rules
This normally gets applied automatically by the udev device manager on startup; to apply it once now, live, you can execute:
Code:
udevadm control --reload-rules && udevadm trigger
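After the trigger, a quick check like this (just a compact variant of the loop posted above) should show [none] as the selected scheduler for every disk:
Code:
grep . /sys/block/sd*/queue/scheduler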
 

vanes

New Member
Nov 23, 2018
huh, can you post your
Code:
cat /proc/cmdline
Code:
root@j4205:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.0.15-1-pve\initrd.img-5.0.15-1-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs elevator=none
But we can go the udev way by doing:
Code:
echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"' > /etc/udev/rules.d/42-io-sched.rules
This normally gets applied automatically by the udev device manager on startup; to apply it once now, live, you can execute:
Code:
udevadm control --reload-rules && udevadm trigger
This did the trick. Should I remove "elevator=none" from "/etc/kernel/cmdline" and then run pve-efiboot-tool refresh?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Should I remove "elevator=none" from "/etc/kernel/cmdline" and then run pve-efiboot-tool refresh?
Yes, in my opinion it would probably be better to do this, so that only one way is used to set the IO scheduler. Otherwise, if the meaning changes (or multiqueue works again), it could have strange side effects, and it's easy to forget that it was done :)
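A minimal sketch of that cleanup, assuming "elevator=none" was appended with a leading space exactly as in the earlier command (double-check the file content before and after editing):
Code:
# remove the elevator=none option from the kernel command line again
sed -i 's/ elevator=none//' /etc/kernel/cmdline
# rewrite the EFI boot entries so the change takes effect on the next boot
pve-efiboot-tool refresh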
 
Jul 2, 2019
Hmm, I'd not advise it, but as long as you can guarantee that you do not need to add/delete nodes you should be fine - it can be possible, but it's a bit hacky and you're basically on your own if you need to do cluster changes.
And this really should only be a temporary solution!
Hi,

Just in terms of this - for Production clusters running 5.4 and trying to ensure no downtime, do you have a process that you may have tested in order to add a 6.0 host to a 5.4 cluster (with ZFS pools), and 'live-migrate' VMs from a 5.4 host to a 6.0 host within that cluster?

Or is there no way to do this safely except for a backup/ZFS send/shutdown VM/final ZFS incremental send to new host/start VM?

Kind regards,

Angelo.
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Just in terms of this - for Production clusters running 5.4 and trying to ensure no downtime, do you have a process that you may have tested in order to add a 6.0 host to a 5.4 cluster (with ZFS pools), and 'live-migrate' VMs from a 5.4 host to a 6.0 host within that cluster?

Or is there no way to do this safely except for a backup/ZFS send/shutdown VM/final ZFS incremental send to new host/start VM?
There's live migration of VMs with local disks. In 5.4 it's only available through the CLI; in 6.0 it's now also exposed through the web GUI.
See:
Code:
qm help migrate
A use of this would look like:
Code:
qm migrate <VMID> <TARGETNODE> --online --with-local-disks --targetstorage <TARGETSTORAGE>
"targetstorage" is optional; it defaults to the storage of the source, if available.

With this you should be able to live-migrate with local disks, but I would still test it (you can test it from 5.4 -> 5.4 as a start, as not much changed in the backend regarding this). Create a test VM with properties similar to your real production ones and just try it out.
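A concrete invocation could look like the following (the VM ID, node name, and storage name are placeholders for illustration only):
Code:
# live-migrate VM 100 to node pve6-01, moving its local disks onto the storage 'local-zfs'
qm migrate 100 pve6-01 --online --with-local-disks --targetstorage local-zfs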
 
Jul 2, 2019
There's live migration of VMs with local disks. In 5.4 it's only available through the CLI; in 6.0 it's now also exposed through the web GUI.
See:
Code:
qm help migrate
A use of this would look like:
Code:
qm migrate <VMID> <TARGETNODE> --online --with-local-disks --targetstorage <TARGETSTORAGE>
"targetstorage" is optional; it defaults to the storage of the source, if available.

With this you should be able to live-migrate with local disks, but I would still test it (you can test it from 5.4 -> 5.4 as a start, as not much changed in the backend regarding this). Create a test VM with properties similar to your real production ones and just try it out.
Thanks Thomas.

OK - so if I understand the live migration with --with-local-disks correctly (via the CLI), this isn't dependent on the hosts being members of the same cluster? I.e., we can do this between any two hosts we designate, whether they are in the same cluster or not?

Regards,

Angelo.
 
Jul 2, 2019
Hi Thomas,

Thanks for your reply :)

So, the bottom line is we need to make a new 6.0 host part of a cluster that currently comprises 5.4 hosts, in order to enable us to live-migrate VMs.

I'm assuming that for this to happen, we would need to upgrade Corosync on all 5.4 hosts in the cluster first? Is that correct? Or not?

We do NOT currently have any VMs configured for HA, so our only need to make a new 6.0 host part of the cluster is to:

1. Start deployment of 6.0
2. Live migrate VMs from 5.4 hosts to 6.0 hosts

If we do need to upgrade Corosync on the 5.4 hosts as well, is there any impact on the running VMs/running cluster during this process? And any other risks we need to take into account?

Kind regards,

Angelo.
 

ilianko

New Member
Jul 24, 2019
Testing Proxmox:
unfortunately 6.0-1 (new install) gets stuck on qm importdisk (or a move from the web GUI) to lvm-thin; the import is very slow or kills the server...
On 5.4, qm importdisk works normally.
 

RobFantini

Well-Known Member
May 24, 2012
Boston, Mass
@Aniket : you need a subscription for the enterprise repository - see https://pve.proxmox.com/wiki/Package_Repositories
Either purchase one or use the no-subscription repository
On a test system - we do not have a subscription and have a pve.list file with
Code:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
during the upgrade this file was created with these contents
Code:
cat pve-enterprise.list.dpkg-dist

deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
so we just comment out that line.


you are correct that one should use the correct sources list from pve.

however even with the correct list in place a wrong list gets created.

it is just a minor thing to watch out for and fix.

PS:
thank you for the support!
 

rengiared

Member
Sep 8, 2010
Austria
I mean is it really down and out, i.e., gone forever? Then I'd just remove it before the upgrade.
The Ceph upgrade went through quite smoothly, although I haven't rebooted the nodes so far, and at the moment I'm getting the BlueFS spillover messages mentioned in another thread. I have to look deeper into whether this problem has something to do with my "low spec" Ceph cluster or something else.
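The affected OSDs should show up in the cluster health output; a minimal check, assuming a standard Ceph setup, would be:
Code:
ceph health detail | grep -i spillover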
Nevertheless, thanks for the instructions and help!
 
