Proxmox VE 6.1 released!

okieunix1957

Member
Feb 11, 2020
This is the first time I have ever worked with Proxmox. The installation is very easy and quick. I am a big fan of ZFS for a lot of good reasons; I am a former Sun engineer and have been working with Sun since 1985. I am not surprised you support ZFS, but I am surprised you do not let users decide which filesystem to put on the boot disk. It would be a lot easier and less painful if you at least allowed that during installation. You have certainly done a great job on the GUI and kept the conventions consistent. I understand the decision to use Debian over many other Linux distros, though it would have been nice to see this on Solaris x86, which nowadays has ZFS built in along with a lot of cool network capabilities. This will take a little getting used to, but I like what I see so far.
 

aaron

Proxmox Staff Member
Staff member
Jun 3, 2019

ChristianW

New Member
Feb 13, 2020
It was an MTU issue in my case.
Hi,
we just upgraded to 6.1 last night and have only sporadic monitoring data as well now.
Probably same issue here:

Code:
08:55:30.176550 IP 172.16.200.250.47503 > 172.16.200.6.2003: UDP, bad length 10348 > 1472
08:55:30.242207 IP 172.16.200.250.47871 > 172.16.200.6.2003: UDP, bad length 50789 > 1472
08:55:30.245716 IP 172.16.200.250.47871 > 172.16.200.6.2003: UDP, bad length 50654 > 1472
08:55:30.246359 IP 172.16.200.250.47871 > 172.16.200.6.2003: UDP, bad length 8260 > 1472
08:55:30.580892 IP 172.16.200.250.60298 > 172.16.200.6.2003: UDP, bad length 2793 > 1472
The MTU is standard Ethernet (1500), and both machines are in the same subnet.
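For reference, the 1472-byte limit tcpdump flags follows directly from standard header sizes; a minimal shell sketch (the ping probe is left commented out, since the addresses are only known from the capture above):

```shell
# The 1472-byte limit tcpdump reports is the 1500-byte Ethernet MTU
# minus 20 bytes of IPv4 header minus 8 bytes of UDP header.
MTU=1500
MAX_PAYLOAD=$(( MTU - 20 - 8 ))
echo "max unfragmented UDP payload: $MAX_PAYLOAD bytes"
# To probe the actual path MTU with a non-fragmentable packet of that size
# (addresses taken from the capture above):
#   ping -M do -s 1472 -c 3 172.16.200.6
```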

Any ideas?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
we just upgraded to 6.1 last night and have only sporadic monitoring data as well now.
Probably same issue here:

Actually, this can be normal: it should be the "PMTUd" from kronosnet doing its work, automatically detecting the highest MTU it can use over the whole network path it sends through. It normally starts pretty high and then binary-searches its way to the best MTU. Or do you have any actual issues, or is corosync/knet complaining?
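A minimal sketch of that binary search, with a stubbed `probe()` standing in for kronosnet's real non-fragmenting test packets; the starting bounds and `REAL_PATH_MTU` are illustrative assumptions, not knet's actual values:

```shell
# Sketch of a PMTUD-style binary search. probe() stands in for sending a
# non-fragmenting packet of a given size; REAL_PATH_MTU simulates the network.
REAL_PATH_MTU=1500
probe() {
  # succeeds when a packet of size $1 would fit the path
  [ "$1" -le "$REAL_PATH_MTU" ]
}

lo=576      # minimum IPv4 MTU, assumed to always work
hi=65535    # start high, then narrow down
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if probe "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "detected path MTU: $lo"
```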
 

ChristianW

New Member
Feb 13, 2020
It normally starts pretty high and then binary-searches its way to the best MTU. Or do you have any actual issues, or is corosync/knet complaining?
Sounds good. But right now we are missing monitoring data:

Here before upgrade:

Bildschirmfoto 2020-02-13 um 09.39.03.png

After upgrade:

Bildschirmfoto 2020-02-13 um 09.39.28.png

Any idea how to work around it?

Thank you for this quick reply!
 

POL CRIOLLO

New Member
Feb 4, 2020
Cannot install Proxmox 6.1 (ZFS RAID0, 4 disks) in VMware 15 Pro
My greetings.
I am doing some tests and need to install Proxmox 6.1 in a VMware 15 Pro virtual machine. I have 4 disks and I select ZFS RAID0, but during the partitioning step of the installation the process flashes an error and I must restart.
I am attaching screenshots of the screen.
Regards,
Paul Creole.
 


t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Any idea how to work around it?
I mean, there obviously seems to be a problem, but I'm not too sure it's directly related to Proxmox VE.
Is this fed from the PVE external metric server plugins? Or some other data gathering?

If it's from PVE, then maybe open a new thread with some more details (external metric plugin config) ..
 

okieunix1957

Member
Feb 11, 2020
Is it possible that you missed the first step after accepting the license? The options button next to the target hard disk selection lets you choose the root FS.

See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_using_the_proxmox_ve_installer
Thanks for that tip, I got it installed just fine. BTW, I installed 6.1.3, but when I went back to download I see only 6.1.1 available. What in the world is going on here? You used to get 6.1.3, now you only get 6.1.1. Not very professional.

1581881889667.png

Wish I had taken a screenshot when 6.1.3 was available.

root@server1:/usr/bin# ./pveversion
pve-manager/6.1-3/37248ce6 (running kernel: 5.3.10-1-pve)
root@server1:/usr/bin#
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
BTW, I installed 6.1.3, but when I went back to download I see only 6.1.1 available. What in the world is going on here? You used to get 6.1.3, now you only get 6.1.1. Not very professional.
One is the ISO version and the other is the Proxmox VE manager version; they are two different things.
 

okieunix1957

Member
Feb 11, 2020
This is the first time I have ever worked with Proxmox. The installation is very easy and quick. I am a big fan of ZFS for a lot of good reasons; I am a former Sun engineer and have been working with Sun since 1985. I am not surprised you support ZFS, but I am surprised you do not let users decide which filesystem to put on the boot disk. It would be a lot easier and less painful if you at least allowed that during installation. You have certainly done a great job on the GUI and kept the conventions consistent. I understand the decision to use Debian over many other Linux distros, though it would have been nice to see this on Solaris x86, which nowadays has ZFS built in along with a lot of cool network capabilities. This will take a little getting used to, but I like what I see so far.
I found the other ZFS options thanks to Aaron's feedback. I discovered that if I make a mistake, like a typo in the hostname, I literally have to rebuild: although I made the corrections at the OS level, the GUI still did not reflect them correctly. Also, PLEASE add an UNJOIN command or button in future releases; if removing a node via pvecm delnode is so risky, why is it the only way? Re-installation is NOT doable in a production environment, especially if you have hundreds of VMs to deal with. Not very slick at all. Every cluster I have ever worked on has a way to unjoin a node without being this destructive. You have a good product; just put the word CAUTION in front before starting...

pve-manager/6.1-3/37248ce6 (running kernel: 5.3.10-1-pve)

The above is the version I am running.

I look back at the proxmox site and see this:

View attachment 15034

What happened to 6.1.3? Now you can only get 6.1.1, which I am fixing to migrate from.

View attachment 15033

Wish I had taken a screenshot when 6.1.3 was available.

root@server1:/usr/bin# ./pveversion
pve-manager/6.1-3/37248ce6 (running kernel: 5.3.10-1-pve)
root@server1:/usr/bin#
Question: if I did miss that box, is there a way to fix it without having to rebuild?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
What happen to 6.1.3? Now you can only get 6.1.1
What are you talking about?

Just use the 6.1-1 ISO; it will install a system where pve-manager has version "6.1-3", which is the version you see at the top of the web interface, as already said:

One is the ISO version and the other the Proxmox VE manager version, two different things.
Also, even if you have an older ISO, you can just update after installation to the latest release: https://pve.proxmox.com/wiki/Package_Repositories

I found the other ZFS thanks to Aaron feedback. I discovered that if I make a mistake like a typo in the hostname that I literally have to rebuild, While inside the at OS level I made the corrections the GUI still did not reflect correctly. Also PLEASE put in future releases UNJOIN command or Button, if the CLI is so unstable why introduce it by that pvecm delnode. Re-installation is NOT doable in a production environment especially if you have 100's of vm to deal with. Not very slick at all. Every cluster I have ever been on has a way to unjoin a node without being so destructive as this one. You have a good product, Just put the word CAUTION in front before starting...
Please start by reading the documentation, which would be a good thing to do if one sets this up for a production environment :)
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_separate_node_without_reinstall
https://pve.proxmox.com/pve-docs/pve-admin-guide.html
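For illustration, a dry-run sketch of the separation flow described in the pvecm chapter linked above; the run() stub only echoes each command so nothing destructive happens here, and 'oldnode' is a placeholder node name:

```shell
# Dry-run sketch of "separate a node without reinstalling" (see pvecm docs).
# run() only prints each step; on a real node you would execute them directly.
run() { echo "+ $*"; }

# On the node being removed from the cluster:
run systemctl stop pve-cluster
run systemctl stop corosync
run pmxcfs -l                      # restart the cluster filesystem in local mode
run rm /etc/pve/corosync.conf
run rm -r /etc/corosync/*
run killall pmxcfs
run systemctl start pve-cluster

# On a node that stays in the cluster, drop the departed member:
run pvecm delnode oldnode
```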
 

okieunix1957

Member
Feb 11, 2020
Just use the 6.1-1 ISO; it will install a system where pve-manager has version "6.1-3", which is the version you see at the top of the web interface.

Also, even if you have an older ISO, you can just update after installation to the latest release: https://pve.proxmox.com/wiki/Package_Repositories

Please start by reading the documentation, which would be a good thing to do if one sets this up for a production environment :)
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_separate_node_without_reinstall
https://pve.proxmox.com/pve-docs/pve-admin-guide.html
Thanks for the clarification, but I was running 6.1.3 until I updated this evening.
1582102312834.png
It was showing version 6.1.3; now it's running 6.1-7.

pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-1-pve)

Now that was quite confusing... :) So this must have updated the system after I did the apt-get dist-upgrade.
I'll need to read the man pages to understand the difference between apt-get dist-upgrade and apt-get upgrade.

Could you explain your distribution update cycle?

Phillip
 


t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
Could you explain your distribution update cycle?
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_system_software_updates
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_package_repositories

In general, the Proxmox VE stack is a rolling release. After internal testing, packages go to pvetest; once we see no issues there and get no grave reports, they move to the no-subscription repo. There they stay for a while, and if again no issues pop up, we move them to pve-enterprise, which is the most stable repository.
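As a sketch of pointing apt at the no-subscription repository (the repository line is for Proxmox VE 6.x on Debian Buster, per the Package_Repositories wiki page linked above; the file is written to the current directory here purely for illustration, the real path being /etc/apt/sources.list.d/pve-no-subscription.list):

```shell
# Sketch: enable the pve-no-subscription repository (PVE 6.x / Debian Buster).
# Written to a local file here; the real path lives under /etc/apt/sources.list.d/.
REPO_FILE=./pve-no-subscription.list
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > "$REPO_FILE"
cat "$REPO_FILE"
# Then pull updates (dist-upgrade, not plain upgrade, so new dependencies and
# kernel packages are installed as well):
#   apt-get update && apt-get dist-upgrade
```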
 

Mundo Digital

Member
Feb 19, 2020
Brasil
./spice-example.sh -u Felipe@pve -p 100 pve 192.168.0.5
curl: (7) Failed to connect to 192.168.0.5 port 8006: Connection timed out
 

brb8two

New Member
Dec 17, 2019
Which shutdown policy did you use?
Reading the documentation more carefully, and not knowing where to change it, I had the default "Conditional" policy.
What I couldn't see in the documentation is how to change this policy.
I've now found it in Proxmox under Datacenter --> Options --> HA Settings.
This led me to more documentation, under Appendix C:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#datacenter_configuration_file

Understanding how this affects the cluster, I think it would be more appropriate as a per-"HA Group" shutdown policy rather than a cluster-wide one. I'll raise a feature request.
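For reference, that GUI setting corresponds to a single line in /etc/pve/datacenter.cfg; a sketch, using a local file for illustration (valid shutdown policies per the admin guide include freeze, failover, conditional, and migrate):

```shell
# Sketch: what Datacenter -> Options -> HA Settings writes to /etc/pve/datacenter.cfg.
# A local file is used here so the example is harmless to run.
CFG=./datacenter.cfg
echo "ha: shutdown_policy=migrate" > "$CFG"
cat "$CFG"
```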
 
