Proxmox VE 6.0 beta released!

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
No bug; as stated in the link Stoiko posted, "su" altered its behavior. Quote for convenience:
  • The su command in buster is provided by the util-linux source package, instead of the shadow source package, and no longer alters the PATH variable by default. This means that after doing su, your PATH may not contain directories like /sbin, and many system administration commands will fail. There are several workarounds:
    • Use su - instead; this launches a login shell, which forces PATH to be changed, but also changes everything else including the working directory.

    • Use sudo instead. sudo still runs commands with an altered PATH variable.
      • To get a regular root shell with the correct PATH, you may use sudo -s.

      • To get a login shell as root (equivalent to su -), you may use sudo -i.
    • Put ALWAYS_SET_PATH yes in /etc/login.defs to get an approximation of the old behavior.

    • Put the system administration directories (/sbin, /usr/sbin, /usr/local/sbin) in your regular account's PATH (see EnvironmentVariables for help with this).
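The last workaround above boils down to a one-line addition to the regular account's shell profile; a minimal sketch (the exact ordering of the directories is a matter of taste):

```shell
# Append the system administration directories to a regular
# account's PATH, e.g. at the end of ~/.profile or ~/.bashrc:
export PATH="$PATH:/usr/local/sbin:/usr/sbin:/sbin"
```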
 

vanes

New Member
Nov 23, 2018
Hi guys! I need some help. I installed Proxmox 6 beta with root on a ZFS RAID1 mirror on 2 SATA SSDs (UEFI boot). Everything seems fine, but what is the disk replacement procedure now? Is it the same as in previous versions without UEFI boot? My second question is about TRIM: does TRIM work automatically, or does autotrim need to be set up?
Sorry for my English =)
 
Sep 21, 2016
Hi guys! I need some help. I installed Proxmox 6 beta with root on a ZFS RAID1 mirror on 2 SATA SSDs (UEFI boot). Everything seems fine, but what is the disk replacement procedure now? Is it the same as in previous versions without UEFI boot? My second question is about TRIM: does TRIM work automatically, or does autotrim need to be set up?
Sorry for my English =)
Why would the disk replacement commands change? Those are native ZFS commands; check how your disks are named in your pool and you will know which replace commands to run.

For TRIM, read: https://github.com/zfsonlinux/zfs/releases and search for TRIM.
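To illustrate, with the ZFS 0.8 TRIM support referenced in those release notes, trimming can be run manually or enabled per pool. A sketch using the hypothetical pool name rpool; these commands need root and real SSDs, so treat them as illustrative:

```shell
# One-off manual TRIM of a pool (ZFS 0.8+):
zpool trim rpool

# Or let ZFS trim freed blocks continuously:
zpool set autotrim=on rpool

# Check the property and the TRIM progress:
zpool get autotrim rpool
zpool status -t rpool
```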
 

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
I meant the boot partitions when using ZFS root via UEFI (Proxmox 6 uses systemd-boot instead of GRUB when booting via UEFI on ZFS). How do I make the new disk bootable? If possible, please write some documentation.
Initial documentation patches regarding all of this have already been sent and are expected to be applied in their final form soon.
There will be a small tool which can assist in replacing a device and getting the boot setup to work again for the new one; naturally, this is also possible to do by hand.
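Until that tool and the documentation land, the manual route looks roughly like the following sketch. The device names are placeholders and the partition numbers are assumptions based on how the PVE installer lays out the disks, so double-check against the final docs:

```shell
# Replace a failed mirror member on a UEFI/systemd-boot ZFS root
# (illustrative; /dev/sdX = old disk, /dev/sdY = new disk).

# 1. Copy the partition layout to the new disk and randomize its GUIDs:
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# 2. Replace the ZFS partition in the pool (partition 3 on installer layouts):
zpool replace rpool /dev/sdX3 /dev/sdY3

# 3. Re-create and populate the ESP (partition 2) so the new disk can boot.
#    The exact helper commands for this step are what the announced tool
#    is expected to provide.
```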
 

edwin eefting

New Member
Jan 16, 2017
Yes, finally! I need ZFS TRIM support, since my disks are wearing out too fast now (no TRIM means write amplification).

I'll wait a few weeks and if there are no huge problems I'll upgrade my cluster.

Is it possible to only update corosync cluster-wide, and then upgrade one node to 6.0 while the rest stays on 5.4? This way I can test without too much risk.
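For reference, the upgrade documentation describes exactly that flow: first bring corosync to 3.x on all nodes while everything is still on PVE 5.4, then move nodes to 6.x one at a time. A sketch, assuming the corosync-3 stretch repository mentioned in the upgrade notes (needs root; verify the repository line against the official upgrade guide):

```shell
# On every PVE 5.4 node, add the Corosync 3 repository:
echo "deb http://download.proxmox.com/debian/corosync-3 stretch main" \
    > /etc/apt/sources.list.d/corosync3.list

# Then upgrade only corosync and its dependencies
# (the nodes themselves stay on PVE 5.4 for now):
apt update
apt install corosync
```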
 

fabian

Proxmox Staff Member
Jan 7, 2016
For some reason, after Hetzner inserted a USB stick directly, the installation also came up in UEFI mode. (There seem to be timeouts when using virtual media with UEFI.)

I can now either boot via UEFI, in which case efibootmgr works and /sys/firmware/efi is there, or I can boot without it, in which case it boots via GRUB. Is that intended?
yes
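One quick way to see which mode the current boot actually used, independent of what the firmware menu claimed:

```shell
# /sys/firmware/efi is only present when the kernel was booted via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS (grub)"
fi
```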
 

noname

New Member
May 14, 2014
Initial documentation patches regarding all of this have already been sent and are expected to be applied in their final form soon.
There will be a small tool which can assist in replacing a device and getting the boot setup to work again for the new one; naturally, this is also possible to do by hand.
If I understand correctly, this can be done via the Proxmox interface?
 

jdancer

New Member
May 26, 2019
Can someone else confirm whether /usr/sbin/ceph-disk exists? It shows up in 'apt-file list ceph-osd' but not in /usr/sbin.

I checked my other 2 nodes and there is no ceph-disk there either.

I also did an 'apt-get install --reinstall ceph-osd', but still no ceph-disk.
 

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
Can someone else confirm whether /usr/sbin/ceph-disk exists? It shows up in 'apt-file list ceph-osd' but not in /usr/sbin.

I checked my other 2 nodes and there is no ceph-disk there either.

I also did an 'apt-get install --reinstall ceph-osd', but still no ceph-disk.
No. ceph-disk was deprecated with Ceph Mimic and fully dropped with Nautilus; it was replaced by ceph-volume.
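For anyone porting scripts, the rough ceph-volume equivalents of the common ceph-disk calls look like this (sketch only; the device name is a placeholder and the commands need root plus a real Ceph setup):

```shell
# List OSDs and devices known to ceph-volume
# (replaces 'ceph-disk list'):
ceph-volume lvm list

# Create and activate a new OSD on a raw device
# (replaces the 'ceph-disk prepare' / 'ceph-disk activate' pair):
ceph-volume lvm create --data /dev/sdX
```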
 

jdancer

New Member
May 26, 2019
Well, the fun continues.

Before I did a clean install of the 6.0 beta, I backed up my Ansible playbooks from 5.4 VE, which worked.

So I did an 'apt-get install ansible' and 'apt-get install python-pip', then did a 'pip install proxmoxer'.

When I run the playbook to create a VM, I get the following error:

'authorization on proxmox cluster failed with exception: invalid literal for float(): 6-0.1'

Did API access change from 5.4 to the 6.0 beta?
 

bogo22

Member
Nov 4, 2016
Initial documentation patches regarding all of this have already been sent and are expected to be applied in their final form soon.
There will be a small tool which can assist in replacing a device and getting the boot setup to work again for the new one; naturally, this is also possible to do by hand.
I always followed the entry in the PVE wiki (here); hopefully it will be updated too when PVE 6 stable is released.
 

t.lamprecht

Proxmox Staff Member
Jul 28, 2015
South Tyrol/Italy
'authorization on proxmox cluster failed with exception: invalid literal for float(): 6-0.1'
Yes, the "GET /nodes/{nodename}/version" call changed its return format a bit. From PVE 5:

Code:
# pvesh get /nodes/dev5/version --output-format=json-pretty
{
   "release" : "11",
   "repoid" : "6df3d8d0",
   "version" : "5.4"
}
to PVE 6:
Code:
pvesh get /nodes/dev6/version --output-format=json-pretty
{
   "release" : "6.0",
   "repoid" : "c148050a",
   "version" : "6.0-1"
}
In short, "release" is now the main Proxmox VE release (6.0 or 6.1, for example), "repoid" stayed the same, and "version" is the full current manager version (i.e., the concatenation of "version" and "release" from the old 5.x call).

see https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=b597d23d354665ddea247c3ad54ece1b84921768 for full details.
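A client that still wants the old-style pieces can simply split the new "version" string on the last dash; a minimal shell sketch:

```shell
# New PVE 6 format: "version" is "<release>-<package release>", e.g. "6.0-1".
version="6.0-1"
release="${version%-*}"    # "6.0" - corresponds to "version" in the 5.x call
pkgrel="${version##*-}"    # "1"   - corresponds to "release" in the 5.x call
echo "$release $pkgrel"
```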
 

jdancer

New Member
May 26, 2019
Yes, the "GET /nodes/{nodename}/version" call changed its return format a bit. From PVE 5:

Code:
# pvesh get /nodes/dev5/version --output-format=json-pretty
{
   "release" : "11",
   "repoid" : "6df3d8d0",
   "version" : "5.4"
}
to PVE 6:
Code:
pvesh get /nodes/dev6/version --output-format=json-pretty
{
   "release" : "6.0",
   "repoid" : "c148050a",
   "version" : "6.0-1"
}
In short, "release" is now the main Proxmox VE release (6.0 or 6.1, for example), "repoid" stayed the same, and "version" is the full current manager version (i.e., the concatenation of "version" and "release" from the old 5.x call).

see https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=b597d23d354665ddea247c3ad54ece1b84921768 for full details.
I'm guessing I have to file an issue with the proxmoxer maintainers?
 

martin.g

New Member
Apr 22, 2013
I'm guessing you have to change your API request methods :)
That's what betas are for :)
 
May 18, 2019
Los Angeles, CA USA
apt update:

Code:
Err:5 https://enterprise.proxmox.com/debian/pve buster InRelease                       
  401  Unauthorized [IP: 66.70.154.81 443]
Reading package lists... Done
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease  401  Unauthorized [IP: 66.70.154.81 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details
The node is registered with the key. Fresh Buster install, followed by PVE, according to the instructions. The key has been re-issued and shows as active.

Tried:
Code:
apt-get update -o Debug::Acquire::https=true
'apt-key list' shows the PVE 6 key.
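A 401 from the enterprise repository usually means the key the node presents is not (yet) accepted for this release. Two things worth trying (sketch; the no-subscription fallback is a workaround, not the fix for an active subscription, and needs root):

```shell
# Show the subscription status the node itself reports:
pvesubscription get

# Workaround while the 401 is being sorted out: disable the enterprise
# repo and use the public no-subscription repository instead:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt update
```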
 
