Proxmox VE 6.0 beta released!

Hi,

How can I use ZFS encryption? I don't see any guidelines for that. Thank you, Tiago
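For reference: PVE 6.0 ships ZFS 0.8, which supports native encryption per dataset. A minimal sketch, assuming the default rpool (the dataset name is made up):

Code:
# create an encrypted dataset; you will be prompted for a passphrase
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/encrypted
# after a reboot, load the key and mount the dataset again
zfs load-key rpool/encrypted
zfs mount rpool/encrypted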
 
I cannot compare to what I had before, but PVE 6.0.2 (5.0.15-1-pve) has limited upload speed.

On the host (haven't tested in guests), upload is consistently at ~4.17Mbps while connected to a 1Gbps network (download is 800Mbps+). If I boot a live CD (System Rescue CD 6.0.3) I get both upload and download between 800 and 900 Mbps.

I removed the bridge to eliminate possibilities, and the problem persists. NIC is Intel® I210-AT and only the first one is in use.

Given that other threads suggest this is a problem with speedtest-cli, I tried a public iperf server, and one of them got me to 500-800 Mbps.
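For anyone wanting to reproduce this, a simple way to test both directions with iperf3 (the server address is a placeholder):

Code:
# upload: client sends to server
iperf3 -c iperf.example.org
# download: reverse mode, server sends to client
iperf3 -c iperf.example.org -R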

So this is something related to speedtest-cli with Proxmox, kernel 5.x, and/or Intel I210-AT adapters.

Should I install a newer version of the driver?
 
Are there any working solutions yet for the Buster VM problems?

Host with Debian Buster, PVE 6 installed using the repos; create a container with the Buster template, install mariadb-server, and it won't start.
Same problem with other tools like Dovecot.

In MariaDB it can be fixed by commenting out
  • ProtectSystem
  • PrivateDevices
  • ProtectHome
in /lib/systemd/system/mariadb.service

But this is only a quick & dirty fix.

I found some information via Google, but no really recommended solutions.
 
My setup is the latest PVE 6 beta with root on ZFS (RAID10) and UEFI boot.
Need some help!
The GRUB way alone does not work for the systemd-boot used with ZFS on UEFI installations. We documented a how-to in the "Host System Administration" chapter of our docs project.
Our official docs mirror is not yet updated to 6.0 (as it's not yet released), but you can use the "Documentation" button on your local 6.0 installation to get to it; there should be an "Editing the kernel commandline" section for both GRUB and systemd-boot.

Edit: now the docs are publicly available at: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_edit_kernel_cmdline
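In short, the flow is roughly the following (a sketch of what that section describes; see the linked docs for the authoritative steps):

Code:
# GRUB (legacy boot): edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then
update-grub

# systemd-boot (ZFS on UEFI): edit the one-line /etc/kernel/cmdline, then
pve-efiboot-tool refresh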
 
@0C0C0C

Not really a PVE problem, more related to container security.

You can use systemd overrides in a clean manner (example for Dovecot):

Code:
# cat /etc/systemd/system/dovecot.service.d/PrivateTmp.conf
[Service]
PrivateTmp=false
ProtectSystem=false
PrivateDevices=false
# systemctl daemon-reload
# systemctl restart dovecot
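The same drop-in approach should work for the MariaDB case above, e.g. via systemctl edit (a sketch; adjust to whichever options actually block the service):

Code:
# systemctl edit mariadb      # creates and opens an override.conf
[Service]
ProtectSystem=false
ProtectHome=false
PrivateDevices=false
# systemctl restart mariadb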
 
Is it possible to get newer kernels for new AMD Zen 2 processor support, even if it's an unofficial kernel build? Or can we install the Ubuntu kernel?

If all that is not possible, what modules do I need to enable to build my own kernel?

Our kernel is based on the Ubuntu kernel, currently the 5.0-based kernel used by Ubuntu 19.04 "Disco". Both Ubuntu and we backport HW support patches as needed, so you should normally not have any problems in this regard.

Does something not work for you?
 
Are there any working solutions yet for the Buster VM problems?
Virtual Machines should not have such problems. Do you mean Containers (CTs)?

For those, one can use "unprivileged" CTs with the "nesting" feature enabled, in a safe manner.
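For reference, a sketch of how that looks on the CLI (the CT ID 100 is a placeholder):

Code:
# enable the nesting feature for an existing unprivileged CT, then restart it
pct set 100 --features nesting=1
pct stop 100 && pct start 100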
 
Virtual Machines should not have such problems. Do you mean Containers (CTs)?

For those, one can use "unprivileged" CTs with the "nesting" feature enabled, in a safe manner.

Yeah, sorry, I meant CTs, not VMs. I had missed the nesting feature; now it's working. Thanks ;)
 
Our kernel is based on the Ubuntu kernel, currently the 5.0-based kernel used by Ubuntu 19.04 "Disco". Both Ubuntu and we backport HW support patches as needed, so you should normally not have any problems in this regard.

Does something not work for you?
Zen 2 isn't working with the Ubuntu 19.04 kernel. Many issues: RDRAND, systemd fails, xHCI fails. I am assuming Ubuntu will backport the patches? If not, can I install any Ubuntu kernel, or do I need special modules to be loaded?

I am hitting the 4K NVMe install bug too, the one where the installer attempts to create partitions too large for the drive. There was a dedicated thread on the issue from 2017; it seems it was not patched properly.

I ended up installing Debian 9 over the internet to get everything working with Proxmox 5.4, but I won't be able to upgrade until the 19.04 kernel issues are fixed.
 
Our kernel is based on the Ubuntu kernel, currently the 5.0-based kernel used by Ubuntu 19.04 "Disco". Both Ubuntu and we backport HW support patches as needed, so you should normally not have any problems in this regard.

Does something not work for you?

+1 for a newer kernel. GVT-g for 8th gen (Coffee Lake) processors does not work with 5.0, but support was upstreamed in 5.1.
 
RDRAND, systemd fails
That's not the kernel; that's the CPU and its firmware, where updates from AMD (microcode) and/or the mainboard vendors (via their firmware) are the proposed fix, IIRC.

If not, can I install any Ubuntu kernel, or do I need special modules to be loaded?
As long as you do not use ZFS you're somewhat fine, but you won't get real help here with a self-compiled kernel (as nobody else tests/uses it, at least nobody from us), just FYI.

I am assuming Ubuntu will backport the patches?
Yes, and we can and will too. If there is a specific set of fixes we have not yet included, you can point us to it and we will take a look.

+1 for a newer kernel. GVT-g for 8th gen (Coffee Lake) processors does not work with 5.0, but support was upstreamed in 5.1.

A newer kernel won't come for 6.0, but it's planned for a 6.x release; we'll probably move to the future 20.04 LTS kernel once it is available and stable, which will for sure be newer than 5.1 :)
 
That's not the kernel; that's the CPU and its firmware, where updates from AMD (microcode) and/or the mainboard vendors (via their firmware) are the proposed fix, IIRC.
Yes, but there is a systemd patch to make boot not depend on RDRAND. openSUSE Tumbleweed already has a build with the patch.

Are you familiar with the 4K NVMe issue I am referring to, or should I attempt to install the Proxmox ISO again and document the issue here for you?
 
Yes, but there is a systemd patch to make boot not depend on RDRAND. openSUSE Tumbleweed already has a build with the patch.

Ah, so not a kernel patch ;) That's already fixed in the systemd version we ship; from the "systemd (241-4)" changelog entry:

* random-util: Eat up bad RDRAND values seen on AMD CPUs.
Some AMD CPUs return bogus data via RDRAND after a suspend/resume cycle
while still reporting success via the carry flag.
Filter out invalid data like -1 (and also 0, just to be sure).
(Closes: #921267)

Did you even try booting the ISO on such a system?

Are you familiar with the 4K NVMe issue I am referring to, or should I attempt to install the Proxmox ISO again and document the issue here for you?

No, sorry, I do not know which exact issue you're referring to, but we made quite a few improvements regarding NVMe devices in the 6.0 installer. So, if you can reproduce it with the beta ISO, or with the hopefully soon-coming final one, we'd appreciate a detailed report in its own thread here or over at https://bugzilla.proxmox.com/
 
I have a problem with an installation upgraded from 5.4 with a bunch of LXC containers. The containers are still running from the Stretch template and are privileged. I have Docker installed inside them, with some AppArmor permissions in their /etc/pve/lxc/xyz.conf. This worked great with Proxmox 5.4, but with the 6 beta I have some problems.

The first problem is that the Docker networking breaks periodically. E.g. I have Traefik installed in one container, and I can access Traefik with my browser, but Traefik is unable to reach its backends, which are Docker containers running in the same Docker network. I usually notice this each morning. It does not happen for all LXC containers, although all of them have a similar configuration (same template, also running privileged, same AppArmor settings). After a full reboot of the hypervisor, everything is fine until the next morning. Why a full reboot, you say?

That's why:
The second problem is that the Proxmox web frontend itself is very slow and unresponsive each morning. Stopping an LXC container results in a timeout (although a 'pct stop' via SSH stops the container very reliably and fast); drop-down fields which are filled dynamically, e.g. the template selection in the "Create CT" wizard, are populated only after a long delay or not at all. Logging in to the web frontend takes around 30 seconds, and so on.

Has anyone noticed similar issues? Maybe there is a cronjob which breaks things until the next reboot, but so far I have not found anything.
 
Very funny, pve5to6 declares:

FAIL: Unsupported SSH Cipher configured for root in /root/.ssh/config: 3des
while cat /root/.ssh/config shows:
Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc

The fun thing is that this file was written by PVE 5 itself, as is, at install time :)
It should have been left empty or nonexistent.

While we're talking about SSH: the key for user root created by PVE 5 at install was RSA 2048 (which is OK), but since Ed25519 is supported in Debian 9/10 and is more secure and very fast, it probably should be the new choice.
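For reference, generating such a key is a one-liner (the comment string is arbitrary):

Code:
ssh-keygen -t ed25519 -C root@pve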
 
... drop-down fields which are filled dynamically, e.g. the template selection in the "Create CT" wizard, are populated only after a long delay or not at all ...

I noticed the following log lines in journald while this error occurred:


Jul 17 08:59:39 sennas pmxcfs[1414]: [ipcs] crit: connection from bad user 1000! - rejected
Jul 17 08:59:39 sennas pmxcfs[1414]: [libqb] error: Error in connection setup (/dev/shm/qb-1414-14478-26-4nD9IP/qb): Unknown error -1 (-1)
Jul 17 08:59:39 sennas pmxcfs[1414]: [ipcs] crit: connection from bad user 1000! - rejected
Jul 17 08:59:39 sennas pmxcfs[1414]: [libqb] error: Error in connection setup (/dev/shm/qb-1414-14478-26-XQI0SN/qb): Unknown error -1 (-1)
Jul 17 08:59:39 sennas pmxcfs[1414]: [ipcs] crit: connection from bad user 1000! - rejected
Jul 17 08:59:39 sennas pmxcfs[1414]: [libqb] error: Error in connection setup (/dev/shm/qb-1414-14478-26-CaW52L/qb): Unknown error -1 (-1)
Jul 17 08:59:39 sennas pmxcfs[1414]: [ipcs] crit: connection from bad user 1000! - rejected
Jul 17 08:59:39 sennas pmxcfs[1414]: [libqb] error: Error in connection setup (/dev/shm/qb-1414-14478-26-G0RudK/qb): Unknown error -1 (-1)
 
Very funny, pve5to6 declares:

FAIL: Unsupported SSH Cipher configured for root in /root/.ssh/config: 3des
while cat /root/.ssh/config shows:
Ciphers blowfish-cbc,aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc

The fun thing is that this file was written by PVE 5 itself, as is, at install time :)
It should have been left empty or nonexistent.

While we're talking about SSH: the key for user root created by PVE 5 at install was RSA 2048 (which is OK), but since Ed25519 is supported in Debian 9/10 and is more secure and very fast, it probably should be the new choice.

Hi, same problem here.
How can I solve it? Just delete the 3des entry from the config file?
 
How can I solve it? Just delete the 3des entry from the config file?
Yes, just remove the offending ciphers. If you want, you could replace the list with the current one, updated for Buster:

Code:
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
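You can cross-check that list against what your installed OpenSSH actually supports with ssh -Q cipher.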
 
Well, the fun continues.

Before I did a clean install of the 6.0 beta, I backed up my Ansible playbooks from PVE 5.4, which worked.

So I did an 'apt-get install ansible' and an 'apt-get install python-pip', then a 'pip install proxmoxer'.

When I run the playbook to create a VM, I get the following error:

'authorization on proxmox cluster failed with exception: invalid literal for float(): 6-0.1'

Did API access change from 5.4 to 6.0 beta?
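For what it's worth, the version string the API reports can be checked directly on the host with pvesh, in case the parser trips over it (just a guess on my side):

Code:
pvesh get /version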

I'm not able to reproduce your issue. If you (or others having this problem) would like to help, just comment on https://github.com/swayf/proxmoxer/issues/79. For me, basic authentication and interaction with the cluster works.

Edit: Nevermind, found and fixed it. Have fun ;)
 
