Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

martin

Proxmox Staff Member
The upcoming Proxmox VE 5.1 will get production-ready Ceph Luminous LTS, a new 4.13 Linux kernel, the latest ZFS, and LXC 2.1. Our beta repositories already contain quite stable Ceph 12.2.x packages, and the GUI integration includes cool new features, e.g. creating Ceph storages for VMs and containers with just one click in the "Create Ceph Pool" wizard. (This will also copy all needed authentication keys.)

[Screenshot: create-ceph-pool-and-storage.png]
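For those curious what the one-click wizard automates, here is a rough sketch of the equivalent manual steps on earlier releases (the pool/storage name "ceph-vm" and the monitor addresses are only example values, not part of the announcement):

Code:
# create the Ceph pool
pveceph createpool ceph-vm

# copy the keyring so the PVE storage layer can authenticate against the pool
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-vm.keyring

# register the pool as an RBD storage, either via pvesm or by adding
# a section like the following to /etc/pve/storage.cfg:
# rbd: ceph-vm
#     monhost 10.0.0.1 10.0.0.2 10.0.0.3
#     pool ceph-vm
#     content images,rootdir
#     username admin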

The new Kernel and the new ZFS 0.7.x will be added as soon as possible.

In order to get the latest testing packages, you have to use the following repositories and, as always, run "apt-get update && apt-get dist-upgrade":

Proxmox VE pvetest repository:
Code:
deb http://download.proxmox.com/debian/pve stretch pvetest

Proxmox Ceph test repository:
Code:
deb http://download.proxmox.com/debian/ceph-luminous stretch test
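For example, the two entries can go into their own files under /etc/apt/sources.list.d/ (the file names below are arbitrary):

Code:
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" \
    > /etc/apt/sources.list.d/pvetest.list
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch test" \
    > /etc/apt/sources.list.d/ceph-test.list
apt-get update && apt-get dist-upgrade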

Please test and report back!

The final Proxmox VE 5.1 release is planned for mid/end of October 2017.

__________________
Best regards,

Martin Maurer
Proxmox VE project leader
 
Ryzen 1700x will work just fine on 5.0 :)

I'm having frequent CPU soft lockup issues on an AMD Ryzen 1700X CPU with a Gigabyte AX370 5 PCB (latest BIOS 201709xx) and an NVIDIA 1030 GPU (nouveau driver blacklisted in modprobe).

Updating to the PVE 5.0 pvetest channel didn't help either.
 
Are you certain the nouveau driver is blacklisted correctly? I have helped probably 5 people on this forum and it has solved it for them all.
 

I used your blacklist configuration and lsmod didn't list the nouveau module as loaded. It still got a soft lockup once a VM started.

Code:
# nano /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

# echo options nouveau modeset=0 | tee -a /etc/modprobe.d/nouveau-kms.conf
# update-initramfs -u
# reboot

Reference:
https://forum.proxmox.com/threads/pve-5-0-locks-not-responding.35930/#post-176107
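(For anyone hitting the same issue, a minimal sketch of checks on a stock PVE 5.x / Debian 9 install to confirm the module really stays out and to capture the lockup traces:)

Code:
lsmod | grep -i nouveau                  # should print nothing if the blacklist works
dmesg | grep -i nouveau                  # any sign of nouveau initializing?
journalctl -k | grep -i "soft lockup"    # collect the soft-lockup kernel traces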
 
I'm sorry to hear about that :(

The only difference then is the 1030 vs. the 710, but since nouveau is blacklisted it really shouldn't matter, I think?
 
Yeah, it's a bummer. Let me try again tomorrow with an old AMD Radeon HD 5850 GPU, and on Fedora 26 or Ubuntu 17.04 with the latest kernel. Hehe
 
Oh... are you using nouveau or the nvidia driver with Ubuntu? It is also possible that Ubuntu 16.04 uses a different nouveau version than Debian 9 (Proxmox).

Didn't check it out yet. Just using it as default out of the box. Was running prime95 and a CPU stress test. Now it's running the kill-ryzen test over the weekend. Hehe
 
Will this make it easier (via the GUI?) to create multiple different Ceph pools, such that I could create one all-flash pool for faster storage, one all-HDD pool for DVRs / backups, etc.?

Looking at the ceph screenshot, I noticed the word "Storages". I don't think "storage" is countable, so this may be grammatically incorrect. There is more discussion here (hoping not to start a grammar war!):
https://english.stackexchange.com/q...countable-if-it-means-warehouse-or-repository
 
Ceph Luminous has a new feature called 'device classes', where each OSD gets assigned a device class (it tries to auto-detect and assign HDD vs SSD!). You can then create a CRUSH rule that only picks OSDs of a specific device class (this will be covered in our documentation, and is CLI only for now). When creating a pool (GUI or CLI), you can select the CRUSH rule. So this is not yet fully streamlined, but should be fairly straightforward ;)

Editing CRUSH rules is not easy to map onto the GUI and is a rather advanced feature, so it will probably stay CLI only for now.
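As a sketch of the CLI side under Luminous (the rule name "ssd-only", the pool name "fast-pool" and the PG count are only example values):

Code:
# show OSDs together with their auto-detected device classes
ceph osd tree

# override the class of a single OSD if auto-detection got it wrong
ceph osd crush rm-device-class osd.3
ceph osd crush set-device-class ssd osd.3

# create a replicated CRUSH rule that only picks SSD-class OSDs
ceph osd crush rule create-replicated ssd-only default host ssd

# create a pool that uses this rule
ceph osd pool create fast-pool 128 128 replicated ssd-only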

Regarding "Storages": (probably obviously ;)) not a native speaker, but I think "storages" as in "multiple storage definitions/configurations" is fine, as per the last answer to that SE post?
 

That observation is correct (or, rather, "storages" is incorrect). You would refer to multiple stores, whereas "storage" is neither singular nor plural. In context, they should probably be referred to as datastores. That said, I'm not a grammar teacher and "storages" doesn't bother me :)
 
I get a warning when I enter an lxc command like "pct list":
Code:
The configuration file contains legacy configuration keys.
Please update your configuration file!
I see that LXC recommends using "lxc-update-config", but I get a strange error when I use it:
Code:
sed: preserving permissions for ‘/etc/pve/lxc/sedvpoNOc’: Operation not permitted
sed: preserving permissions for ‘/etc/pve/lxc/sedLUndDf’: Operation not permitted
sed: preserving permissions for ‘/etc/pve/lxc/sed3Eihpg’: Operation not permitted
...
Do you have more information on this?
 
The warning about legacy configuration keys is harmless (and will go away once you reboot, or stop and start, your container). There's no need to run lxc-update-config manually; pve-container will update the config to the new keys.
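The sed messages are most likely harmless as well: /etc/pve is backed by the pmxcfs cluster filesystem, which does not allow changing file ownership/permissions, so sed's attempt to preserve them fails. A minimal sketch of the restart route (100 is just an example container ID):

Code:
# no manual lxc-update-config needed; a stop/start lets pve-container
# rewrite the container config with the new LXC 2.1 keys
pct stop 100
pct start 100
pct config 100    # inspect the updated configuration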
 
