Proxmox VE 5.0 beta1 released!

Windows Server 2016 in nested virtualization mode: after installing the Hyper-V role, it does not start and shows a "Hypervisor Error" blue screen. Server 2008 R2 works fine; 2012 was not tested.
Autostart & bulk actions are also not working, with this error:

Code:
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1431.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1437.
Use of uninitialized value $type in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1444.
unknown VM type
 
I have the same behaviour. This is the error message:

Could you post the complete error message, please? The image is cut off. Feel free to open a new thread. If you boot in debug mode, you should be able to retrieve the whole installer log from /tmp/.
 
Windows Server 2016 in nested virtualization mode: after installing the Hyper-V role, it does not start and shows a "Hypervisor Error" blue screen. Server 2008 R2 works fine; 2012 was not tested.
Autostart & bulk actions are also not working, with this error:

Code:
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1431.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/API2/Nodes.pm line 1437.
Use of uninitialized value $type in concatenation (.) or string at /usr/share/perl5/PVE/API2/Nodes.pm line 1444.
unknown VM type

Already reported, and fixed in git.
 
I just did a clean install of 5.0-beta1 with ZFS, and built a few test containers. So far, so good. Keep up the good work!
 
Can you "stretch" the release roadmap to include BTRFS storage, at least as a technology preview?

Seriously? Last time I checked, BTRFS was far from stable even for home users.

It would be much more interesting to add support for mdadm; it could be useful on small nodes, and it's absolutely stable.
 
Seriously? Last time I checked, BTRFS was far from stable even for home users.

It would be much more interesting to add support for mdadm; it could be useful on small nodes, and it's absolutely stable.

Googling "btrfs performance" or stability returns a lot of "false friend" results, due to obsolete benchmarks on 2.x and 3.x kernels. Things have changed a lot with the newest 4.x kernels.

The best thing is to try it yourself. Proxmox includes a recent version of btrfs-progs (4.6.1-1~bpo8+1).

I have been using it for 2 months in Proxmox 4.4, as directory storage, following the tutorial https://www.internalfx.com/how-to-use-a-btrfs-raid-on-your-proxmox-server/.

So far there have been no stability issues, and performance is close to ZFS. It has successfully passed some hardcore 24h+ runs each of mprime, fio, sysbench, and the Phoronix Test Suite on Proxmox BTRFS. I will share the results when I have more spare time.
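For reference, a btrfs volume mounted somewhere and registered as directory storage ends up as a plain "dir" entry in /etc/pve/storage.cfg. A minimal sketch of what that entry looks like (the storage name and mount point here are invented for illustration, not taken from the tutorial):

```shell
# Hypothetical storage.cfg stanza for a btrfs-backed directory storage.
# On a real node you would create it with something like:
#   pvesm add dir btrfs-store --path /mnt/btrfs --content images,rootdir
STANZA='dir: btrfs-store
        path /mnt/btrfs
        content images,rootdir'
printf '%s\n' "$STANZA"
```

Proxmox then treats it like any other directory storage; snapshots and checksumming come from btrfs underneath, not from PVE itself.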

Regarding stability, there are two relevant enterprise examples:

1. Facebook
https://www.linux.com/news/learn/in...ok-uses-linux-and-btrfs-interview-chris-mason

2. SUSE Linux Enterprise, where BTRFS is the default filesystem

Kudos to the guys at SUSE, because for sure the best way to push its adoption is to make a statement like this: "we are a for-profit company, not a group of Linux geeks, and we trust this filesystem to the point that it's going to become our default one". http://www.virtualtothecore.com/en/2016-btrfs-really-next-filesystem/
 
Most of the time, the Internet says tons of stupid things.
The best place to find out whether BTRFS is stable is the official BTRFS status page: https://btrfs.wiki.kernel.org/index.php/Status

There you can see that:

1) scrub + raid5/6 are unstable
2) defrag is not OK
3) compression is not OK
4) RAID 1 is not OK and could lead to irreversible read-only mode
5) RAID 10 is not OK and could lead to irreversible read-only mode
6) quotas are not OK
7) free-space-tree is not OK and fsck is unable to repair the filesystem

So, if you are OK with using an unstable FS, where the most basic (and important) features for production use (like RAID, compression, fsck, quotas, and data scrubbing) are not supported, you are free to risk your data.

I would really love mdadm support. It would be easy to implement (everything is already there in PVE; only a change to the installer is needed), fast, stable, and flexible. Yes, it lacks bit-rot protection, but not all users need this.
 
I don't understand why you artificially link your mdadm request to Btrfs. It makes no sense to get into such a dispute. I clearly expressed my opinion regarding Btrfs.

There is a connection: if you are fine with an "insecure" system (mdadm doesn't have any bit-rot protection), use mdadm, not btrfs. Btrfs is much too unstable to be used by anyone except home users for whom data loss isn't important.

Regarding mdadm, I always install Proxmox on top of an existing Debian https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie (via https://pve.proxmox.com/wiki/Software_RAID). For 4K SSDs, it's a must to label your disks GPT via rescue mode before installation, for proper partition alignment.
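To make the alignment point concrete, here is a small sketch of the check you would do after partitioning: on a GPT-labeled disk, tools normally start the first partition at sector 2048 (a 1 MiB offset), which lines up with 4K SSD pages. The START value is hard-coded here for illustration; on a real disk it would come from `fdisk -l /dev/sdX`:

```shell
# Verify that a partition's start offset is a multiple of 4096 bytes.
# START is the partition's first sector (hard-coded example value);
# 512 bytes per logical sector is assumed.
START=2048
SECTOR_BYTES=512
OFFSET=$((START * SECTOR_BYTES))
if [ $((OFFSET % 4096)) -eq 0 ]; then ALIGNED=yes; else ALIGNED=no; fi
echo "start offset ${OFFSET} bytes, 4K-aligned: ${ALIGNED}"
```

A misaligned partition (e.g. the old MS-DOS default of sector 63) would fail this check and cost write performance on 4K-sector SSDs.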

The same applies to BTRFS, with the exception that BTRFS could lead to data loss.
I don't think it would be a good idea to add btrfs even as a technology preview (it would stay a "preview" for many, many, many, many years).
 
The same applies to BTRFS, with the exception that BTRFS could lead to data loss.

There's some confusion here... The Debian/Proxmox OS installation is done on mdadm-based ext4/xfs partitions. I prefer this setup also with ZFS, to avoid other non-standard installation issues.

Btrfs is used on different partitions/disks.

I don't think it would be a good idea to add btrfs even as a technology preview (it would stay a "preview" for many, many, many, many years).

Your prediction reminds me of a BTRFS benchmark from 2012:
http://www.ilsistemista.net/index.p...irtual-machines-an-in-depth-look.html?start=6

"However, it seems that BTRFS really need an SSD to show its full potential. With extremely low access time and no fragmentation problems, SSD are the perfect platform to use BTRFS to its full extent without performance drop. Imagine a Linux-KVM server with loads of SSD space and BTRFS managing all them. Mmm... fantastic vision. However, mechanical HD are going to remain the primary storage tech for many coming years."

Dietmar said in 2016 (https://www.linux.com/news/high-ava...ss-continuity-says-dietmar-maurer-proxmox-cto):

"Together with the wide availability of enterprise class NVMe SSD and 10 or 40Gbit networks, high-performance storage clusters are already in place – the upcoming next generation of enterprise SSD storage (Intel 3D Xpoint NVMe SSD) will boost SDS again to an even greater level." By the way, the Intel 3D XPoint NVMe SSD does 500K random 4K writes!

So, in 2016, from an SSD point of view, we are already "many, many, many, many years" on from 2012.

Time will tell. I rest my case.
 
What is the relation between speed and data reliability?
You talk about speed and benchmarks. I'm talking about data reliability.

Yes, btrfs is fast, but YOU WILL LOSE DATA FOR SURE.

It's an unstable filesystem and MUST NOT be used in production.

There is no stable RAID, no fsck, no data scrubbing, no compression, and so on.

Proxmox has native support for ZFS directly from the installer; it makes no sense to use an unstable filesystem when a more robust, stable, and proven filesystem like ZFS is available.
 
I don't have the output, but as I recall from looking at the screen after pressing Alt+F2, it happened when the installer tried to run fdisk to partition the disk. It couldn't activate the new partition table because the disk was busy, and the table would only take effect at the next reboot. On the second try I had removed all the partitions before I started, but the same error occurred.

I had the same issue today. I tried to install 5.0 beta1 over a fresh test install of PVE 4.4, and it failed on the disk partitioning operation (/dev/sda3...).
I saw your post earlier today, so I booted from a live CD (Knoppix), removed the partition table, and, just to be sure, wiped the first gigabyte of the disk:

Code:
dd if=/dev/zero of=/dev/sdx bs=10M count=100

On the next attempt to install 5.0 beta1 it worked fine, so the installer gets confused by the already-installed PVE 4.4 on the disk, or by its partitioning.
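If you want to rehearse that dd invocation safely before pointing it at real hardware, here is a sketch against a scratch image file (file path and sizes are made up for illustration; on a real machine the target is the actual /dev/sdX and everything on it is destroyed):

```shell
# Practice the wipe on a throwaway file instead of a block device.
IMG="$(mktemp)"
truncate -s 100M "$IMG"        # stand-in for the target disk (100 MiB)
# Same pattern as the post (bs=10M); count scaled down to match the file.
# conv=notrunc keeps the file size fixed, like writing to a real device.
dd if=/dev/zero of="$IMG" bs=10M count=10 conv=notrunc status=none
stat -c %s "$IMG"              # size unchanged: 104857600 bytes
```

Zeroing the first chunk of the disk clears the partition table plus any leftover filesystem/ZFS/LVM signatures that the installer might otherwise trip over.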
 
Upgrading Ceph is currently not tested very much. You need to take some steps before upgrading, as described in the Ceph Kraken release notes, then switch to our Ceph repositories for Stretch and upgrade the packages (e.g., by calling "pveceph install"). A fresh install of Luminous should work out of the box, with the caveat that Ceph Luminous seems to have some issues with IPv6 (which we are currently fixing and/or reporting upstream). Please report any issues you encounter.

When you say "switch to our ceph repositories", what does that mean? Comment out the repo in ceph.list and/or replace it with what?
 
When you say "switch to our ceph repositories", what does that mean? Comment out the repo in ceph.list and/or replace it with what?

"pveceph install" on PVE 5.0 will automatically change the /etc/apt/sources.list.d/ceph.list file to point to our Ceph Luminous beta repository. You can of course also do this manually:

Code:
deb http://download.proxmox.com/debian/ceph-luminous stretch main

or (if you want to test early packages):

Code:
deb http://download.proxmox.com/debian/ceph-luminous stretch test
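Scripted, the manual switch amounts to overwriting that one file and refreshing the package index. A sketch that writes to a scratch path so it is safe to run anywhere (on a real PVE 5.0 node the file would be /etc/apt/sources.list.d/ceph.list, followed by `apt update` and the package upgrade):

```shell
# Point the Ceph apt source at the Proxmox Luminous beta repository.
# Using a temp file here for safety; substitute the real path on a node.
CEPH_LIST="$(mktemp)"   # real target: /etc/apt/sources.list.d/ceph.list
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" > "$CEPH_LIST"
cat "$CEPH_LIST"
# On a real node, continue with: apt update && apt full-upgrade
```

The old Kraken/Jewel line in ceph.list is simply replaced; there is no need to keep it commented out.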
 
I have a Core i7-7700 on an ASUS H170-PRO RTL board. After installing Proxmox on Debian with Xfce4, lightdm does not start; reinstalling it does not help, and accordingly the X server does not start in containers either. I understand this is a problem with the new hardware; will it work in version 5?
 
I have a Core i7-7700 on an ASUS H170-PRO RTL board. After installing Proxmox on Debian with Xfce4, lightdm does not start; reinstalling it does not help, and accordingly the X server does not start in containers either. I understand this is a problem with the new hardware; will it work in version 5?

Please don't hijack this thread; you already opened one for your issue.
 