Proxmox VE 5.0 beta2 released!

It is always good to have the latest CPU microcode installed, true. But this seems to me more like a problem with hardware detection or device names. The NVMe disk is first correctly detected as /dev/nvme0n1, but why are there already 3 partitions (p1, p2, p3)?

I'd recommend removing all partitions and reverting the SSD to its factory-default state with a "secure erase" (e.g. via hdparm).
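For completeness, a rough sketch of what that clean-up could look like (the device name is just an example; hdparm's security-erase applies to SATA drives, while the NVMe equivalent would be nvme-cli's format command):
Code:
# wipe partition table and filesystem signatures (example device name)
wipefs -a /dev/nvme0n1
sgdisk --zap-all /dev/nvme0n1
# NVMe "secure erase" (requires the nvme-cli package)
nvme format /dev/nvme0n1 --ses=1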
 
@Rhinox It doesn't matter whether the drive is clean or not; the errors are still there. I just haven't removed the partitions between each try.
 
The only error in the 1st screenshot clearly states it cannot write to /dev/nvme0n1p1, i.e. the 1st partition on nvme0n1. Maybe that orphan partition (created with PVE 4.4) is somehow "not compatible" with the PVE 5.0b2 installer. It is surely not needed, because the PVE 5.0b2 installer tries to wipe out all 3 partitions (but succeeds only for the 2nd and 3rd)...

So I'd say it does have something to do with the partitioning of the NVMe device. Apart from that, there are only some warnings concerning LVM (lvmetad), which was probably not started...
 
Waiting for it too. But let's not be too hasty. Better later and rock-stable...

I hope PVE 5.0 "final" will include the latest Intel microcode update (although Debian cannot include it on the installation medium due to licensing issues). Nearly all Skylake/Kaby Lake CPUs (incl. Xeon E3) have a bug in hyper-threading which can lead to serious problems, e.g. data corruption/loss...
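For reference, a minimal sketch of pulling in the microcode package yourself after installation, assuming PVE 5.x sits on Debian 9 "stretch" and the non-free component is enabled:
Code:
# enable non-free and install the Intel microcode package (reboot afterwards to load it)
echo "deb http://deb.debian.org/debian stretch main contrib non-free" >> /etc/apt/sources.list
apt-get update
apt-get -y install intel-microcode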

True, but as I'm planning a new server, if it's a matter of months (1 or 2 maximum), I can wait for the release.

It's much better to start with a clean system than to start with 4.4 and upgrade to 5.0 after a few weeks, also to avoid downtime.
 

Since I first tried to install 5.0b2 on a virgin drive and it came up with exactly the same errors, I don't think the existing partitioning matters. The errors come up after the installer creates the partitions.
 
Would Proxmox please consider providing a cephfs storage mode in /etc/pve/storage.cfg, to define a dir as a shared resource? We are using CephFS to access ISO images, having simply mounted it as /var/lib/vz.

NB: CephFS performance would not make this ideal, if usable at all, for image or backup storage.
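Purely as an illustration of what we have in mind, a hypothetical storage.cfg entry might look roughly like the following; the 'cephfs' storage type and the options shown here are made up and do not exist yet:
Code:
cephfs: isostore
        monhost 10.254.1.3;10.254.1.4;10.254.1.5
        path /mnt/pve/isostore
        content iso,vztmpl
        username admin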

We have 5 nodes (kvm5a/b/c/d/e) of which the first 3 operate as Ceph monitors (kvm5a/b/c). Herewith the steps to provide replicated file storage on all nodes:

Install Ceph MDS binaries on nodes running as monitors:
Code:
apt-get -y install ceph-mds;

Edit the Ceph configuration file (vi /etc/ceph/ceph.conf) and define the nodes running as monitors as CephFS gateways (active/failover):
Code:
[mds]
     mds data = /var/lib/ceph/mds/$cluster-$id
     keyring = /var/lib/ceph/mds/$cluster-$id/keyring
[mds.kvm5a]
     host = kvm5a
[mds.kvm5b]
     host = kvm5b
[mds.kvm5c]
     host = kvm5c

Run the following on nodes running as monitors (change 'id' to match node's name):
Code:
id='kvm5a';
mkdir -p /var/lib/ceph/mds/ceph-$id;
ceph auth get-or-create mds.$id mds 'allow ' osd 'allow *' mon 'allow rwx' > /var/lib/ceph/mds/ceph-$id/keyring;
chown ceph.ceph /var/lib/ceph/mds -R;
systemctl enable ceph-mds@$id;
systemctl start ceph-mds@$id;
systemctl status ceph-mds@$id;

Create the CephFS pools (we set the number of placement groups to 2 x the cluster's OSD count):
Code:
ceph osd pool create cephfs_data 40;
ceph osd pool create cephfs_metadata 40;
ceph fs new cephfs cephfs_metadata cephfs_data;
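
PS: The 40 above corresponds to 2 x 20 OSDs in our case; you can confirm your own cluster's OSD count (and hence your target PG number) with:
Code:
ceph osd stat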

Confirm that everything is healthy and that you don't now have too many placement groups:
Code:
ceph -s

PS: Expect to see something like 'fsmap e7: 1/1/1 up {0=kvm5b=up:active}, 2 up:standby'

Ceph nodes not running as monitors would need to have the following binaries installed (they are automatically included as dependencies of ceph-mds on the nodes running as monitors):
Code:
apt-get -y install ceph-fuse

Lastly, configure the file system table (vi /etc/fstab) to mount the CephFS volume:
Code:
id=admin,conf=/etc/ceph/ceph.conf /var/lib/vz fuse.ceph defaults,_netdev,noauto,nonempty,x-systemd.requires=ceph.target,x-systemd.automount 0 0

PS: You will probably want to mount this as something like '/mnt' first to test, then copy the content of the existing '/var/lib/vz' folder (i.e. 'rsync -aHvx --delete /var/lib/vz/ /mnt/'), before unmounting the test location and remounting it as /var/lib/vz via CephFS; a rough sketch of that sequence follows below.
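Assuming the fstab entry above is already in place (paths as described; adjust as needed):
Code:
# test-mount CephFS on /mnt first
ceph-fuse --id admin -c /etc/ceph/ceph.conf /mnt;
# copy the existing local content across
rsync -aHvx --delete /var/lib/vz/ /mnt/;
# unmount the test location and mount the real target via the fstab entry above
umount /mnt;
mount /var/lib/vz;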


Note:
Ceph MDS is not active/active, so failure of the currently active MDS master (e.g. a reboot) results in the current master timing out before a new master is elected. No manual commands need to be run to initiate this process, so it therefore behaves like a fully redundant and replicated shared file system.
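If you want to watch a takeover happen, the MDS state can be followed with standard Ceph commands, e.g.:
Code:
# show which MDS is currently active and which are standby
ceph mds stat
# or keep an eye on the overall cluster state while rebooting the active MDS node
watch ceph -s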

For reference purposes, herewith our Proxmox storage configuration file (/etc/pve/storage.cfg):
Code:
dir: local
        path /var/lib/vz
        maxfiles 0
        content vztmpl,backup,iso,rootdir

rbd: virtuals
        monhost 10.254.1.3;10.254.1.4;10.254.1.5
        content images,rootdir
        pool rbd
        krbd 1
        username admin

NB: Having a guest's CD-ROM attached to an ISO image prevents live migration. We're essentially asking for Proxmox 5 to include a way to tell Proxmox that a given directory is shared storage (this should also work when mounting a directory via Samba), so that guests with mapped ISOs can still be live-migrated...
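If a simple flag would be acceptable, what we are hoping for is something along the lines of a 'shared' marker on the dir storage, e.g. (an untested sketch based on our 'local' definition above, not a confirmed or supported feature):
Code:
dir: local
        path /var/lib/vz
        maxfiles 0
        content vztmpl,backup,iso,rootdir
        shared 1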
 
My sincere appreciation to the Proxmox team before anything else. I am posting a couple of issues that may only apply to my setup, but I'm sharing them in case they are known issues:
1. SSH to the Proxmox server using PuTTY is quite slow and times out. This might of course be down to my Ethernet port and switching, but if others have this issue please report it to the team.
2. I tried to install a KVM-based ISO and got errors, but when I tried it on Proxmox 3.4 there was no problem. Let me know if you need more specifics.

Again, my observations may be specific to my scenario; I am just posting to see if others have had these issues.

Regards,
 
Well, we have not integrated anything yet, but the basic pieces are there.


This only works with the KVMGT fork of QEMU at github.com/01org/KVMGT-qemu, not with upstream.

I tested it here a bit, but ran into several problems (journal filling up with errors, hangs/crashes).
I followed the guide below and it mostly worked:
github.com/01org/gvt-linux/wiki/GVTg_Setup_Guide

I just ignored all the compile steps and added the
Code:
   -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/<UUID>,rombar=0
part under 'args' of my VM config.
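For anyone wondering where exactly that goes, a sketch of what the line can look like in the VM's config file; the VM ID 100 and the <UUID> are placeholders to replace with your own:
Code:
# /etc/pve/qemu-server/100.conf (100 and <UUID> are placeholders)
args: -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/<UUID>,rombar=0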

edit: typo

@dcsapak, this is the most recent thread related to this that I could find - apologies for responding to an old thread.

I've been trying to figure out how to make this work on my Proxmox 5.1 system but so far have not had any luck. Like you stated, simply starting the VM with the appropriate vfio-pci,sysfsdev device setting (after doing the other prep work outlined in the GVTg setup guide) causes GPU reset errors and an otherwise unusable VM.

I went as far as compiling QEMU from the igvtg-qemu repo and manually installing it and the related libs into the Proxmox system itself, but it didn't appear to be compatible with the way Proxmox runs QEMU.

Is there any plan to incorporate this functionality into the built-in QEMU/KVM system of Proxmox? I would really like to have a path forward to turning on Intel iGPU hardware acceleration for my Windows-based VM. This seems like the only approach that should work.
 
I test it at regular intervals (mostly on big kernel/QEMU jumps) to see if it is stable. Once it runs reasonably error-free I want to begin integrating it, no promises though ;)
 
