Proxmox VE 5.0 beta2 released!

Does Proxmox VE 5.0 beta 2 finally support Intel's GVT-d or KVMGT?

According to
https://01.org/igvt-g/blogs/wangbo85/2017/intel-gvt-g-kvmgt-public-release-q42016

Note: This is the last GVT-g community release based on the old architecture design. After that, our next community release will switch to a new code repository and a new upstream architecture, which has been upstreamed in kernel 4.10.


“Upstream” version GVT-g:
- First public release date: Feb. 2017
- Last public release date: ---
- First version: kernel 4.10, Xen 4.7, QEMU 2.8.50

How do I use GVT-g in PVE 5.0 beta 2?

I can see kvmgt.ko at

Code:
/lib/modules/4.10.15-1-pve/kernel/drivers/gpu/drm/i915/gvt/kvmgt.ko

How do I add the following switches/options to a Windows guest VM?

Code:
qemu-system-x86_64 -m 2048 -smp 4 -M pc -name VM1 -hda /home/img/win7-64-base.img -enable-kvm -net nic -net tap,script=/etc/qemu-ifup -vgt -vga vgt -machine kernel_irqchip=on -vgt_high_gm_sz 384 -vgt_fence_sz 4 -vgt_low_gm_sz 128 -cpu host -net nic,model=e1000,macaddr=00:DE:EF:12:34:5C -cdrom /home/img/win7-64.iso
-vgt -vga vgt -vgt_high_gm_sz 384 -vgt_fence_sz 4 -vgt_low_gm_sz 128
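
A minimal check that the module can be loaded at all (just a sketch, assuming the stock 4.10.15-1-pve kernel):

Code:
modprobe kvmgt
lsmod | grep kvmgt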
 
Does Proxmox VE 5.0 beta 2 finally support Intel's GVT-d or KVMGT?
well we have not integrated anything yet, but the basic pieces are there

-vgt -vga vgt -vgt_high_gm_sz 384 -vgt_fence_sz 4 -vgt_low_gm_sz 128
this only works with the kvmgt fork of qemu at https://github.com/01org/KVMGT-qemu, not with upstream qemu

i tested it here a bit, but ran into several problems (journal filling up with errors, hangs/crashes).
i followed this guide and it mostly worked:
https://github.com/01org/gvt-linux/wiki/GVTg_Setup_Guide

i just ignored all the compile steps and added the
Code:
   -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/<UUID>,rombar=0
part under the 'args' option of my vm config
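
For anyone else trying this, a rough sketch of the non-compile steps from that guide; the mdev type name i915-GVTg_V5_4 is only an example and depends on the GPU (list mdev_supported_types to see yours), and the VMID path is hypothetical:

Code:
# per the setup guide: enable IOMMU and GVT-g on the kernel command line, then reboot
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

# create a vGPU (mediated device) on the integrated GPU; the type name varies per GPU
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
UUID=$(uuidgen)
echo "$UUID" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create

# then reference it in the VM config (e.g. /etc/pve/qemu-server/<vmid>.conf):
#   args: -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/$UUID,rombar=0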

edit: typo
 
I have tried to create a VM using one of the TurnKey templates and got the following error:

Code:
extracting archive '/var/lib/vz/template/cache/debian-8-turnkey-fileserver_14.1-1_amd64.tar.gz'
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
Total bytes read: 497704960 (475MiB, 101MiB/s)
tar: Exiting with failure status due to previous errors
TASK ERROR: command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf /var/lib/vz/template/cache/debian-8-turnkey-fileserver_14.1-1_amd64.tar.gz --totals --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-xattr-write' -C /var/lib/lxc/101/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

Are these templates only for PVE 4.x, or am I doing something wrong?
 
you're not talking about a VM here, you're talking about an LXC container template.

are you trying to deploy the CT unprivileged? TKL templates have to run privileged.
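
For example, a minimal sketch of deploying it privileged from the CLI (VMID 101, the hostname, and the rootfs storage are assumptions; pct create builds a privileged CT unless --unprivileged 1 is passed):

Code:
pct create 101 local:vztmpl/debian-8-turnkey-fileserver_14.1-1_amd64.tar.gz \
    --hostname fileserver --storage local-lvm
pct start 101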
 
you're not talking about a VM here, you're talking about an LXC container template.

are you trying to deploy the CT unprivileged? TKL templates have to run privileged.
Yes, I meant an LXC container, not a full VM (they're all VMs to me, just with different approaches and limitations).
I have indeed tried to set it up as unprivileged. Can you tell me briefly why they have to be privileged, or, if it's too much to get into in this thread, point me to some online resources where I can read about this?

Thank you.
 
Just installed a new PVE 5.0 beta2 cluster with Ceph storage. Works well.

You may want to add 'caps mgr = "allow *"' to the client.admin keyring in the pveceph deploy scripts to adapt them to Ceph Luminous, as stated in this bug report: tracker.ceph.com/issues/20296. That permits using the 'ceph pg' commands.

And maybe add an optional second network to the '-network' option of the 'pveceph init' command. According to the documentation, it is good practice to separate cluster and public Ceph traffic (docs.ceph.com/docs/master/rados/configuration/network-config-ref/).
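
For context, this is how the single network is passed today (the subnet is only an example); a separate cluster network would be a second such option:

Code:
pveceph init --network 10.10.10.0/24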
 
Just installed a new PVE 5.0 beta2 cluster with Ceph storage. Works well.

You may want to add 'caps mgr = "allow *"' to the client.admin keyring in the pveceph deploy scripts to adapt them to Ceph Luminous, as stated in this bug report: tracker.ceph.com/issues/20296. That permits using the 'ceph pg' commands.

I never ran into this problem?

And maybe add an optional second network to the '-network' option of the 'pveceph init' command. According to the documentation, it is good practice to separate cluster and public Ceph traffic (docs.ceph.com/docs/master/rados/configuration/network-config-ref/).

that might make sense - please file an enhancement request at bugzilla.proxmox.com
 
It is easy to reproduce. Just deploy PVE 5.0 and Ceph server 12.0.3 following the official Proxmox tutorial or video tutorial and run "ceph pg dump". You will receive the error "Error EACCES: access denied".
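
Reproduction sketch (the subnet is only an example; commands as in the official tutorial):

Code:
pveceph install
pveceph init --network 10.10.10.0/24
pveceph createmon
ceph pg dump    # -> Error EACCES: access denied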


did it, Bug 1430

"ceph pg dump" works here (on several 5.x test clusters).
 
"ceph pg dump" works here (on several 5.x test clusters).
After a clean install with "pveceph init" it didn't work, because the admin caps were:
Code:
client.admin
    key: c29tZWtleXNvbWVrZXlzb21la2V5c29tZWtleQo=
    auid: 0
    caps: [mds] allow
    caps: [mon] allow *
    caps: [osd] allow *
But in Ceph 12, "caps: [mgr] allow *" is also needed. Maybe in your installation the caps are already fixed, or Ceph was installed a different way.

For me, 'ceph pg' started to work only after changing the permissions:
Code:
ceph auth caps client.admin osd 'allow *' mon 'allow *' mds 'allow' mgr 'allow *'
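
To verify the change took effect (a quick check, not part of the original fix):

Code:
ceph auth get client.admin    # should now include: caps: [mgr] allow *
ceph pg dump                  # should succeed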

Updated:
BTW, the problem is in /usr/share/perl5/PVE/API2/Ceph.pm (around line 896):
Code:
run_command("ceph-authtool $pve_mon_key_path.tmp -n client.admin --set-uid=0 " .
            "--cap mds 'allow' " .
            "--cap osd 'allow *' " .
            "--cap mon 'allow *'");
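
The fix would presumably be one more --cap flag in that call; the underlying command then becomes (sketch, with $pve_mon_key_path as in the surrounding Perl):

Code:
ceph-authtool $pve_mon_key_path.tmp -n client.admin --set-uid=0 \
    --cap mds 'allow' --cap osd 'allow *' \
    --cap mon 'allow *' --cap mgr 'allow *'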
 
Now that stretch is released, do you have an updated ETA for PVE 5?
Waiting for it too. But let's not be too hasty. Better later and rock-stable...

I hope PVE 5.0 "final" will include the latest Intel microcode update (although Debian cannot include it in the installation medium due to licensing issues). Nearly all Skylake/Kaby Lake CPUs (incl. Xeon E3) have a bug in hyper-threading which can lead to serious problems, e.g. data corruption/loss...
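
On an installed system the microcode can already be pulled in from Debian's non-free repository (a sketch; assumes non-free is enabled in sources.list):

Code:
apt-get update
apt-get install intel-microcode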
 
Hi,

I just bought a new rig for use with PVE.
The specs are:
CPU: AMD Ryzen 7 1700
MB: ASRock AB350M PRO4
RAM: 64GB
DRIVES:
Patriot Hellfire m.2 NVMe 480GB
ADATA Ultimate SU800 512GB
2x Seagate IronWolf 10TB
+ some cheap graphics card

I'm having problems installing PVE 5.0b2 on my NVMe drive (PVE 4.4 installs fine on this drive).
These are the errors the installer shows when trying to install on this drive:
Standard install:
[photo: 20170628_011359.jpg]
Debug install:
[photo: 20170628_010855_HDR.jpg]

I can install PVE 5.0b2 on the ADATA drive, but that's not the OS drive.
Any help?
 
Just guessing: the Ryzen CPU & B350 chipset are still quite new HW, so maybe they're not yet fully supported by Linux...
 
@Rhinox "PVE 4.4 installs fine on this drive" - so it's more probable that it's some kind of 5.0 installer problem.

@wolfgang Thank you for that. Yes, I do have the latest FW installed. I'll try to install that microcode, but that still won't help with the installer.
 
