Note: This is the last GVT-g community release based on the old architecture design. After this, our next community release will switch to a new code repository and the new upstream architecture, which has been upstreamed in kernel 4.10.
"Upstream" version GVT-g:
- First public release date: Feb. 2017
- Last public release date: ---
qemu-system-x86_64 -m 2048 -smp 4 -M pc -name VM1 -hda /home/img/win7-64-base.img -enable-kvm -net nic -net tap,script=/etc/qemu-ifup -vgt -vga vgt -machine kernel_irqchip=on -vgt_high_gm_sz 384 -vgt_fence_sz 4 -vgt_low_gm_sz 128 -cpu host -net nic,model=e1000,macaddr=00:DE:EF:12:34:5C -cdrom /home/img/win7-64.iso
> Does Proxmox VE 5.0 beta 2 finally support Intel's GVT-D or KVMGT?

Well, we have not integrated anything yet, but the basic pieces are there.
> -vgt -vga vgt -vgt_high_gm_sz 384 -vgt_fence_sz 4 -vgt_low_gm_sz 128

This only works with the kvmgt fork of qemu at https://github.com/01org/KVMGT-qemu, not with upstream qemu.
> I thought kernel 4.10 already has kvmgt and qemu 2.9 already has gvt-g code.

It has (it "works" too, see the link in my previous post), but the special "vgt" device in qemu is not upstream (I only found it in the kvmgt fork).
extracting archive '/var/lib/vz/template/cache/debian-8-turnkey-fileserver_14.1-1_amd64.tar.gz'
tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation not permitted
tar: ./var/spool/postfix/dev/random: Cannot mknod: Operation not permitted
Total bytes read: 497704960 (475MiB, 101MiB/s)
tar: Exiting with failure status due to previous errors
TASK ERROR: command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf /var/lib/vz/template/cache/debian-8-turnkey-fileserver_14.1-1_amd64.tar.gz --totals --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-xattr-write' -C /var/lib/lxc/101/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2
> You are not talking about a VM here, you are talking about an LXC container template.

Yes, I meant LXC container, not full VM (they're all VMs to me, just with different approaches and limitations).
Are you trying to deploy the CT unprivileged? TKL templates have to run privileged.
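For reference, a privileged container can be created from that template with `pct`; this is only a sketch, and the VMID, storage name, and hostname below are placeholders, not values from the thread:

```shell
# Create the CT as privileged (--unprivileged 0) so tar is allowed to
# mknod the device nodes in ./var/spool/postfix/dev during extraction.
# VMID 101, storage "local-lvm" and the hostname are example values.
pct create 101 /var/lib/vz/template/cache/debian-8-turnkey-fileserver_14.1-1_amd64.tar.gz \
    --unprivileged 0 \
    --storage local-lvm \
    --hostname fileserver
```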
Just installed a new PVE 5.0 beta2 cluster with Ceph storage. It works well.
You may want to add 'caps mgr = "allow *"' to the client.admin keyring in the pveceph deploy scripts to adapt them to Ceph Luminous, as stated in this bug report: tracker.ceph.com/issues/20296. It permits using the 'ceph pg' commands.
And maybe add an optional second network to the '-network' option of the 'pveceph init' command. According to the documentation, it is good practice to separate cluster and public Ceph traffic (docs.ceph.com/docs/master/rados/configuration/network-config-ref/).
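For illustration, separating the two networks in ceph.conf might look like this; the subnets are placeholders, not the poster's actual configuration:

```ini
[global]
# Client and monitor traffic
public network = 192.168.10.0/24
# OSD replication and heartbeat traffic
cluster network = 192.168.20.0/24
```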
> I never ran into this problem?

It is easy to reproduce. Just deploy PVE 5.0 and Ceph server 12.0.3 following the official Proxmox tutorial or video tutorial and run "ceph pg dump". You will receive the error "Error EACCES: access denied".
> Please file an enhancement request at bugzilla.proxmox.com

Did it, Bug 1430.
> "ceph pg dump" works here (on several 5.x test clusters).

After a clean install with "pveceph init" it didn't work, because the admin caps were:
client.admin
        key: c29tZWtleXNvbWVrZXlzb21la2V5c29tZWtleQo=
        auid: 0
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
ceph auth caps client.admin osd 'allow *' mon 'allow *' mds 'allow' mgr 'allow *'
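After adjusting the caps, the change can be verified by dumping the entity again; this is just a sanity-check step, not something shown in the thread:

```shell
# Print the current caps of client.admin; after the fix the output
# should include a "caps: [mgr] allow *" line.
ceph auth get client.admin
```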
run_command("ceph-authtool $pve_mon_key_path.tmp -n client.admin --set-uid=0 " .
            "--cap mds 'allow' " .
            "--cap osd 'allow *' " .
            "--cap mon 'allow *'");
> As stretch is released, do you have an updated ETA for pve 5?

Waiting for it too. But let's not be too hasty. Better later and rock-stable...
CPU: AMD Ryzen 7 1700
MB: ASRock AB350M PRO4
Patriot Hellfire m.2 NVMe 480GB
ADATA Ultimate SU800 512GB
2x Seagate IronWolf 10TB
+ some cheap graphics card