answering my own question
/etc/default/grub
add intel_iommu=on i915.enable_gvt=1 to GRUB_CMDLINE_LINUX_DEFAULT
add kvm.ignore_msrs=1 in grub, or via /etc/modprobe.d/kvm.conf:
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
if you want to run a GPU benchmark test
you will see...
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3
any guides on how to make kvmgt GVT-g work? the only info I am able to find is at https://youtu.be/0YyMTg9qc74?t=300
my mdev did not show up
I have a Xeon E3-1585 v5 @ 3.50GHz with an Iris Pro 580 GPU, and it supports GVT-g
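for reference, what I am checking (assuming the iGPU sits at 0000:00:02.0):
dmesg | grep -i gvt
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types
if that directory is missing, the gvt/kvmgt modules or the kernel cmdline options did not load correctly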
if you set up a three-node PVE cluster that is capable of live migration between them, then you need to buy licenses on all three nodes with the correct physical core count.
MS licensing is very tricky. if you do not create a cluster and just use pvesync to each node, it is considered a cold backup...
You have to buy licenses for all physical cores for Windows Server 2016 Standard. please read the MS server license; it states pretty clearly that virtualization counts the physical cores of the hypervisor host and not how many cores the VM guest has assigned
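rough arithmetic as an example (hypothetical numbers, double-check the current MS licensing terms yourself): three nodes with 2x 10-core CPUs each = 20 physical cores per node; Standard is sold in 2-core packs with a 16-core-per-host minimum, so that would be 20 core licenses (ten 2-core packs) on every node the VM can live-migrate to, 60 in total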
/sys/kernel/iommu_groups/15/devices/0000:01:00.0
/sys/kernel/iommu_groups/15/devices/0000:01:00.1
/sys/kernel/iommu_groups/16/devices/0000:02:00.0
/sys/kernel/iommu_groups/16/devices/0000:02:00.1
you can't just pass a single port to your VM; you need to pass both ports because they are in the...
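as a sketch of how I'd pass the whole card with both functions (VM ID 100 is just an example):
qm set 100 -hostpci0 01:00,pcie=1
passing 01:00 without a function suffix hands all functions of that device to the VM (pcie=1 needs a q35 machine type)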
that info is not up to date
https://docs.microsoft.com/en-us/windows/deployment/vda-subscription-activation
it requires that Windows 10 VDA connects to an MS AD or Azure AD, unless you are a Qualified Multitenant Hoster, which 99% of us are not
@Alwin
this is only testing in the lab; with the client requirement set to Luminous:
ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it
monclient: hunting for new mon
2018-07-06 15:27:59.835155 7f14bc69d700 0 will not decode message of type 41 version 4 because compat_version 4 > supported...
I can confirm PVE's Luminous client will not work with Ceph 13 Mimic; that's why I went through all the trouble of finding an upgrade path to Ceph Mimic
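if you want to see what the cluster and the connected clients report, these work on a Mimic cluster:
ceph versions
ceph features
ceph features groups the connected clients by release, so any remaining luminous clients show up right away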
wget -q -O- 'https://static.croit.io/keys/release.asc' | apt-key add -
echo 'deb https://static.croit.io/debian-mimic/ stretch main' >>...
I am running PVE 5.2 with an external Ceph Mimic cluster, and you need to upgrade librbd to 13.2.0-1 (Ceph Mimic) or it will fail to connect.
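after adding the croit repo above, something along these lines should pull in the newer librbd (standard stretch package names, verify what apt actually offers):
apt update
apt install librbd1 librados2
dpkg -l | grep -E 'librbd|librados'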
root@pve1:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-3 (running version: 5.2-3/785ba980)
pve-kernel-4.15...
I have an external Ceph storage cluster and its /etc/ceph/ceph.conf has the following options, which set auth to none
[global]
auth client required = none
auth cluster required = none
auth service required = none
.......
my storage configuration example for a...
well... there is always zfs rollback if anything goes wrong...
Currently I have already turned off RDMA over Ceph. It is because when memory is set to unlimited, the monitor runs out of memory and hangs periodically, and the only way to avoid that is a cron job that restarts the service, which in my mind is...
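the cron workaround I mean is roughly this (interval and mon name are just placeholders), e.g. in /etc/cron.d/restart-mon:
0 */6 * * * root systemctl restart ceph-mon@pve1.service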
currently my test lab swapped out all the ixgbe NICs (Intel X520-DA2 and X550) and replaced them with Mellanox NICs for better driver support. Intel had too much packet loss and super high latency.
Your method works great and is much more elegantly done. No error message at all (but uninstalling the whole PVE is a bit nerve-racking).
Device #1:
----------
Device Type: ConnectX3
Part Number: MCX354A-FCB_A2-A5
Description: ConnectX-3 VPI adapter card; dual-port QSFP; FDR IB...
the rbd map command does not work under Ceph over RDMA (which is what PVE uses by default to load a Ceph block device into the kernel);
you need to use the rbd-nbd map command in order to mount Ceph block storage.
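for example (pool and image names are placeholders):
rbd-nbd map rbd/vm-100-disk-1
which gives you a /dev/nbd device instead of the usual /dev/rbd one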
this is really getting into Ceph territory and not something PVE can provide.
I would suggest checking...