I'm facing an issue with Windows and Cloud-Init on templates.
I'm using the API to clone and configure the VM.
Cloud-Init won't initiate if I linked clone, set the config, and then start. I have to go into the GUI and press Generate Image, and then it works.
If I full clone, this doesn't apply. The Cloud-Init...
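For anyone hitting the same thing, here is roughly the shell-side equivalent of that workflow (a sketch only; the VMIDs are made up, and whether the `qm cloudinit update` subcommand exists depends on your PVE version, so treat it as an assumption):

```shell
# Linked-clone a cloud-init template and reconfigure it (hypothetical VMIDs).
qm clone 9000 123 --name ci-test                 # linked clone of template 9000
qm set 123 --ciuser admin --cipassword 'secret' --ipconfig0 ip=dhcp

# On newer PVE releases this regenerates the cloud-init image from the CLI,
# which is what the GUI "Generate Image" button does (assumption: the
# subcommand is available in your version):
qm cloudinit update 123

qm start 123
```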
Here is a little script to build out a CentOS 7.4 minimal image that is cloud-init compatible and allows SSH via text password.
Obviously use at your own risk; this is for testing purposes.
https://raw.githubusercontent.com/philliplakis/centos7-cloudinit/master/cloudinit.sh
#!/bin/bash
echo...
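For reference, the core of such a prep script usually boils down to something like the following (a sketch of common practice, not the exact contents of the linked script):

```shell
#!/bin/bash
# Sketch of the usual steps an image-prep script performs on CentOS 7
# (assumptions based on common practice, not the linked script itself).
set -euo pipefail

yum -y install cloud-init cloud-utils-growpart    # cloud-init + root-disk grow

# Allow password SSH logins for testing (insecure; testing only):
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config

# Let cloud-init run fresh on first boot of each clone; older cloud-init
# versions lack "clean", hence the fallback:
cloud-init clean || rm -rf /var/lib/cloud/*
truncate -s 0 /etc/machine-id                     # regenerate machine-id per clone
```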
You're only showing device:
86:00
On our Red Hat system with a Tesla M10 you don't pass through with the device ID. I pass through with:
2122657
The output of nvidia-smi vgpu should be:
|===============================+================================+============|
| 0 Tesla M10 |...
I'll just add:
I can add to one of the VMs in the list, but multiple are issuing the error. It's strange. Could a template restore from another node cause this?
Anything via the GUI: Hardware / Options / Cloud-Init - CI PASSWORD.
I can add via the shell with no issues, e.g.
qm set 9000 --ide2 local-lvm:cloudinit
But if I try to add a Cloud-Init drive via the GUI I get the above.
All of a sudden I cannot add anything via the GUI.
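If the GUI keeps failing, the same change can also be driven through the REST API from the shell with pvesh, which goes through the same backend as the GUI (a sketch; the node name pve1 is hypothetical, and option syntax varies slightly between PVE releases):

```shell
# Same change as the qm set above, but via the API path the GUI uses;
# replace pve1 with your actual node name.
pvesh set /nodes/pve1/qemu/9000/config -ide2 local-lvm:cloudinit
```

If the GUI errors but this works, that points at the web UI rather than the API layer.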
All nodes but one:
pveversion:
pve-manager/5.3-9/ba817b29 (running kernel: 4.15.18-11-pve)
The other is:
pveversion:
pve-manager/5.3-9/ba817b29 (running kernel: 4.15.17-1-pve)
I'm pretty sure NVIDIA needs to write drivers to support Debian and vGPU; the drivers you are using are for Red Hat KVM.
What's the output of
nvidia-smi vgpu
Yes, I can create 4 VMs, each with a single GPU, running concurrently.
That line appears inside the VM, which is part of the OVMF BIOS.
If we could emulate the MMIO area in QEMU, it could overcome this.
In my VM config I added the following:
machine: q35,max-ram-below-4g=1G
Which in turn gives the following QEMU config:
# qm showcmd 152 --pretty
/usr/bin/kvm \
-id 152 \
-name Base-Window10 \
-chardev 'socket,id=qmp,path=/var/run/qemu-server/152.qmp,server,nowait' \
-mon...
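If `qm set` refuses the extended machine value on your version (an assumption; behaviour differs across PVE releases), the line can also be added straight to the VM's config file:

```shell
# Per-VM config lives in /etc/pve/qemu-server/<vmid>.conf on PVE.
echo 'machine: q35,max-ram-below-4g=1G' >> /etc/pve/qemu-server/152.conf

# Check that the generated QEMU command line picks it up:
qm showcmd 152 --pretty | grep -- '-machine'
```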
I can pass through 4 individual Tesla P100s to 4 VMs, but when combining to pass through any number above 1 I get the following error when running dmesg | grep NVRM.
One of the four works, but with any amount over one the output below is produced.
admin@gpu-host:~$ dmesg | grep NVRM
[ 4.550588] NVRM...
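A few host-side checks worth running before digging further (generic vfio diagnostics, not specific to the P100 or this error):

```shell
# Which driver each NVIDIA device is bound to (should be vfio-pci for
# passthrough; 10de is NVIDIA's PCI vendor ID):
lspci -nnk -d 10de:

# IOMMU group membership; devices sharing a group with a GPU must be
# passed through together, a common cause of "one works, many don't":
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}
    echo "group ${g##*/}: ${d##*/}"
done

dmesg | grep -E 'NVRM|vfio'   # passthrough-related kernel messages
```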
No errors.
The PCIe device doesn't show in Hardware unless its device ID is 83:00.
If I pass multiple through, including 83:00, only 83:00 shows in the OS; if I remove 83:00 and add any of the others, they don't show up at all, and I have access through noVNC via the web console, which doesn't happen...
So I have 4x 1070s in the system, device IDs 3:00, 4:00, 5:00 & 83:00.
I can pass through 83:00 without any issues. This is the device ID I chose to start the configuration with, and since then I cannot pass through any other device ID.
If I remove 83:00 and add any other device ID, it will not work...
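To rule out a config mix-up, it can help to enumerate the cards from lspci and give each its own hostpciN entry. A small sketch (the lspci lines below are hypothetical sample output, and the hostpci0 line is only illustrative):

```shell
# Pull the PCI addresses of NVIDIA devices out of lspci output so each
# can be given its own hostpciN entry in the VM config.
list_nvidia_ids() {
    grep -i 'nvidia' | awk '{print $1}'
}

# Hypothetical lspci output for a box with four GTX 1070s:
sample='03:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
04:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
05:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
83:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]'

printf '%s\n' "$sample" | list_nvidia_ids   # prints the four addresses, one per line

# Each address then maps to its own VM config line, e.g. (illustrative):
#   hostpci0: 03:00,pcie=1,x-vga=on
```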