Not sure why, but I still needed to add this. Removing the extra arg line brings me back to a Code 43 error state. I do end up with duplicate arguments when I "qm showcmd" but it seems that qemu/kvm parses it correctly enough to make a difference for me. As long as it works I didn't question...
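For anyone landing here from a search: the "extra arg line" lives in the VM's config file. A minimal sketch of the idea (the VM ID 100 is a placeholder, and `kvm=off` is the commonly cited Code 43 workaround, not necessarily my exact option string):

```shell
# /etc/pve/qemu-server/100.conf  (hypothetical VM ID)
# Explicit args line; "qm showcmd 100" will show it duplicated,
# but qemu/kvm still parses the command line without complaint:
args: -cpu host,kvm=off
```

`kvm=off` hides the hypervisor signature from the guest, which is what the NVIDIA driver trips over when it throws Code 43.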
After struggling for a bit, I have it working reliably on my i7-3770 / Z77 / GTX 960 / GTX 730 / Intel Integrated GPU system using the pvetest repos. I pass through the 960 to a Windows 10 VM, and the 730 to an Ubuntu 16.04 Gnome VM. The onboard GPU is used for Proxmox. There were a few tips...
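For reference, the host-side pieces of a setup like that look roughly as follows. These are the standard Proxmox passthrough steps; treat it as a sketch, since the exact kernel parameters you need depend on your board:

```shell
# /etc/default/grub -- enable the IOMMU on an Intel platform
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules -- load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# then apply and verify:
#   update-grub && reboot
#   dmesg | grep -e DMAR -e IOMMU
```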
Figured it out.
I launched the migration using the "qm migrate" command and invoked it using the perl debugger. Combined with a breakpoint and some logging statements, this led me to around line 196-197 of /usr/share/perl5/PVE/QemuMigrate.pm...
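For anyone who wants to reproduce the debugging approach, it was roughly this (the VM ID 103, the target node name, and the breakpoint location are placeholders for illustration; set the breakpoint wherever your trace leads):

```shell
# Run the migration under the perl debugger instead of directly.
perl -d /usr/sbin/qm migrate 103 pve-node2 --online

# At the debugger prompt, set a breakpoint inside the migration
# module and continue, e.g.:
#   b PVE::QemuMigrate::phase2
#   c
```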
I have a three node cluster that uses ZFS local storage on each node. For some reason, an error is generated when I try to migrate some VMs to a certain node. The error text looks to be from an uncaught exception that is bubbling up into the output from the migration script. Here's the error...
I know I'm commenting on an old thread, but this thread came up in my Google searches and hopefully this will help someone out in the future...
I had the same mpt bios error as the OP trying to pass through two LSI 9211i (m1015) and one LSI 1068 (br10i) IT mode JBOD controllers. What worked for...
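The usual way to hand controllers like these to vfio-pci at boot looks like this. The device IDs below are what I believe those chips report (1000:0072 for the SAS2008-based 9211/m1015, 1000:0058 for the SAS1068E-based br10i), but verify them on your own hardware first:

```shell
# Find the vendor:device IDs of the LSI controllers
lspci -nn | grep -i LSI

# /etc/modprobe.d/vfio.conf -- claim them for vfio-pci at boot
options vfio-pci ids=1000:0072,1000:0058

# rebuild the initramfs so the option takes effect, then reboot
update-initramfs -u
```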
Just wanted to follow up on my Ceph trial.
Looks like Ceph is an amazingly robust piece of technology, but it doesn't perform as well as I'd hoped for small installations. I can see Ceph doing well with a decent number of drives (OSDs) thrown at it, but in my case of sharing 3 drives across 3...
Wasim/Spirit,
I figured it out. For reference, here's the health of the pool:
root@pve-desktop:/mnt/migrate/images/103# ceph health
HEALTH_OK
root@pve-desktop:/mnt/migrate/images/103# ceph status
cluster c7116adc-4c18-4523-992a-6a85df0985b2
health HEALTH_OK
monmap e5: 3 mons at...
Sorry for the confusion, but I meant Ceph was a new addition to Proxmox.
The disk image is in RAW format, after being converted from QCOW2.
I even created a new test VM of 4 GB in RAW format on a local drive and tried to migrate it to Ceph, but the Proxmox GUI doesn't allow me to. For...
I know that the Ceph storage is relatively new, so I'm probably missing something basic, but I can't seem to figure out how to migrate my existing VMs into a new Ceph RBD storage cluster.
I've set up the Ceph storage as per the wiki, and I can create new VMs in the storage pool successfully...
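In case it helps anyone searching later: when the GUI won't do the move, the disk image can be pushed into the RBD pool from the CLI with qemu-img. A sketch, assuming a pool named `rbd` and the stock local image path; the VM ID, pool, and image names are placeholders:

```shell
# Convert the qcow2 image straight into the Ceph RBD pool.
qemu-img convert -f qcow2 -O raw \
    /var/lib/vz/images/103/vm-103-disk-1.qcow2 \
    rbd:rbd/vm-103-disk-1

# then edit the VM config to point at the new storage and boot-test it
```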
You are absolutely correct. Sorry if I misled anyone, but I was in a rush to test and post my findings before leaving for the cottage, so I think I may have messed up somewhere.
I tested everything again last night by completely wiping out my Proxmox install and starting over, performing all...
I'm away from my computers (at cottage) so I can't answer for sure until tomorrow, but I can tell you that the pcie_acs_override switch was the only thing I changed and the iommu stuff came alive. I plan on testing further tomorrow when I get back, but maybe spirit (or dietmar) added the patch...
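Concretely, the change was one kernel parameter on the host command line. A sketch (the override level, `downstream` here, is the one most commonly needed; your board may want a different value, and the ACS override has real isolation implications, so use it knowingly):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"

# apply, then check that devices landed in separate IOMMU groups:
#   update-grub && reboot
#   find /sys/kernel/iommu_groups/ -type l
```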
Success!!!
Thanks spirit! :D
I ran an apt-get update, apt-get upgrade and apt-get dist-upgrade. Once updated, the vga=none was correct out of the box, so I no longer needed to use a bash script to launch. However, when launching the VM I still got the vfio errors:
kvm: -device...
It breaks the setup. When applied I can no longer start the VMs because of a perl compilation error.
root@pve-desktop:~# dpkg -i qemu-server_3.0-18_amd64.deb
dpkg: warning: downgrading qemu-server from 3.1-30 to 3.0-18
(Reading database ... 36647 files and directories currently installed.)...
No.
As I detailed in a post above, I need that step; without it, for some reason, it doesn't work.
I'm also noticing that the Radeon card does not reset when the VM is stopped. I killed the VM (qm stop) and I still have the Windows Recovery screen painted on the passthrough monitor. I know there is bus...