Re: New Mini-itx Proxmox Build
You settled on using an SSD rather than a USB pen drive, so I'd say you don't need those tricks. Put the Proxmox volumes/partitions on the SSD. For smaller loads, for testing, at home, etc. you can put your VMs (or some of them) on the SSD too.
In the hope that it might be useful for someone else with an Intel DQ67SW motherboard: the solution I ended up with was disabling the IGP in the BIOS. Interestingly enough, I still have a text console on both a physically attached monitor and the remote KVM (Intel AMT). So now everything is working as...
I forgot to provide the VM config; here it is:
bootdisk: virtio0
cores: 2
hostpci0: 01:00.0
ide2: none,media=cdrom
memory: 4096
name: host.name.local
net0: virtio=8A:79:69:48:33:2E,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1
virtio0: local:100/vm-100-disk-1.qcow2,format=qcow2,size=8G
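For reference, a line like the hostpci0 entry above can also be added from the host's shell with the qm tool; this is just a sketch assuming VMID 100 and the same device address as in my config:

# attach PCI device 01:00.0 to VM 100 as hostpci0
qm set 100 -hostpci0 01:00.0
# check the resulting configuration
qm config 100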
I have a problem starting one of my VMs on a server with a passed-through PCIe card. It was working fine previously. I have tried all the workarounds I posted here, without success. This seems to be related to this bug report. Is it possible that the related patch hasn't been incorporated into the PVE repo...
Re: Problem PCIe Passthrough M1015 XPEnology(Nanoboot) boot hang's up.
Just install the kernel with apt-get and select it at boot to test. The one working for me is the -27 or -28 build from the 2.6.32 series.
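Something along these lines should do it (a rough sketch; the exact package name depends on which build you pick, -27 is assumed here):

# install a specific 2.6.32-series PVE kernel build
apt-get install pve-kernel-2.6.32-27-pve
# reboot and select the new kernel entry in the GRUB menu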
Re: Problem PCIe Passthrough M1015 XPEnology(Nanoboot) boot hang's up.
AFAIK there's no need to use the PCIe features of KVM if the device doesn't require some PCIe-specific functionality, although I'm not completely familiar with the technical details.
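To illustrate what I mean (my own sketch, assuming VMID 100 and device 01:00.0; the pcie flag may not be available on older PVE versions):

# plain PCI passthrough with the default i440fx machine type
qm set 100 -hostpci0 01:00.0
# PCIe passthrough requires the q35 machine type; only needed if the guest must see a PCIe device
qm set 100 -machine q35 -hostpci0 01:00.0,pcie=1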
Re: Problem PCIe Passthrough M1015 XPEnology(Nanoboot) boot hang's up.
Just to note: I use the same card as you on 2.6.x without any performance or other issues.
Re: Problem PCIe Passthrough M1015 XPEnology(Nanoboot) boot hang's up.
It's good to know you've resolved it. I'm still using it with an older 2.6.32 kernel, since newer ones can't boot on my system and I can't use 3.10 since I need OpenVZ. So it seems the solution for you was to update the initramfs...
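For anyone hitting the same thing: regenerating the initramfs on the host is usually just this (a sketch, reboot afterwards):

# rebuild the initramfs for all installed kernels
update-initramfs -u -k all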
I'm surprised that no one could answer this. However, from what I see in the Perl code, offline KVM migration only works with shared storage (of any kind) or local directory-based storage.
I'd be glad if someone from the Proxmox team could confirm this.
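Until then, a manual offline move is possible; this is only a rough sketch of my own, assuming local directory storage with the default paths, VMID 150, source node vm1, target node vm2, and the VM shut down:

# copy the disk image to the target node
ssh vm2 mkdir -p /var/lib/vz/images/150
scp /var/lib/vz/images/150/vm-150-disk-1.qcow2 vm2:/var/lib/vz/images/150/
# hand the VM over by moving its config within the clustered /etc/pve tree
mv /etc/pve/nodes/vm1/qemu-server/150.conf /etc/pve/nodes/vm2/qemu-server/150.conf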
Please forgive me if it's documented somewhere but I haven't been able to find it.
I've attempted to migrate KVM VMs from one HN to another via "qm migrate" on the newest PVE 3.3, but this is what I've got:
vm1:~# qm migrate 150 vm2
Oct 11 15:28:02 starting migration of VM 150 to node 'vm2'...