The trick, it seems, is to manually add the USB devices. For example:
args: -device nec-usb-xhci,id=xhci -device usb-host,bus=xhci.0,hostbus=4,hostport=2
Omitting bus and addr seems to let the nec-usb-xhci controller get assigned to PCIe automatically.
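For what it's worth, the equivalent qm invocation would be something along these lines (the VM ID 100 is just an example):
qm set 100 -args '-device nec-usb-xhci,id=xhci -device usb-host,bus=xhci.0,hostbus=4,hostport=2'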
That said, my USB devices still don't like this and I'm getting...
So it seems that the VGA passthrough is the issue. For that to work I need:
hostpci0: 01:00,pcie=1,x-vga=on
Which in turn needs:
machine: q35
And apparently nec-usb-xhci doesn't work with q35, as it ends up attached to a PCI slot instead of a PCIe slot (?)
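Putting the pieces together, the relevant part of the VM config I'm aiming for looks something like this (device addresses are from my setup):
machine: q35
hostpci0: 01:00,pcie=1,x-vga=on
args: -device nec-usb-xhci,id=xhci -device usb-host,bus=xhci.0,hostbus=4,hostport=2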
Any tips on how to fix this? The issue...
I'm trying to pass through a USB3 device as per:
https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines
As I am running PVE 5.0, I understand that all I should need to do is set usb3=yes, i.e. something like this:
usb0: host:4-1,usb3=yes
However this doesn't work. I do see a new xhci...
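For reference, a couple of quick sanity checks (host side to confirm the bus/port numbers, guest side to see whether the controller appears):
lsusb -t               # on the host: the "4-1" above is bus 4, port 1
lspci | grep -i usb    # in the guest: the xhci controller should show up here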
Say I have 5 containers each running an application. For each I have bind mounted a directory:
/mnt/container_data/container1
/mnt/container_data/container2
etc
This is where each application stores all of its data.
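For reference, each of those is a plain bind mount point, set up along these lines (the container ID and in-container path are just examples):
pct set 101 -mp0 /mnt/container_data/container1,mp=/data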
I now create a further container running a backup application. To that...
I've not tried AMD myself, and although my instructions do refer to AMD (e.g. when blacklisting), they are by no means generic across vendors. That said, I don't see why they shouldn't work with some tweaking. Try experimenting with the BIOS settings of the host/guest, and check that your card has a UEFI BIOS.
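For the blacklisting step, the AMD equivalent would be something along these lines in /etc/modprobe.d/blacklist.conf (exact module names depend on the card), followed by update-initramfs -u and a reboot:
# stop the host driver claiming the GPU we want to pass through
blacklist radeon
blacklist amdgpu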
I used Unigine Heaven...
Thanks guletz, that worked a treat.
This has opened up a bit of a rabbit hole for me, I'm afraid! On reading up, it really does seem that I don't want a cluster at all - both nodes should be able to run independently of each other. If I turn two_node on, but leave wait_for_all as true, what...
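For context, the options I'm referring to live in the votequorum section of /etc/pve/corosync.conf, roughly like this:
quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 1
}
As I understand it, setting two_node: 1 implies wait_for_all: 1 by default unless it is explicitly overridden.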
Thanks Denny. I went ahead and bit the bullet and things seem to be going well. I did notice that all my local lvm-thin volumes from node1 now have (inactive) entries on node 2. Why is that? The two nodes will have different storage HW and configuration so this seems a bit superfluous.
OTOH...
Thanks Mitch. That's kind of why I have zero expectation of HA / live migration facilities - I know my HW isn't up to scratch. I have no problem with maintenance downtime either.
My question is whether creating a trivial cluster has any benefits - whether that's the ability to manage the two...
Hmm. I think what I need is some kind of feature matrix comparing:
2 standalone nodes.
2 nodes as a cluster.
2 nodes + a "dummy" third node.
I suspect I'm not quite after the features the third solution will give me (afaik: HA and live migration), so my question is: is the cheap (ie software...
Also, to add: would I be able to shut down one node without affecting the other? As I said, I'm not too interested in the HA side of things (can HA be disabled?)
I have two nodes running at home. They each use local storage for images (backup, HA or fast migration aren't really necessary here), with each guest "manually" accessing a NAS for non-transient data.
I'm looking at running the two nodes in a cluster, if only to manage them from a single web...
Thank you for the prompt reply. I'm not sure if it's worth documenting this?
Also, is there a reason why ext4 was chosen? The same searches mentioned above seemed to indicate that ext3 was the better choice...
According to the documentation I've read and searched for, ext3 is supposedly the default filesystem for Proxmox installations. However, on installing 4.4-1, it seems that it actually uses ext4.
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 10240...
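For reference, that output is from df -T; a more direct check of just the root filesystem type is:
findmnt -n -o FSTYPE /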
Aim:
To host a headless VM with full access to a modern GPU, in order to stream games from it.
Assumptions:
Recent CPU and motherboard that support VT-d and interrupt remapping.
Recent GPU that has a UEFI BIOS.
Instructions:
1) Enable in BIOS: UEFI, VT-d, Multi-monitor mode
This is done via the...
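As a quick sanity check after step 1 (assuming an Intel board), the kernel log should show the IOMMU coming up once the host is back:
dmesg | grep -e DMAR -e IOMMU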