I don't have one & brief googling (which I'm sure you did too) shows the drivers have been available since last year. There was a write performance issue fixed with a firmware update to the Shared PERC8.
I"m curious where an application like this would be attractive. Cluster-in-a-box...
Re: multiple clusters on a network.
It uses different multicast groups based on the cluster name. If you tried creating different clusters with the same name, you'd have a problem. Otherwise it should be fine as long as multicast is allowed on your switches.
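For illustration, here's a minimal corosync.conf sketch (PVE 4 style; the names and addresses below are made up) showing the part that keeps two clusters apart:

totem {
    version: 2
    cluster_name: cluster-a   # must be unique among clusters on this LAN
    config_version: 1
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        # when mcastaddr is omitted, corosync derives the multicast group
        # from cluster_name; set it explicitly if you want it pinned down:
        mcastaddr: 239.192.10.1
        mcastport: 5405
    }
}

A second cluster would use a different cluster_name (and ideally its own mcastaddr/mcastport pair).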
Can confirm that simply adding "driver=vfio" fixed this for me. I think I tried to add too many options (machine: q35, pcie=1) the first time around. Not sure what the q35 machine type is, but my PCI card is definitely PCIe. Should I try to get the pcie=1 option working? Any benefits?
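For reference, the two variants look roughly like this in the VM config (VMID and PCI address made up; syntax per the hostpci format of that era):

# /etc/pve/qemu-server/100.conf
# plain PCI passthrough -- the working setup above:
hostpci0: 01:00.0,driver=vfio

# PCIe variant -- q35 emulates a newer Intel chipset with a native PCIe
# bus, which is why pcie=1 needs it; the default i440fx is PCI-only:
machine: q35
hostpci0: 01:00.0,driver=vfio,pcie=1

The practical benefit of pcie=1 is that the guest sees the card on a PCIe bus instead of a legacy PCI bus, which some drivers care about; if the card works fine without it, there's not much to gain.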
I had passthrough working on PVE4 Beta1 thanks to adding pci_stub to /etc/modules per http://forum.proxmox.com/threads/22850-Issue-Can-t-PCI-Passthrough-on-Proxmox-4-0-Beta1
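Concretely, the workaround from that thread boils down to (sketch, run as root):

# echo pci_stub >> /etc/modules
# reboot

/etc/modules is read at boot, so the module is loaded before any VMs start.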
Now I'm running into a different issue with Beta2 after I upgraded. As far as I can tell, everything is the same...
We are in the process of building a new cluster that's not yet in production. We're trying to hold off on going live with it until v4 is released since it will contain several features we will want to use. Any idea when that may be? Ballpark estimates would be great.
There are probably issues with running Debian 8 in a CT. CTs use Proxmox's kernel, which is a 2.6-series kernel. Debian 8 switched to systemd, which relies on a 3.x kernel. I've read it'll be possible by switching to upstart, but it might not be easy.
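If anyone wants to try the upstart route, the swap inside the container would look something like this (a hedged sketch; package name per Debian jessie, untested in a CT):

# apt-get update
# apt-get install upstart

apt should offer to remove systemd-sysv as part of the install; whether the resulting init actually behaves under the 2.6.32 OpenVZ kernel is exactly the open question.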
In my homelab where I have a nested PVE cluster, migrating a container from one node to the other briefly interrupts the container's network. I would guess maybe 15 pings' worth. I wonder how much of the delay is due to the slowness of my setup (which is to be expected) & how much the...
This has been a common question, which I know because I've searched for a solution myself without much luck. Proxmox clustering requires SSH on port 22 to work & doesn't support changing it, as far as I'm aware. Certain circumstances dictate this might be an issue for a...
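One workaround I've seen suggested (a sketch only; I haven't verified it against clustering/migration): let sshd listen on both ports, keeping 22 for intra-cluster traffic, via /etc/ssh/sshd_config:

Port 22
Port 2222

then firewall port 22 off from everything except the cluster subnet, so only the custom port is exposed externally.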
This helped me. I was able to download the deb from https://packages.debian.org/wheezy-backports/amd64/qemu-utils/download, extract it to some arbitrary location & run the qemu-img binary from there:
# wget...
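For completeness, the whole sequence is roughly this (the .deb filename is a placeholder; use the real one from the download page above):

# wget http://ftp.debian.org/debian/pool/main/q/qemu/qemu-utils_<version>_amd64.deb
# dpkg-deb -x qemu-utils_<version>_amd64.deb /tmp/qemu-utils
# /tmp/qemu-utils/usr/bin/qemu-img --version

dpkg-deb -x just unpacks the package contents into a directory, so nothing gets installed system-wide.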
Sorry if this has been posted & answered before. I did search & couldn't find this issue exactly.
Using Oracle Java v7u55 (and a few versions prior) on OS X, I keep having the following issues:
- Sometimes keys will get stuck in repeat. Could be a letter or an arrow key.
- Sending certain...
I think any type of KVM passthrough is going to break Illumos guests. I've tried multiple kinds of Solaris guests going back to OpenSolaris, including NexentaStor, & all had the same result.
That solution is in the back of my mind, but I'd rather keep trying to fix this for now. I would use it for a lot more than just storage for the host... AFP/SMB shares to other computers in the house, for example. Not sure if I want to extend Proxmox's role beyond a hypervisor.
Boot the host itself (the Proxmox host) to where you can see the POST & BIOS screens. You should get a screen that shows the LSI controller; from there you can hit Ctrl+C to go into its configuration. Disabling its boot option is in the advanced settings, I believe... it's there, you just...
I'm having a similar problem trying to pass through my LSI card to an Illumos VM. I got those errors on boot of the VM as well, but I'm not trying to boot from the HBA, so I disabled booting in it & the errors went away.
My problems with Illumos not liking the passthrough LSI card still persist...