I am pretty sure the M1015s need an x8 slot; I have never tried them in an x4 slot. Interesting that you got some response from it.
I also don't think that they are UEFI compatible if that is worth anything related to this.
It is odd that PVE uses the mpt3sas module but OMV shows it using the mpt2sas...
The mvsas driver is for Marvell controllers; my Supermicro ones have Marvell controllers. Sorry if that was confusing, posted that very late.
From what I understand of the lspci -k output, your M1015 is using the mpt3sas driver, so that is the one you should blacklist. But I would blacklist...
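For reference, blacklisting a driver on the Proxmox host is usually done with a modprobe.d file plus an initramfs rebuild. A minimal sketch, assuming the mpt3sas driver and a file name of my choosing:

```shell
# Stop the host from claiming the HBA so it stays free for passthrough
# (mpt3sas is taken from the lspci -k output above; the file name is arbitrary)
echo "blacklist mpt3sas" >> /etc/modprobe.d/hba-blacklist.conf

# Rebuild the initramfs so the blacklist takes effect at the next boot
update-initramfs -u -k all
```

After a reboot, lspci -k should show no "Kernel driver in use" line for the HBA.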
@mkyb14 so it seems my notes weren't that good, probably because I was doing it as a test while I waited for PVE 5.0 to be officially released.
The system (old FreeNAS server) that will run my OMV VM is:
CPU - Xeon 1230v2
Motherboard - Supermicro X9SCM-F-O
Memory - 32GB Crucial 1600MHz ECC
2 x...
@mkyb14
I've gotten two Supermicro HBAs to pass through to OMV; I didn't have a lot of luck with the IBM (M1015 and M1115) or HighPoint HBAs I have.
After a lot of fussing with the VM config settings, simply swapping HBAs did the trick for me. FYI, this is running on an older Supermicro...
sdinet,
Not sure what that has to do with having an OSSIM Virtual Appliance for Proxmox.
Having an OSSIM Virtual Appliance for Proxmox would make it very easy to install an OSSIM VM and then use OSSEC across all VMs in a cluster.
I second the request for OSSIM.
I have had nothing but problems recently with the ISO from AlienVault, usually a kernel panic or other nonsense during install.
My apologies if this is covered in another thread somewhere, I did search.
I am researching a way of easily connecting server VMs on one node to client VMs on another when using Open vSwitch. I was able to find instructions for connecting two OVS bridges using patch ports, but this seemed for...
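For anyone following along, the patch-port approach I found looks roughly like this. A minimal sketch, assuming two existing OVS bridges whose names (vmbr1, vmbr2) are placeholders:

```shell
# Link two OVS bridges on the SAME host with a patch-port pair
# (bridge names vmbr1 and vmbr2 are placeholders)
ovs-vsctl add-port vmbr1 patch-to-vmbr2 -- \
    set interface patch-to-vmbr2 type=patch options:peer=patch-to-vmbr1
ovs-vsctl add-port vmbr2 patch-to-vmbr1 -- \
    set interface patch-to-vmbr1 type=patch options:peer=patch-to-vmbr2
```

Note that patch ports only join bridges within one host; connecting VMs across two nodes generally needs a tunnel port (e.g. GRE or VXLAN) or a shared physical uplink instead, which may be why the instructions seemed off for this use case.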
maxprox,
I am still testing a similar setup using dm-cache.
I found the first link you posted, but also used
http://blog-vpodzime.rhcloud.com/?p=45
and it helped me a bit to understand it all.
Connected to an IBM M5015 RAID controller (onboard cache disabled):
2x HGST Travelstar...
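For reference, setting up dm-cache through LVM (lvmcache) boils down to a few commands. A minimal sketch; the volume group name (vg0), logical volume name (data), sizes, and device paths are all placeholders:

```shell
# Assumes /dev/sda (HDD) and /dev/sdb (SSD) are both PVs in volume group vg0,
# and vg0/data is the slow LV to be cached. All names/sizes are placeholders.
lvcreate -L 10G  -n cache0     vg0 /dev/sdb   # cache data LV on the SSD
lvcreate -L 100M -n cache0meta vg0 /dev/sdb   # cache metadata LV on the SSD

# Combine the two into a cache pool, then attach it to the origin LV
lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
lvconvert --type cache --cachepool vg0/cache0 vg0/data
```

`lvs -a` afterwards should show vg0/data with a cache segment type.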
ZFS was designed to cover the benefits you discussed and more.
It isn't for everyone in every situation, but in some situations it can outperform and better protect data compared to traditional RAID and filesystems. Like many things, there are pros and cons; it is really up to you if you...
Silly question but is your host connected to the internet?
If it is not, or there is an issue with connecting to the internet, then 'apt-get' will not work at all.
From the console, can you ping 8.8.8.8? (Google DNS)
Have you attempted to just run an "update"?
apt-get update
If you get any...
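Putting the checks above together, a quick sequence to rule out networking before blaming apt (the hostname used for the DNS test is just an example):

```shell
# Raw IP reachability (Google DNS) -- if this fails, it's a network problem
ping -c 3 8.8.8.8

# Name resolution -- if the IP ping works but this fails, DNS is the problem
ping -c 3 google.com

# Only then refresh the package lists; errors here point at repo config
apt-get update
```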
I believe I am at the point where I am done with basic testing/"tinkering" with Proxmox, and I really like it.
All my hardware works well and is fast, clustering works well, PCI-e and USB passthrough is awesome, and OVS is working OK.
I basically just want a 3 node cluster for management purposes, each...
I am still using pve-kernel 4.1.3 without passthrough issues, at least I haven't noticed any.
With this kernel I think you still have to specify "driver=vfio" in the guest.conf though.
Not sure if the grouping is messed up or not, but I am passing through 2 USB devices, and an SSD using onboard...
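For anyone looking for the "driver=vfio" bit mentioned above, it goes on the hostpci line of the guest's config. A sketch only; the VMID and PCI address are placeholders you would replace with your own:

```shell
# /etc/pve/qemu-server/<vmid>.conf snippet (PCI address 01:00.0 is a placeholder)
# On this older kernel (4.1.3) the driver reportedly has to be forced to vfio:
hostpci0: 01:00.0,driver=vfio
```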
Update:
I have reinstalled both nodes with the Proxmox VE 4.1 released 11 December 2015.
The pvecm create proxcluster -bindnet0_addr 10.10.10.251 -ring0_addr one-corosync command works now, but there seems to be a problem creating the cluster and corosync fails to start.
When I get some...
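In case it helps with debugging, these are the standard places I would look when corosync fails to start after pvecm create (nothing here is specific to my setup):

```shell
# Service state and recent failures
systemctl status corosync.service

# Full corosync log for the current boot
journalctl -u corosync -b

# Verify the cluster config pvecm generated (bindnetaddr, ring0_addr, node names)
cat /etc/corosync/corosync.conf
```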
Dietmar, thank you for your reply.
Unfortunately, I cannot use apt-get as explained in my first post. I must run the hosts in the cluster offline.
I can and have used dpkg to update a few packages and tested different kernels by downloading them straight from the repository.
The error...
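For others stuck offline, the dpkg flow I used looks roughly like this. The package filename below is a placeholder; fetch the real .deb files on a connected machine and copy them to the offline host first:

```shell
# Install a downloaded kernel package locally (filename is a placeholder)
dpkg -i pve-kernel-*.deb

# If dpkg reports unmet dependencies, download those .deb files too,
# install them the same way with dpkg -i, then re-run the command above
```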
Hello and thank you in advance for any assistance.
I am attempting to set up a 2 node cluster in an offline test environment and seem to be running into an issue with the command to create the cluster in the Separate Cluster Network Wiki page.
I am using two freshly installed nodes, running...
I can confirm the same issues with PCI passthrough; I noticed the bug when VMs hung with 8GB of RAM.
I downgraded and everything is working well so far; I believe I switched to kernel 4.1.3-7. No slowness, and passthrough for USB and a Radeon 270X is working like a champ.
Personally because the...