I've been battling with the same errors, trying to pass through my onboard LSI SAS2116 non-RAID HBA with 6 x IronWolf Pro 6TB drives connected.
I've read the Proxmox VE passthrough docs and all possible stuff on the forum; no luck.
Until ...
I've found the following page ...
Please try 'ifconfig eth0 up' first.
If that doesn't work...
Send the output from the following commands:
lspci | grep Ethernet
dmesg | grep eth
Also, make sure that all Ethernet devices are enabled in the system's BIOS.
Should the system only report eth2 & eth3, these can be renamed to eth0...
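A minimal sketch of such a rename, assuming the system uses udev persistent-net rules (the MAC addresses below are placeholders; use the real ones from 'ip link'). Edit /etc/udev/rules.d/70-persistent-net.rules :

# match each NIC by its MAC address and pin the name you want
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:56", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:57", NAME="eth1"

Reboot afterwards so the new names take effect.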
Not sure which hardware RAID card you are using, but if the 'Windows' software uses LSI's 'MegaRAID Storage Manager' (MSM) software, that runs perfectly on Linux as well ...
I've installed it on the Proxmox VE host itself, using alien to convert the RPMs to DEB packages.
Make sure the...
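Roughly the conversion steps, as a sketch (the .rpm filename is an example; use the ones from the LSI download):

apt-get install alien
# convert the MSM rpm to a deb, keeping the package scripts
alien --to-deb --scripts MegaRAID_Storage_Manager.rpm
dpkg -i megaraid-storage-manager_*.deb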
I've never tried PCI passthrough, but I'm using an LSI 9260 (2208 chip) with the Proxmox VE kernel's LSI driver handling the card / drives.
The firmware is in standard LSI RAID mode.
Then I export a RAID device to the KVM guest as a 'scsi' device (example: scsi0: /dev/sdc ).
This works quite well...
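For reference, a sketch of how that export looks (VMID 101 is just an example):

qm set 101 -scsi0 /dev/sdc

which ends up in /etc/pve/qemu-server/101.conf as 'scsi0: /dev/sdc'.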
Hi Antonio,
Checking your config for the VM, it looks like you've configured 120 vCPUs?
AFAIK the number of cores is multiplied by the number of sockets you configure; this may perhaps be the cause of some scheduling issue?
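For example (hypothetical VMID), 2 sockets x 4 cores gives the guest 2 x 4 = 8 vCPUs:

qm set 100 -sockets 2 -cores 4

So 120 vCPUs would mean sockets x cores = 120, far more than most hosts physically have.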
All,
I'm currently running 14 local drives in my Proxmox VE system; they are running off a 16-port HighPoint RR2740.
While this is 'fakeRAID' in the strictest sense, it is RAID 0/1/10/5/50/6/60 using the HighPoint driver and web GUI.
It's what HighPoint calls 'hardware assisted RAID'.
Problem is...
I'm currently running the SuperMicro A1SAM-2750F motherboard, with the same C2750 / I354 interfaces.
Not seeing the issues here?
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Chello (80.56.45.9)...
Selecting best server based on latency...
I'm running a VM as a fileserver with a large host disk device exported as "scsi0: /dev/sdd1", currently 18.5 TiB
(6x 4TB RAID-5). I wouldn't worry about 'too' big ... You should of course be running a filesystem that can handle devices that big, though. (I'm currently running...
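As a sketch (device name taken from the example above), XFS handles multi-TiB devices out of the box:

mkfs.xfs /dev/sdd1

(With ext4 you'd need the '64bit' feature enabled to grow past 16 TiB.)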
If you are using RDP (as the mention of Terminal Server suggests), that does not depend on the physical server's graphics performance, since it's a network protocol.
If you are referring to the performance of the console via the browser-based Proxmox VE frontend, then you should be using SPICE.
For any serious production database, I'd only trust local storage (or an iSCSI / Fibre Channel multipathed SAN); NFS is indeed simply not stable enough.
It can become unavailable, which will cause hanging I/Os and timeouts. Also, the database relies on fsyncs to keep the WAL logging and the database...
Regarding question 1: there is no way for a single VM to run across multiple nodes and utilize the resources of a whole cluster.
Like Mo_ said, you need the application to be written to use MPI in order to have the workload distributed over multiple nodes.
VMs are supposed to be 'smaller' than...
Yup, like spirit suggested, the user you are using to run your log server application needs to be listed in /etc/security/limits.conf with a large enough 'nofile' soft and hard limit.
Otherwise it uses the system default, which is 1024.
So, let's say the application runs under a user...
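For illustration, with a hypothetical user 'loguser', the limits.conf entries would look like:

# /etc/security/limits.conf -- 'loguser' is an example username
loguser  soft  nofile  65536
loguser  hard  nofile  65536

Verify after a fresh login with: su - loguser -c 'ulimit -n'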
The RAID device is not 'mountable' on the Proxmox VE server; it's mounted in a VM,
where pveperf doesn't work correctly, so a 'dd' test is used in the VM.
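The 'dd' test in the VM is roughly this (the path is an example; conv=fdatasync makes sure the data actually hits the disk before dd reports a speed):

dd if=/dev/zero of=/mnt/test.img bs=1M count=4096 conv=fdatasync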
Intel(R) Atom(TM) CPU C2750 @ 2.41GHz (OctaCore) + HighPoint RR2720 (SATA/SAS 6Gbps) + 6 WD Reds 4TB in RAID-5 ( Half-fake-RAID )...
As far as I know, this does not have a direct consequence for KSM and caching, but should the machine be 'in need' of free memory, setting it to dynamic enables the 'balloon' driver, which has to be installed separately in Windows (virtio drivers pack).
This allows the VM to free up unused...
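A sketch of the host-side setting (VMID and sizes are examples), maximum 4 GB with a 2 GB balloon target:

qm set 100 -memory 4096 -balloon 2048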
Hi Fransisco,
It's not that evident how much memory will be free. Proxmox VE (the Linux kernel) has several very nice features which can save memory, as well as dynamic filesystem caching, which will show up as 'cache' and thus 'used' memory.
Having 20 VMs with 1 GB each, will result in more than...
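You can see what KSM is actually saving on the host; pages_sharing times the page size (usually 4 KiB) is roughly the amount of memory deduplicated:

cat /sys/kernel/mm/ksm/pages_sharing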
Quorum is not a shared state in the cluster configuration; a quorum disk is a 'shared' disk that allows a cluster to have more 'votes'.
Every single server within a cluster is seen as having a single vote, used to decide whether a cluster is 'quorate' enough to be considered 'alive' and allowed to start services (read: VMs).
In a...
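You can check the vote count and quorum state on any node with:

pvecm status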
Re: New Mini-itx Proxmox Build
The PSU is quite sufficient, since my Rangeley board with a RAID controller and 10 hard drives draws just about 70 watts idle and 80 to 85 watts under load.
However, I guess the M.2 needs a PCIe adapter to work...