Got it figured out. The following will list all parameters for all currently loaded modules:
cat /proc/modules | cut -f 1 -d " " | while read module; do echo "Module: $module"; if [ -d "/sys/module/$module/parameters" ]; then ls /sys/module/$module/parameters/ | while read parameter; do echo -n "  Parameter: $parameter --> "; cat "/sys/module/$module/parameters/$parameter"; done; fi; done
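The same listing can also be done as a small standalone script that walks /sys/module directly, which skips modules without parameters and tolerates write-only ones. A sketch; the output format here is my own:

```shell
#!/bin/sh
# Print every loaded module's parameters and current values from sysfs.
list_module_params() {
    for dir in /sys/module/*/parameters; do
        [ -d "$dir" ] || continue
        printf 'Module: %s\n' "$(basename "$(dirname "$dir")")"
        for p in "$dir"/*; do
            # Some parameters are write-only; reading them fails, so
            # suppress the error and print an empty value instead.
            printf '  %s = %s\n' "$(basename "$p")" "$(cat "$p" 2>/dev/null)"
        done
    done
}
list_module_params
```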
I am hitting write power governing because of the default 25-watt limit. Anyone know how to bump that up? I found this, but I can't find the fio-config app:
fio-config -p FIO_EXTERNAL_POWER_OVERRIDE <device serial number>:<power in watts>
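If the fio-config utility isn't available, the same override can reportedly be set as a driver module option instead. The parameter name below is an assumption inferred from the fio-config flag; verify it with `ls /sys/module/iomemory_vsl/parameters` before relying on it:

```
# /etc/modprobe.d/iomemory-vsl.conf  (parameter name assumed, verify first)
options iomemory-vsl external_power_override=<device serial number>:<power in watts>
```

The placeholders are the same as in the fio-config form above.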
Also found that it can be set using...
https://github.com/RemixVSL/iomemory-vsl
Just did an install this morning. You just need to replace the step of downloading the iomemory-vsl zip file with a download from the GitHub link above, then rename the unzipped directory to match.
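For anyone following along, the modified steps look roughly like this. This is a sketch: the clone URL comes from the link above, but the directory name the original guide expects and the make targets are assumptions, so check the repo's README:

```shell
git clone https://github.com/RemixVSL/iomemory-vsl.git
mv iomemory-vsl <directory-name-the-guide-expects>   # the rename step
cd <directory-name-the-guide-expects>
make module            # build target assumed; builds against the running kernel's headers
sudo make install
sudo modprobe iomemory-vsl
```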
root@odin:~# uname -a && fio-status -a
Linux odin...
Are you guys grabbing the latest version (aka master) of the driver? Using the one listed in the first post and trying to compile it against the 5.3.X kernels will just fail to build.
Same here. You just need to download and install the latest drivers. The only bad side effect is that I lost everything that was on the drive when the new drivers were installed. I had recent backups, so not a big deal.
Made the mistake of upgrading, and now GPU passthrough is broken. The VM fails to start with the helpful error message "failed: got timeout", and syslog shows basically the same thing. If I remove the GPU, the VM boots and runs fine; add the GPU back as passthrough and it fails to start.
Is there any...
OK, I did use /dev/random, and the host results were in the low 200MB/s range. How much of that is CPU time spent generating random numbers, and how much is disk performance? For several SSDs or 15K SAS drives striped in RAID0, I would expect write performance in the 800+ MB/s range.
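One way to split that question is to benchmark the generator and the disk separately. A sketch: the first dd measures pure random-number generation (no disk), the second measures pure disk writes (no RNG). /tmp/ddtest is a hypothetical scratch path, and on kernels before 5.6 /dev/random blocks, so /dev/urandom is the fairer generator test:

```shell
# RNG speed alone: nothing touches the disk.
dd if=/dev/urandom of=/dev/null bs=1M count=64
# Disk write speed alone: zeros cost nothing to generate.
# conv=fdatasync forces a flush so the number isn't just page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

If the first number is also in the low 200MB/s range, the bottleneck is the RNG, not the array.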
OK, I have been chasing a performance issue transferring data between two Proxmox hosts and I can't seem to figure it out. The general issue is that I am seeing really low transfer rates between the machines. Copying a large 10GB file only gets about 80MB/s in transfer...
OK, finally got some time to play around with things. I broke apart the bonds and now have single physical ports going into the bridges. I currently have one bridge with a single 10Gb port, where both the port and the bridge have an MTU of 9000 for jumbo frames. I have a single Linux VM...
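One thing worth ruling out with jumbo frames: every hop (port, bridge, and the far end) has to honor MTU 9000, and a silent mismatch just fragments or drops traffic. A do-not-fragment ping sized just under the MTU is a quick end-to-end check; the target address is a placeholder:

```shell
# 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000.
ping -M do -s 8972 -c 3 <other-proxmox-host>
# Confirm the MTU actually applied on each interface and bridge:
ip link show | grep -i mtu
```

If the ping reports "message too long" or gets no replies, something in the path is still at 1500.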
That is completely false. Under full load, a 64-thread system will outperform a straight-up 32-core machine, but it will not match the performance of a true 64-core machine with no threads. Many, many, many years of running F@H on high-count thread and core machines has proved that...
How is that 'big'? You have 6 threads for the VM, and the host has 10 threads sitting around for its own use. I could see having 15 threads allocated to VMs with only 1 thread left for host usage being overtaxing on the system.
For me, server A has 48 threads with 36 vCPUs allocated...
What do you mean by making the VM too big???!?!
The VMs are using the default CPU type of kvm64 and the VirtIO NIC type with Multiqueue enabled.
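For reference, Multiqueue in Proxmox is the queues= option on the NIC line of the VM config, and the guest side usually needs ethtool to bring the extra queues online. The values below are examples only:

```
# /etc/pve/qemu-server/<vmid>.conf  (example NIC line)
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4

# Inside the guest, match the queue count:
ethtool -L eth0 combined 4
```

A common rule of thumb is to set queues equal to the VM's vCPU count.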
Both the host and VM have low CPU utilization while data is being transferred. The VM does have multiple vCPUs still in IO wait.
You are right, I was thinking it was gigabits per second, but it is gigabytes. About 1/4 the expected throughput.
Anyone have any ideas on what might be...