Search results

  1. [TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

    Got it figured out. The following will list all parameters for every currently loaded module: cat /proc/modules | cut -f 1 -d " " | while read module; do echo "Module: $module"; if [ -d "/sys/module/$module/parameters" ]; then ls /sys/module/$module/parameters/ | while read parameter; do...
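    The quoted one-liner is cut off mid-loop. A completed sketch of what it presumably continues into, reading each parameter's current value from sysfs, might look like this (the output format is an assumption):

```shell
# Walk every loaded kernel module and print each parameter with its value.
cut -f1 -d' ' /proc/modules | while read -r module; do
    echo "Module: $module"
    if [ -d "/sys/module/$module/parameters" ]; then
        for p in "/sys/module/$module/parameters"/*; do
            [ -e "$p" ] || continue   # skip if the directory is empty
            # Some parameters are write-only; suppress the read error.
            printf '    %s = %s\n' "$(basename "$p")" "$(cat "$p" 2>/dev/null)"
        done
    fi
done
```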
  2. [TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

    I am hitting the write-power governing because of the default wattage limit of 25 watts. Anyone know how to bump that up? I found this, but I can't find the fio-config app: fio-config -p FIO_EXTERNAL_POWER_OVERRIDE <device serial number>:<power in watts> Also found that it can be set using...
  3. [TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

    Just did an install this morning. You just need to replace the step of downloading the iomemory-vsl zip file with a download from the GitHub link above, then rename the unzipped directory. root@odin:~# uname -a && fio-status -a Linux odin...
  4. [TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

    Are you guys grabbing the latest version (aka master) of the driver? Using the one listed in the first post and trying to compile it against the 5.30.X versions of the kernel will only fail to build.
  5. [TUTORIAL] Configuring Fusion-Io (SanDisk) ioDrive, ioDrive2, ioScale and ioScale2 cards with Proxmox

    Same here. You just need to download and install the latest drivers. The only bad side effect is that I lost everything that was on the drive when the new drivers were installed. I had recent backups, so not a big deal.
  6. Proxmox VE 7.2 released!

    Made the mistake of upgrading, and now GPU passthrough is broken. The VM fails to start with the helpful error message: "failed: got timeout". Basically the same thing in syslog. If I remove the GPU, the VM boots and runs fine. Add the GPU back as passthrough and it fails to start. Is there any...
  7. (7.1) Performance Issues

    On one of the VMs: [root@sauron gondor]# fio --filename=/gondor/test.dat --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --time_based --end_fsync=1 fio: time_based requires a runtime/timeout setting random-write: (g=0)...
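    For reference, the error in the quoted run comes from combining --time_based with no runtime; adding --runtime (in seconds) is the usual fix. A sketch of the corrected invocation, with paths and sizes taken from the post and the 60-second runtime being an assumption:

```shell
fio --filename=/gondor/test.dat --name=random-write --ioengine=posixaio \
    --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 \
    --time_based --runtime=60 --end_fsync=1
```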
  8. (7.1) Performance Issues

    OK, I did use /dev/random and the host results were in the low 200MB/s range. How much of that is CPU random-number generation and how much is disk performance? For several SSDs or 15K SAS drives striped in RAID0, I would expect the write performance to be in the 800+ MB/s range.
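    One way to take the RNG out of the measurement, sketched below, is to pre-generate random data into RAM once and then time only the write to disk. The 64 MB size and the paths are arbitrary; on a real disk you would use a much larger size and likely add oflag=direct to bypass the page cache:

```shell
# Pay the RNG cost once, into tmpfs (RAM), so it can't throttle the write.
head -c 64M /dev/urandom > /dev/shm/rand.bin
# Time only the disk write; conv=fsync flushes before dd reports its rate.
dd if=/dev/shm/rand.bin of=/tmp/ddtest.bin bs=1M conv=fsync
```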
  9. (7.1) Performance Issues

    OK, I have been chasing a performance issue transferring data between two Proxmox hosts and I can't seem to figure things out. The general issue is that I am seeing really low transfer rates between the machines. Copying a large 10GB file is only getting about 80MB/s in transfer...
  10. (7.x) 10GB NICs at 1GB Speed

    OK, finally got some time to play around with things. I broke apart the bonds and now have single physical ports going into the bridges. I currently have one bridge with a single 10Gb port, where both the port and the bridge have an MTU of 9000 for jumbo frames. I have a single Linux VM...
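    The setup described (one physical port in a bridge, MTU 9000 on both) can be sketched in /etc/network/interfaces roughly as follows. The interface names and address are hypothetical, and the MTU must match on every hop, including inside the guest:

```
auto enp5s0
iface enp5s0 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.1/24
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```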
  11. Nvidia Tesla vGPU mdevctl

    I am trying to do something similar here. When I run 'mdevctl types' all the 'Available instances: ' counts are zero. What does that mean?
  12. (7.x) 10GB NICs at 1GB Speed

    That is completely false. Under full load a 64-thread system will outperform a straight 32-core machine, but it will not match the performance of a true 64-core machine with no threads. Many, many, many years of running F@H on high thread- and core-count machines has proved that...
  13. (7.x) 10GB NICs at 1GB Speed

    How is that 'big'? You have 6 threads for the VM and the host has 10 threads sitting around for its own use. I could see having 15 threads allocated to VMs and only 1 thread left for host usage being overtaxing on the system. For me, server A has 48 threads with 36 vCPUs allocated...
  14. (7.x) 10GB NICs at 1GB Speed

    What do you mean by making the VM too big?! The VMs are using the default CPU type of kvm64 and the VirtIO NIC type with Multiqueue enabled.
  15. (7.x) 10GB NICs at 1GB Speed

    Both the host and VM have low CPU utilization while data is being transferred. The VM does have multiple vCPUs still in IO wait. You are right, I was thinking it was gigabits per second, but it is gigabytes. About 1/4 the expected throughput. Anyone have any ideas on what might be...
  16. (7.x) 10GB NICs at 1GB Speed

    Just about every method shows this behavior: sftp, rsync, copies across NFS, iSCSI, and PBS backups/restores. iperf3 shows about 2.26 GBytes/sec going from VM to VM across the bonded bridges, which is expected. In theory going from one ramdisk to another should see about the same results, but I am...
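    Worth keeping in mind when comparing numbers: iperf3 reports bits per second by default, while file copies report bytes. A quick sketch for checking the raw link between the two hosts (the receiver's IP is hypothetical):

```shell
# On machine B (receiver):
iperf3 -s

# On machine A (sender): 4 parallel streams, report in Gbit/s.
iperf3 -c 10.0.0.2 -P 4 -f g
```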
  17. (7.x) 10GB NICs at 1GB Speed

    I have two Proxmox servers running the latest non-production updates of 7.x. Both machines have a dual-port 10Gb RJ-45 NIC installed, with a pair of Cat7 cables connecting the two machines together (port 0 on machine A direct-connected to port 0 on machine B, and port 1 on A direct-connected to...
  18. (7.1) ZFS Performance issue

    Cool, thanks for the info Dunuin. This is going to be real helpful. Let me try a couple of the combos and I will report back.
  19. Windows 11 KVM processor not supported

    I ended up having to do the ISO modification where you delete the requirements-checker DLL file, rebuild the ISO, and then install Windows 11. I used the default kvm64 CPU type.
  20. (7.1) ZFS Performance issue

    I am interested in feedback, but I was not asking what new hardware to buy or how to structure things. I simply asked why there is a performance difference between Proxmox and another OS using the same hardware. Obviously the usage of the hardware will be a bit different, and that is what I...

