Hi all,
We have two custom-assembled servers with back-to-back Fibre Channel connectivity, and we are using SSD disks for testing. We are 100% sure all hardware components (FC/FCoE adapters, SSDs, SATA controller, motherboard) support more than 6 Gbps.
When we run plain Windows Server on both machines, the bandwidth is 6.1 Gbps. But when we run Windows Server as VMs on Proxmox 3.4, the bandwidth drops to 3.5 Gbps.
We created the VMs using virtio (virtual NIC and disks), with the virtual disks in raw format.
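For reference, our VM configuration is roughly equivalent to the following `qm` commands (the VM ID, VM name, bridge, storage name, and disk size here are illustrative placeholders, not our exact values):

```shell
# Create a Windows VM with a virtio NIC and a raw-format virtio disk
# (VM ID 100, bridge vmbr0, storage "local", 32 GB disk are example values)
qm create 100 \
  --name win-test \
  --memory 8192 \
  --sockets 1 --cores 4 \
  --ostype win8 \
  --net0 virtio,bridge=vmbr0 \
  --virtio0 local:32,format=raw
```

Inside the guest we installed the virtio drivers for both the NIC and the block device.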
I am unable to understand where the problem is. Does the Proxmox kernel have any bandwidth limitation?
Please help me with this.
Thank you so much in advance.