Hello,
we are using Proxmox to host our backup server running Veeam Backup, which protects our VMware environment, and also to host some other servers.
Currently we back up through a Fibre Channel card that is passed through to the VM. This works fine with its physical 8 Gbit/s link directly into the FC storage arrays.
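For reference, the passthrough is configured the usual way via the Proxmox CLI (a minimal sketch; the VM ID and PCI address here are placeholders, not our actual values):

    qm set 100 -hostpci0 0000:03:00.0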
Now we plan to change everything and get a completely new VMware environment (I know, as a Proxmox fan it is a shame to use VMware, but it is a political decision made by our company leaders). Backups should then go over Ethernet.
I installed a 2x SFP+ card in the host; it will get 2x 25 Gbit transceivers connected via fibre to an Ethernet switch using LACP for the future backups (so a theoretical speed of up to 50 Gbit/s may be possible).
Passing this card through results in unstable behaviour; this card seems to be "tricky" and does not like being passed through. Since the card is recognized correctly by Proxmox, I decided to attach it to a vmbr on top of an LACP bond instead (accepting a few percent of performance loss).
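The host networking is set up roughly like this (a minimal sketch of /etc/network/interfaces; the interface names are assumptions for illustration, not our actual ones):

    auto bond0
    iface bond0 inet manual
        bond-slaves enp5s0f0 enp5s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0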
Since the Windows Server guest on Proxmox reports "10 GbE" with the virtio driver (most people say this is only a cosmetic number), I tested with iperf between two VMs on the same Proxmox host and on the same bridge. I am "only" getting a maximum of 10 Gbit/s, not more. The server is a Lenovo SR650 V1 with "32 x Intel(R) Xeon(R) Silver 4215R CPU @ 3.20GHz (2 Sockets)", which should have more than enough power to push more than 10 Gbit/s virtually.
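This is roughly how I tested (a sketch; the IP address is a placeholder, and -P runs parallel streams, since a single TCP stream is often CPU-bound well below what virtio can move):

    # on VM 1
    iperf3 -s

    # on VM 2
    iperf3 -c 192.168.1.10 -P 4 -t 30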
On the other hand, I remember having seen 100 Gbit/s on virtio drivers somewhere in the past... so now my question is:
Is 10 Gbit/s just a cosmetic number, and my real measurement coincidentally landed at the same value,
or
is 10 Gbit/s the real maximum speed of the virtio driver, and is there any trick to bump it towards 100 Gbit/s and get the full power of the hardware?
Thank you for any help
Dirk