Question: Inter-VM 10GbE network between VMs? #rookCeph #k3s

NZVengeance
Feb 2, 2024
Hey everyone,
I am running my Kubernetes nodes on my Dell R730.
At the moment they are using a bridged network on a 10GbE NIC… but I only have a 1GbE switch.

That said, the server has a second, unused 10GbE NIC (eno2).
I was wondering if it would be possible to create a "logical" inter-VM network using this NIC's hardware, without having to plug a cable into it?

The hope is to be able to bind my Rook Ceph containers to that NIC so that they can replicate data very quickly.

I have no idea if this is possible, but thought this was the best place to ask :D
 
Hi,
I am running my kubernetes nodes on my Dell R730
This is a forum for Proxmox-related software. How is this related to your Kubernetes cluster? Do you run your Kubernetes nodes as VMs on Proxmox VE? Please explain your setup in more detail so we can help.

I was wondering if It would be possible to create a “logical” inter-vm network using this NICs hardware, without having to plug a cable into it?
If the above holds true and you are running the Kubernetes nodes on top of Proxmox VE, you can simply attach the VMs' vNICs to the same Linux bridge. This bridge does not even have to be connected to any physical network port; you could set up a dedicated bridge and vNICs just for your inter-VM network.
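For example, a port-less bridge can be defined on the Proxmox host in /etc/network/interfaces like this (the name vmbr1 is just an example):

```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#Dedicated inter-VM network (no physical uplink)
```

Then add a second network device on each VM pointing at vmbr1 and assign it a static IP in a dedicated subnet inside the guest.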

Hope this helps!
 
Apologies. Yes.
I am running my nodes as VMs on my Proxmox server: 3 VMs, each with SSDs passed through directly, one SSD for the OS and one dedicated to Rook Ceph.

What would the speed be with a vNIC? I am aiming for something greater than 2.5GbE (as close to 10GbE as possible would be awesome).
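From the Rook documentation I'm assuming I could then steer the replication traffic onto that network with something like this in the CephCluster spec (the subnets below are placeholders for whatever the bridge network ends up using):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    # host networking is required for addressRanges to take effect
    provider: host
    addressRanges:
      # OSD replication traffic over the dedicated inter-VM subnet
      cluster:
        - "10.10.10.0/24"
      # client traffic stays on the existing bridged network
      public:
        - "192.168.1.0/24"
```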
 
In theory there is no limit to the network speed over a Linux bridge; in practice you will be limited by CPU speed and by disk I/O when writing data to disk. The easiest way to find out is to build such a setup and do some performance testing using iperf and fio.
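For instance, assuming two VMs with VirtIO vNICs on the dedicated bridge, a quick test run could look like this (the addresses and file paths are placeholders):

```
# VM 1: start an iperf3 server
iperf3 -s

# VM 2: measure throughput to VM 1 over the inter-VM bridge
iperf3 -c 10.10.10.11 -t 30

# On the Ceph SSD: measure sequential write throughput
# (run against a scratch file, not live Ceph data)
fio --name=seqwrite --filename=/mnt/test/fio.dat --size=4G \
    --rw=write --bs=1M --ioengine=libaio --direct=1 --iodepth=16
```

If the iperf3 numbers are well above 2.5 Gbit/s but Ceph replication is not, the bottleneck is more likely the disks or CPU than the virtual network.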
 
