Hi,
we have configured an external Ceph cluster with InfiniBand and added it to our Proxmox cluster.
Ceph has a public network (172.16.65.0/24) and a cluster network (10.16.70.0/24).
We also added the Ceph pool to storage.cfg:
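For context, the two networks are defined in ceph.conf roughly like this (quoting from memory, the exact file may differ slightly):

    [global]
        # clients (the Proxmox nodes) talk to MONs/OSDs on this network
        public network = 172.16.65.0/24
        # OSD replication/heartbeat traffic stays on this network
        cluster network = 10.16.70.0/24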
rbd: ssd01
        content images
        monhost 172.16.65.21 172.16.65.22 172.16.65.112
        pool ssd01
        username admin
where the monhost addresses are on the public network.
If we now install a VM on ssd01 and run a "dd" inside that VM, the network traffic on vmbr0 (172.16.65.0/24) hits its maximum, so I believe the Ceph traffic is going over vmbr0 and not over InfiniBand.
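The test inside the guest was roughly this (file name and size are just examples):

    dd if=/dev/zero of=/root/ddtest bs=1M count=20000 oflag=direct

While it runs, the traffic can be watched with something like iftop -i vmbr0 on the node, and listing the established Ceph connections (MON ports 6789/3300) shows which monitor addresses the RBD client is actually talking to:

    ss -tn | grep -E ':(6789|3300)'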
Can I force Proxmox to use the 10.16.70.0/24 network even though the Ceph monitors are on 172.16.65.0/24?
Regards,
Volker