Hi all,
We are looking into deploying a new refurbished NVMe HCI Ceph Proxmox cluster.
At this point we are looking at 7 nodes, each with 2 NVMe OSD drives, with room to expand to 2 more NVMe OSDs per node.
As we would quickly saturate a 25GbE link, we should be looking into 40/50/100GbE links and switches...
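As a rough sanity check on that (ballpark figures, assuming typical PCIe 4.0 drives):

  25GbE  ~ 3.1 GB/s usable bandwidth per link
  1 NVMe ~ 3-7 GB/s sequential read

So even a single OSD can fill a 25GbE link, and with 2-4 OSDs per node plus Ceph replication traffic on top of client I/O, a faster storage network seems justified.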
A few months after installation I am seeing bandwidth degradation between the PVE host and a VM (Windows) and a CT:
1. After installing and configuring PVE and a CT (Ubuntu Linux), I tested bandwidth with iperf3 and got about 95 Gbit/s.
2. After a few months I get only 60-70 Gbit/s:
[ ID] Interval...
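In case it matters, this is roughly how I run the test (a minimal sketch; the CT address 10.0.0.10 is just an example, not my real setup):

  # on the CT: start the iperf3 server
  iperf3 -s

  # on the PVE host: run 4 parallel streams for 30 seconds
  iperf3 -c 10.0.0.10 -P 4 -t 30

Single-stream numbers can swing a lot with CPU frequency scaling, so parallel streams (-P) give a more stable comparison between the fresh-install run and the run a few months later.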