Hi Falk,
I agree that the Mellanox ConnectX-4 is a PCIe Gen3 card, and even though I have a PCIe Gen5 board it will run at Gen3 speeds. At x16 that is still about 15.7 GB/s, which is more than 100 Gbit/s (~12.5 GB/s), so I am not sure that is the bottleneck; it must be somewhere else.
We had received them as IB and changed them to Ethernet. Thanks for letting me know that we can only get up to 64 Gbit/s per port, so roughly 6,400 MB/s. Internally it is 4 channels (25G * 4 = 100G), so each channel should be limited to about 1,800 MB/s * 4 channels, but we are still nowhere near that...
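To rule out the network itself, a quick raw throughput test between two of the nodes with parallel streams should show whether the 100G link even gets close to line rate; something along these lines (the IP is just a placeholder for the peer node):

  # on the receiving node
  iperf3 -s
  # on the sending node: 8 parallel TCP streams for 30 seconds
  iperf3 -c 10.10.10.2 -P 8 -t 30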
Network config of one of the hosts. We have 2 more hosts with different IPs. We will eventually have 5-6 nodes with 5-6 NVMe SSDs per node, and more as the workload increases.
Hi Falk,
Thanks for your reply. We are in the process of migrating from the old Proxmox cluster (EPYC 7002 based) to the new one (EPYC 9554 based). As we move the workload over to the new cluster, we will move the SSDs from the old cluster to the new one; these SSDs are WD SN650 15.36TB...
Hello team,
I am on the latest version of Proxmox with the enterprise repo.
We are moving the disks from ZFS storage to Ceph storage, all on enterprise NVMe PCIe 4.0 SSDs, 2 per node = 6 SSDs total, with a 100G Ethernet network.
So the IO should be high and it should not be an issue...
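The disk moves themselves are done from the GUI, which should be equivalent to something like this on the CLI, and we can sanity-check the Ceph pool with rados bench (the vmid, disk and storage/pool names below are just examples, not our exact values):

  # move one VM disk from the ZFS storage to the Ceph RBD storage and drop the source
  qm disk move 101 scsi0 ceph-rbd --delete 1
  # 60-second write benchmark against the Ceph pool, then clean up the benchmark objects
  rados bench -p ceph-rbd 60 write --no-cleanup
  rados -p ceph-rbd cleanup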
Just set "tablet: 1" in a new line the file inside /etc/pve/qemu-server/<vmid>.conf and it will be all ok. also get the vmware mouse drivers. it just works perfectly aftet that
I did not try Parsec, but I am trying to get Moonlight to work and I am stuck at the first step: I am not able to get NVIDIA RTX Experience to install.
See, the NVIDIA A40-8Q with 8GB vRAM works perfectly over an RDP session inside Proxmox. But NVIDIA RTX Experience will not...
Hi Folks,
Trying to install Proxmox 8 on an EPYC 9554-based machine with a Gigabyte MZ33-AR0 board.
The screen is cut off at the bottom right corner (image below) across multiple reboots, and we can't seem to work around it via IPMI either.
Any bright ideas?
I just gave up, reinstalled, pinned the kernel to ver. 6.5 and everything just works beautifully.
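For anyone wanting to do the same, the pinning can be done with proxmox-boot-tool; the exact kernel version string below is only an example, use whatever 6.5 kernel "kernel list" shows on your box:

  # show installed kernels, then pin the 6.5 one so it stays the boot default
  proxmox-boot-tool kernel list
  proxmox-boot-tool kernel pin 6.5.13-6-pve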
The only thing is that we don't have a good VDI solution, as SPICE just sucks with no utilization of the GPU.
The only option is RDP, so we are exploring a web-based Guacamole setup with some GPU acceleration...
Hi,
I have Proxmox 8.1 running and will be upgrading to Proxmox 8.2.2 soon.
We had one of the nodes crash and we are seeing very slow rebuild speeds.
We run dual AMD EPYC 7002 series CPUs (64 cores each), 2TB RAM, and 4x 15.36TB WD SN650 enterprise-grade NVMe SSDs per node, and we have 10G for inter-VM...
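To get the rebuild moving faster, what we are looking at is raising the recovery/backfill limits on the OSDs; a rough sketch of what we'd try (the values are starting points, not recommendations, and on newer Ceph with the mclock scheduler the manual limits are only honoured if overrides are enabled):

  # prefer recovery over client IO on the mclock scheduler (Quincy and later)
  ceph config set osd osd_mclock_profile high_recovery_ops
  # allow the manual limits below to take effect despite mclock (newer Ceph only)
  ceph config set osd osd_mclock_override_recovery_settings true
  # more concurrent backfill/recovery work per OSD
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 8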
Hi,
Thanks for this. I have an A40 and would like to get the same working with ver. 17.1 compiling. Would the same patch work, or would we need a different patch for the latest version?