Does anyone have a documented best-practice procedure for installing newer NVIDIA drivers in Proxmox (7.4)? If I understand correctly, running the following command will only install the latest "supported" driver, which is 470.129.06.
Previously I've installed the drivers via the NVIDIA*.run...
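For context, the .run route I've used before looked roughly like this. This is a sketch, not a recommendation: the driver filename/version below is just an example, substitute whatever release you actually download from nvidia.com.

```shell
# Kernel headers are required so the installer can build its modules
# against the running PVE kernel.
apt update
apt install -y pve-headers-$(uname -r) build-essential

# Blacklist the in-tree nouveau driver so it doesn't grab the GPU.
echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
update-initramfs -u

# Run the installer downloaded from nvidia.com (filename is an example).
chmod +x NVIDIA-Linux-x86_64-515.65.01.run
./NVIDIA-Linux-x86_64-515.65.01.run --dkms

reboot
```

The --dkms flag means the module gets rebuilt automatically on kernel upgrades, which matters on Proxmox since pve-kernel updates are frequent.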
Thanks for the reply, but as I showed in my first reply today, it's not the Mikrotik switch. I spun up a fresh Proxmox host configured identically on the switch side to my current Proxmox host, and everything works perfectly on that host.
Not sure if that's my problem then. I've definitely never intentionally installed SDN; the only related package I added was ifupdown2, so that I could apply network changes without a reboot.
So I spun up a new Proxmox host with the same network config as above and the same switch config, and everything works. So something is going on with my current Proxmox host. When I do ip link show I see a TON of interfaces. I recognize 1-11, but after that, not a clue. What are all of these...
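For anyone trying to identify mystery interfaces like these, the detailed view of ip link is what I've been using to figure out what created them (the interface name below is an example, not from my actual list):

```shell
# Brief one-line-per-interface view: name, state, MAC.
ip -br link show

# For an unfamiliar interface, the detailed view prints its kind
# (veth, bridge, vlan, tap, ...) and what it is attached to.
# "fwbr100i0" is an example name; Proxmox firewall bridges and
# VM tap/veth devices typically show up with names like this.
ip -d link show fwbr100i0
```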
Is there some VLAN-aware setting that I'm not aware of? The only VLAN setting for the bridge that I knew of is enabling VLAN filtering, and that is enabled on the bridge. I have also confirmed that the VLANs in question here, including 110, are all tagged on the bond interface. I have VLANs tagged...
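For comparison, my understanding of a minimal VLAN-aware bond+bridge setup in /etc/network/interfaces looks like this (NIC names, bond mode, and addresses are examples, not my exact config):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 110 via a VLAN sub-interface of the bridge.
auto vmbr0.110
iface vmbr0.110 inet static
    address 192.168.110.10/24
    gateway 192.168.110.1
```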
Oh sorry I misunderstood. No. If I assign 110 to a VM I also get nothing.
Does the Proxmox config look correct? If so, I'll focus on my switch side, even though there shouldn't be much to it except tagging the VLANs on the bond ports.
I'm using a pair of NICs bonded together to a Mikrotik CRS354 running RouterOS. On the Mikrotik side I have VLANs 110, 120, and 140 tagged on the bond. I have Proxmox management running on VLAN 110 and that is working fine. If I try to tag 120 or 140 to a VM, I can't communicate over those...
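To be concrete about what I mean by "tagged on the bond," the RouterOS config is along these lines (bridge and bond names here are examples; bridge1 is also tagged so the switch itself can reach the management VLAN):

```
/interface bridge vlan
add bridge=bridge1 tagged=bridge1,bond1 vlan-ids=110,120,140
```

This assumes vlan-filtering=yes is set on bridge1; without it, the bridge VLAN table isn't enforced at all.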
I would like to remove the original corosync network interface (Link 0 in the screenshot below) that I used when I first set up my cluster. How can I do this?
I've followed these steps to add a second redundant link for the cluster network on an existing cluster. However, how can I set the priority to prefer the newly created link over the link that was originally set up at the time of the cluster creation...
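From my reading of the corosync docs, link preference is set per-interface in /etc/pve/corosync.conf with knet_link_priority inside the totem section; with link_mode: passive, the link with the higher priority value is used while it's up. This is a sketch of my understanding, not a tested config (and remember to bump config_version when editing the file):

```
totem {
  link_mode: passive
  interface {
    linknumber: 0
    knet_link_priority: 10
  }
  interface {
    linknumber: 1
    # Higher value = preferred; link 1 is used while it's healthy.
    knet_link_priority: 20
  }
}
```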
I'm seeing these messages very often in my syslog as I'm troubleshooting some unexpected node reboots (a few times a day). Is this being caused by the network dropping out?
Mar 16 14:17:02 athens corosync[6092]: [KNET ] link: host: 1 link: 0 is down
Mar 16 14:17:02 athens corosync[6092]...
Oh really? I thought it was just showing a duplicate since the bus is identical and it only shows one of the interfaces as Ethernet. Is that how it looks for you as well?
I installed dual-port ConnectX-3 NICs into two different Proxmox 7.1 servers and I'm only seeing a single port show up on each. These NICs were pulled from working Unraid (Slackware) servers where both ports showed up fine and were functioning. I had previously configured these NICs to run in...
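In case it helps anyone hitting the same thing: my understanding is that on ConnectX-3 (mlx4), a port left in InfiniBand mode won't appear as an Ethernet NIC at all, and the per-port link type can be inspected and changed. A sketch of what I mean (the PCI address and mst device name are examples for my system):

```shell
# Check the current link type of each port (prints "ib", "eth", or "auto").
cat /sys/bus/pci/devices/0000:01:00.0/mlx4_port1
cat /sys/bus/pci/devices/0000:01:00.0/mlx4_port2

# Switch port 2 to Ethernet for this boot only.
echo eth > /sys/bus/pci/devices/0000:01:00.0/mlx4_port2

# To persist it in firmware, the MFT tools can set LINK_TYPE (2 = ETH).
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```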
I cannot believe how difficult it is to install NVIDIA drivers on Proxmox. This seems like something that Proxmox should be providing documentation on.