After reading about the issues with network interfaces being renamed on the upgrade to 8.2, I went through the 7 nodes I upgraded yesterday and used the instructions HERE to override the device names to eth#. Each name is associated with the interface's MAC address, which is a consistent value. It also...
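The override approach described above can be done with a systemd `.link` file that matches on the NIC's MAC address; a minimal sketch, assuming a hypothetical MAC address that you would replace with your own interface's value:

```
# /etc/systemd/network/10-eth0.link
# Pin the name "eth0" to a NIC by its MAC address (example MAC — substitute your own)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth0
```

One file per interface, then rebuild the initramfs (`update-initramfs -u`) and reboot for the names to take effect.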
I have VMs that connect to multiple VLANs, and I have a single bridge that they all connect to. If I do not specify a VLAN on a VM's interface, it gets the untagged VLAN for that bridge; if I specify one on the interface, it uses the tagged VLAN. For each VM that needs to be connected...
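The untagged-versus-tagged behavior above shows up directly in the VM config; a sketch assuming a hypothetical VM and a bridge named `vmbr0` (both names are placeholders):

```
# /etc/pve/qemu-server/<vmid>.conf
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0          # no tag — untagged/native VLAN of the bridge
net1: virtio=DE:AD:BE:EF:00:02,bridge=vmbr0,tag=20   # tag=20 — traffic tagged on VLAN 20
```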
I just finished the first batch of updates and had no issues on any of the systems I updated. With the possible issue around interface renaming, I did apply the network naming changes that have been mentioned before, and rebooted after applying those changes before doing the update.
So far...
I have a DIY PiKVM setup connected to an 8-port KVM switch to add a sort of poor man's IPMI to some of my servers that are too old for the virtual console in their IPMI to work with a modern browser.
I have been thinking of keeping this setup and possibly adding a second one if and when I...
Are you wanting to do any PCIe passthrough, or looking to do any other, let's call it more advanced or complicated, configuration? Or are you just looking to install Proxmox VE and run 2 simple VMs? If they are more comfortable with Windows, could you not use something like RDP and VirtualBox...
Are you able to ping, say, Google or another external destination from all your nodes? Have you reviewed your networking configuration on all nodes to confirm it is correct? Have you checked your upstream switches and their settings if you are using any management features such as VLANs or LACP...
I'm in the same camp as @leesteken and would check with the writer of the script, as these are essentially unsupported modifications to the Proxmox VE environment. However, have you looked at the scripts and tried running the commands manually on the node that is having the issue to see if you can get...
I wanted all devices on my network to use the same time; this way only 1 device is going to the internet to get the time and the rest are getting it from that device. Eventually I would like to move to a Raspberry Pi or similar device that can provide time through GPS, but it will still probably...
I use pfSense, but same difference, and have all of my Proxmox hosts and VMs using pfSense as the NTP source of truth. I have pfSense going out to the internet to an NTP server, and while my pfSense is a VM within my cluster of 7 nodes running in an HCI setup, I haven't had any issues other...
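Pointing the Proxmox hosts at a single local NTP source like this is a small chrony change; a minimal sketch, assuming the pfSense box answers NTP at a hypothetical address of 192.168.1.1:

```
# /etc/chrony/chrony.conf on each Proxmox host
# Replace the distro's default pool lines with the local source of truth
# (192.168.1.1 is an assumed pfSense address — substitute your own)
server 192.168.1.1 iburst
```

Then restart chrony (`systemctl restart chronyd`) and confirm the source with `chronyc sources`.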
I use both tagged and untagged VLANs, but I am using the OVS networking that is available within Proxmox, as I always seemed to have issues when I started out with Proxmox and Linux Bridges for networking, and I haven't gone back to try it again.
You need to disable the enterprise repositories and enable the no-subscription repositories; then you will be able to install any updates and install Ceph. Once you have a subscription, you can switch back to the enterprise repositories.
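The repository switch comes down to two APT source files; a sketch assuming a Proxmox VE 8.x install on Debian bookworm (adjust the codename for your version):

```
# /etc/apt/sources.list.d/pve-enterprise.list — comment out the enterprise repo
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list — add the no-subscription repo
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```

After that, `apt update` should complete without the 401 errors from the enterprise repo, and updates and the Ceph install will be available.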
If you are using a Linux Bridge that is VLAN aware or an OVS Bridge, you should be able to just add a VM's or LXC's interface to the corresponding bridge, and it should work using the untagged/native VLAN. However, if you are trying to do this with an SDN setup, I am still working out how to deal...
Have you created a bridge named "SharSvcs"? Can you also post the following outputs:
System -> Network for each node in your cluster
Datacenter -> SDN
Datacenter -> SDN -> Zones
Datacenter -> SDN -> VNets
From what I know, in my use case the biggest difference for me was that OVS supported VLANs without having to specify it each time, but I haven't really looked into the differences between them in detail. I only did a little digging when I was first learning Proxmox and needed to have VLAN support...
I have done more experimenting with the SDN, and the need to always create a VLAN zone with a tagged VLAN, plus the simple zone not allowing association with a bridge, leaves me with only 2 options that I can see for now:
Leave everything as is with multiple OVS Bridges and specifying the VLAN...
You can use the HA features of Proxmox to have the VM fail over to another node automatically... My backhauls are not very quick right now, only using LACP-bonded 1Gb links, but it works pretty decently for my needs, and I will be scaling up to better hardware once I have a full plan on the...