Hi, I have a Proxmox host running on a dedicated server from OVH. They provide a single public IPv4 address and an IPv6 /64 block. I'd like to route all IPv4 traffic from containers on my host through NAT, but assign proper public IPv6 addresses for direct access.
I've succeeded in getting the IPv4...
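A minimal `/etc/network/interfaces` sketch of this kind of setup, for reference — the bridge name, the uplink `eno1`, the 10.10.10.0/24 subnet, and the 2001:db8: prefix (the documentation range) are all placeholders, not the poster's actual values:

```text
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # NAT all IPv4 guest traffic out through the single public address
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE

# give the guests public IPv6 addresses from the /64 directly
iface vmbr1 inet6 static
        address 2001:db8:abcd::1/64
        post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
```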
I have a Kubernetes VM and an unrelated VM on the same host in a cluster using an OVS bridge. I am able to ping between the two VMs, but when I try to ping from a Kubernetes container it fails. Using tcpdump on the OVS bridge interfaces, I can see the ping and its response on the unrelated VM's interface...
we observe a strange problem here (cluster on PVE 7.2-3 with Open vSwitch): the ICMPv6 NS (Neighbor Solicitation) packet does not seem to arrive inside the VM (a Cisco C9800-CL Wireless Controller) when the vNIC has "firewall=1" set. As soon as we remove "firewall=1"...
I am building a 3-node testing cluster where each server has 2x 1Gbps ports and 2x 10Gbps ports. The 1Gbps ports are connected to different switches within the same network (simulating a real environment), and the 10Gbps ports are connected directly between the servers. I read the following guides and my...
I recently installed Open vSwitch on my (single) node and set up the bridge and IntPorts for the individual VLANs. The PVE host should also be reachable via a VLAN; for that, on the corresponding IntPort with the VLAN tag I configured the IP address and the...
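For comparison, the usual Proxmox `/etc/network/interfaces` syntax for an OVS bridge with a tagged IntPort carrying the host's management IP looks roughly like this — VLAN tag 10, the addresses, and the uplink `eno1` are placeholders:

```text
auto eno1
iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eno1 vlan10

# host management IP on VLAN 10
auto vlan10
iface vlan10 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=10
        address 192.168.10.2/24
        gateway 192.168.10.1
```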
It would be nice if Proxmox added an option to handle patch ports for Open vSwitch in the UI.
I achieved a working configuration by:
iface lo inet loopback
iface enp2s0 inet manual
As stated in the title, I upgraded last night. I made no changes to my network interfaces file, and now my network is inaccessible. The only way I can get networking to work between my LAN network (and none of my others) is reverting to the original network config provided when I installed PVE5...
We use PVE 6 (pve-manager/6.4-13) with an Open vSwitch network configuration and Ceph, and we are now having problems with our network routers: their buffers overflow, since the traffic from the cluster is not marked.
Is it possible to configure traffic marking, for example for Ceph...
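One approach that does not involve Open vSwitch at all is to DSCP-mark the Ceph traffic on each node with iptables, so the routers can prioritize it. A sketch, assuming the default Ceph ports (6789/3300 for the monitors, 6800-7300 for the OSDs) and CS6 as an example class:

```shell
# mark outgoing Ceph monitor traffic
iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 6789,3300 -j DSCP --set-dscp-class CS6
# mark outgoing Ceph OSD traffic
iptables -t mangle -A OUTPUT -p tcp --dport 6800:7300 -j DSCP --set-dscp-class CS6
```

The port numbers and the DSCP class should be adjusted to your cluster and to whatever the routers actually classify on.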
After upgrading from ifupdown to ifupdown2 (as part of PVE 7), I have a problem configuring NetFlow with ovs_extra in /etc/network/interfaces. The following statement worked with ifupdown:
ovs_extra --id=@nf create NetFlow targets=\"10.0.20.20:9995\" active_timeout=60 -- set Bridge vmbr0 netflow=@nf
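As a workaround until the ifupdown2 quoting is sorted out, the same NetFlow configuration can be applied directly with ovs-vsctl once the bridge is up (the target address and timeout are taken from the statement above):

```shell
# attach a NetFlow record to vmbr0
ovs-vsctl -- --id=@nf create NetFlow targets=\"10.0.20.20:9995\" active_timeout=60 \
          -- set Bridge vmbr0 netflow=@nf
# and to remove it again:
ovs-vsctl clear Bridge vmbr0 netflow
```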
I just updated PVE from 6 to 7 with an OpenVSwitch configuration.
With ifupdown it seems like my /etc/network/interfaces is simply ignored.
Installing ifupdown2 solved the problem - though you have to setup a temporary internet connection to download the package.
It's odd that ifupdown2...
After recently upgrading to the latest version we started seeing these errors in the kernel on a few nodes.
We are using openvswitch, the only thing I found using google that might explain the problem is this:
Before the update we were running kernel...
I suppose my knowledge of switching is solid for physical switches, but when I started using Proxmox I felt a little confused :(
Initial setup is quite simple:
2. 1st VM
3. 2nd VM
I created some VMs with Debian on board, installed Open vSwitch from the repo, and...
I have set up a cluster with VMs on different host nodes: h1, h2, h3, h4
I have used an Open vSwitch bridge (vmbr2) defined on h1:
iface vmbr2 inet manual
post-up ovs-vsctl add-port vmbr2 gre1 -- set interface gre1 type=gre options:remote_ip=''ip...
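Fully written out, such a GRE port definition typically looks like the following — the remote address 192.0.2.2 is a placeholder for h2's IP, not taken from the post:

```text
auto vmbr2
iface vmbr2 inet manual
        ovs_type OVSBridge
        post-up ovs-vsctl --may-exist add-port vmbr2 gre1 -- set interface gre1 type=gre options:remote_ip=192.0.2.2
```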
I have 3 servers:
server one, public IP: 220.127.116.11
server two, public IP: 18.104.22.168
server three, public IP: 22.214.171.124
All of them have a private network set up: 10.0.0.0/8
I would like to set up GRE tunnel between them for this private network, using public interfaces (through NAT).
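A full mesh between three nodes can be built by adding one GRE port per peer on each node; a sketch for server one, with placeholder peer addresses and RSTP enabled because a triangle of tunnels would otherwise form a loop:

```shell
# on server one: bridge plus one GRE port per remote node
ovs-vsctl --may-exist add-br vmbr1
ovs-vsctl set Bridge vmbr1 rstp_enable=true
ovs-vsctl --may-exist add-port vmbr1 gre_two   -- set interface gre_two   type=gre options:remote_ip=198.51.100.2
ovs-vsctl --may-exist add-port vmbr1 gre_three -- set interface gre_three type=gre options:remote_ip=198.51.100.3
```

Note that GRE is IP protocol 47, which many NAT devices will not forward; if the nodes are genuinely behind NAT rather than just using their public interfaces, a UDP-based tunnel such as VXLAN (`type=vxlan`) may be easier to get through.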
I am running Open vSwitch on my Proxmox 6.1 server. I installed pfSense as a VM and, for the most part, it is running well. Recently I tried to install OpenMediaVault as a VM, which seemed to have problems connecting to repository mirrors. I assumed that there might have been a problem...
I hope the community could help me with my problem.
I have 3 VMs, each with an OVS bridge "br0". VM1 and VM2 are connected to the Server via GRE tunnels, and to each other via VXLAN tunnels.
Whenever I run:
arping -I br0 10.0.1.10 (or)
arping -I br0...
I have set up GRE tunnel using OpenVSwitch as stated in this tutorial: https://documentation.online.net/en/dedicated-server/tutorials/network/rpn-proxmox-openvswitch
Nodes are communicating between each other using private network.
At first glance everything seems fine - the VMs do see each...
I have noticed that the openvswitch-* packages are not available in the Proxmox buster repo (they are available in the jessie repo, though). Is this intentional? Currently the only way to install Open vSwitch on buster is from the official Debian repos, but that's not...