Well, your config also seems quite fancy to me ^^. I'm now using a MESH network based on RSTP and 10 Gbit over SFP+, which is working very well with much less config, and I could potentially extend it with more hosts at any time. Have a look at this:
First node:
auto vmbr4
iface vmbr4 inet...
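As a rough sketch of the per-node idea (the interface names and the address here are placeholders, not my real values): an OVS bridge carries the two mesh ports and has RSTP enabled, so a broken link is routed around automatically.
# placeholder sketch for one mesh node; names and addresses are examples only
auto enp1s0f0
iface enp1s0f0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr4

auto enp1s0f1
iface enp1s0f1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr4

auto vmbr4
iface vmbr4 inet static
    address 10.15.15.1
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports enp1s0f0 enp1s0f1
    # enable RSTP on the bridge so the mesh loop is broken automatically
    up ovs-vsctl set Bridge ${IFACE} rstp_enable=true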
Hello,
I've got a small setup at Hetzner with a dedicated 10 Gbit/s SFP+ switch (EdgeSwitch 16 XG) to interconnect my hosts privately.
The switch is also 802.1Q capable, but sadly I'm only able to get Q-in-Q working between two hosts, not the three I need.
This is what my config looks like...
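Just to illustrate the stacked-tag idea I'm after (the interface names, VLAN IDs and the address below are made up for the example, not my real config), a Q-in-Q pair of interfaces on plain Linux can be built like this:
# outer (service) tag on the physical NIC; 802.1ad is the usual S-tag protocol
ip link add link enp35s0 name enp35s0.4000 type vlan proto 802.1ad id 4000
# inner (customer) tag stacked on top of the outer one
ip link add link enp35s0.4000 name enp35s0.4000.100 type vlan id 100
ip link set enp35s0.4000 up
ip link set enp35s0.4000.100 up
ip addr add 10.0.100.1/24 dev enp35s0.4000.100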
Hello,
I just replaced the NICs on all 3 of my nodes and upgraded to 10 Gbit SFP+.
All nodes can access each other over SSH without any problems.
But since I replaced the NICs I'm not able to access the nodes via the web GUI, or at least node1 cannot access node3 or node2 anymore.
I only see...
So in general I should do:
systemctl disable pve-firewall.service
ln -s /dev/null /etc/systemd/system/pve-firewall.service
systemctl unmask pve-firewall.service
Please correct me here if I'm wrong?!
Hello,
I have a small PVE cluster in place and a misconfigured firewall, so I cannot access my hosts anymore (lock-out scenario).
I can only boot into Hetzner's rescue mode and access my hard disk to change things, but it seems that many things have changed since PVE 6 and I don't...
dude, you have to do the following:
rm /etc/machine-id
touch /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
so that generating a new machine ID works. With /etc/machine-id left empty, systemd generates a fresh machine ID on the next boot.
Good Morning,
my idea was to put all tenants onto the same vmbr and the same VXLAN and only do the separation by a VLAN tag in the Proxmox GUI.
Currently I'm not able to use this feature even if I set vlan_aware to yes on the vmbr device (Linux bridge).
Currently I have a separate VXLAN with no...
Is there any way to make this work with the "unicast mode" and vlan_aware shown here: https://git.proxmox.com/?p=pve-docs.git;a=blob_plain;f=vxlan-and-evpn.adoc;hb=HEAD?
Currently this fails on my side :(
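For context, this is roughly what I am trying to get working, following the unicast example from the linked doc; the VXLAN ID, addresses and interface names below are only placeholders:
auto vxlan2
iface vxlan2 inet manual
    vxlan-id 2
    vxlan-local-tunnelip 10.0.0.1
    # unicast mode: list the other nodes explicitly instead of using multicast
    vxlan-remoteip 10.0.0.2
    vxlan-remoteip 10.0.0.3

auto vmbr2
iface vmbr2 inet manual
    bridge-ports vxlan2
    bridge-stp off
    bridge-fd 0
    # one vlan-aware bridge; the per-tenant VLAN tag would then be set on the VM NIC in the GUI
    bridge-vlan-aware yes
    bridge-vids 2-4094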
Hello,
I've now spent more than 10 hours trying to configure an OVS bridge between my 3 nodes. The goal is that I only have to assign a VLAN tag to my VMs for the communication to work. But apparently the private switch that Hetzner offers is a pure Layer 2...
I have a similar problem. I've already gone through 10 different guides and nothing seems to help here; no matter whether I use self-built images or official ones, the cloud-init config isn't pulled at all on first boot.
Just to make things clear here: in general it's not a good idea to have the public traffic, Corosync, the private VM network etc. all running on a single physical NIC with VLANs, as this can lead to time-outs, fencing under high load or other odd behaviour. If you have around 30-40€ more to spend...
Actually the simplest way to add a VLAN-based public routed subnet, like a /28 you can buy additionally for your vSwitch, is these lines:
auto lo
iface lo inet loopback
iface lo inet6 loopback
auto enp35s0
iface enp35s0 inet static
address xxx.xxx.xxx.xxx
netmask...
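As a rough, generic sketch of the VLAN part on top of that (the VLAN ID and address are placeholders; Hetzner vSwitch VLAN IDs live in the 4000-4095 range and the vSwitch leg needs MTU 1400):
auto enp35s0.4009
iface enp35s0.4009 inet static
    # one usable IP from the routed /28
    address xxx.xxx.xxx.xxx
    netmask 255.255.255.240
    mtu 1400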
Hello,
I would like to know the risk of running an LXC container with the following ruleset in a shared public environment:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.hook.autodev: sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev...
Finally managed to solve the problem. It was all somehow very strange though. I tore down the OVS bridge again and went with a VLAN-aware Linux bridge instead. Works fine so far. It's only a local host anyway ^^
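In case it helps anyone later, the VLAN-aware Linux bridge boils down to something like this (the port name is a placeholder); the VLAN tag is then simply set on the VM's network device:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp35s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094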
Hello,
unfortunately I had to learn the hard way yesterday that after the update to kernel 4.15.18-4-pve I can no longer get routing via OVS/VLAN to work. Simply nothing gets communicated. Everything shows green in the logs, but I can't get a ping through. Changed...
Hello,
I also have a Ryzen 1700 CPU and I also had some performance issues. After some experimenting I found out that I got rid of all the performance issues by setting the CPU type to "qemu64". Have a try ;)
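If you prefer the CLI over the GUI, something like this should do it (VM ID 100 is just an example):
qm set 100 --cpu qemu64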
Greetings