Might just be able to follow this example and create a custom Security Group named proxmox, add our 2x corosync rings there, and follow up with a custom default drop/reject policy. Will go test this...
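Roughly what I have in mind for /etc/pve/firewall/cluster.fw, if my reading of the rule syntax is right (the two subnets below are just placeholders for our ring networks):

[group proxmox]
IN ACCEPT -source 10.10.1.0/24 -p udp -dport 5404:5405 # corosync ring0
IN ACCEPT -source 10.10.2.0/24 -p udp -dport 5404:5405 # corosync ring1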
If I wanted to swap the UDP ports for ring 0 and ring 1, so that ring 0 (picked up as the default localnet) gets the default multicast UDP ports 5404:5405, how could I then reconfigure the corosync networks without stopping the running VMs and without getting NMIs on the hypervisor nodes?
then I could at least...
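As I understand it, each ring gets its UDP ports from its mcastport (corosync uses mcastport and mcastport-1), so swapping them would mean something like this in the totem section of corosync.conf (addresses are placeholders for our networks):

totem {
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.1.0
    mcastaddr: 239.192.1.1
    mcastport: 5405   # ring0 then uses udp 5404:5405 (the defaults)
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.2.0
    mcastaddr: 239.192.1.2
    mcastport: 5407   # ring1 then uses udp 5406:5407
  }
}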
Weird, as I can create/add new groups just fine from the Web UI; in fact, cluster.fw initially got created by the Web UI (blowing up the cluster, as I forgot to accept input before enabling the FW :)
Got a non-standard corosync config here, with a static nodelist and 2x rings, each across its own redundant switch network:
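Roughly like this, with rrp_mode passive for the two rings (node names and addresses are made-up placeholders; the real config has all 7 nodes):

nodelist {
  node {
    name: pve01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.1.11
    ring1_addr: 10.10.2.11
  }
  node {
    name: pve02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.1.12
    ring1_addr: 10.10.2.12
  }
}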
This poses some issues when wanting to use the firewall, as the PVEFW-HOST-IN/OUT chains show:
Could I modify these auto-generated standard chains to allow for both our corosync rings...
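(For reference, I've just been checking what the generated host chains currently allow with:

$ iptables-save | grep -E 'PVEFW-HOST-(IN|OUT)'

though I realise pve-firewall regenerates those chains itself, hence the Security Group idea above.)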
Whenever I try to rename a Security Group I get this error in a popup window:
detected modified configuration - file changed by other user? Try again. (500)
Leaves me to modify /etc/pve/firewall/cluster.fw manually :/
Hm, it partly seems to work; activated MQ on a CentOS 6.7 VM (VMID 401):
But with a Check Point GAiA FW VM (VMID 500, running an older 2.6.18-92cpx86_64 kernel) this fails:
and I see the following in the hypervisor log from time to time:
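My guess is this is a guest-side limitation: as far as I know, virtio-net multiqueue needs a fairly new guest kernel (the feature appeared around Linux 3.8), so the 2.6.18-based GAiA guest probably simply can't use it. On a guest that does support it, it shows up like this (eth0 and the queue count are just examples):

$ ethtool -l eth0              # show supported/current channel counts
$ ethtool -L eth0 combined 4   # enable 4 queues in the guest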
Got a Check Point firewall VM which, at relatively high workload, maxes out its vCore 0 at 95%+ kernel-land usage, so I am looking to possibly turn on multiqueue NICs.
Only wondering if we need to do something on the KVM backend for this to work. Hints appreciated!
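What I'm planning to try on the PVE side, if I've read the docs right (VMID, MAC and queue count below are just examples; the MAC/bridge should of course match the existing net0 line):

$ qm set 500 -net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4

plus the matching ethtool -L inside the guest.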
TIA
I maybe could... only it shows no better performance-wise :confused:; it might be the older VM guest drivers vs. the newer KVM that's causing this 5x CPU hit. Also dunno if this NIC, also unsupported by PA (they only list e1000), is stable...
It's not the same with our other VMs running CentOS 6.7 w/elrepo kernel-ml 4.5.1.
I assume the Linux distro under the Palo Alto VM-Series is also RedHat/CentOS 6 based, only older, maybe v6.3; at least it's based on kernel 2.6.32. The PA VM-200 crashes randomly when the vNICs are Intel e1000 emulation...
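For reference, switching the vNIC model is just a change to the VM's net0 line in /etc/pve/qemu-server/<vmid>.conf (MAC/bridge below are placeholders):

net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0    # PA-supported model, but the VM crashes randomly
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0   # not on PA's supported list, stability unknown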
Upgraded 4 of 7 nodes today, only to discover that two VMs in particular (Palo Alto VM-200 FWs) use much more CPU than they did on PVE 4.1 :(
Pic 1 here shows VM usage over the last 24 hours and the jump when migrated onto 4.2.22 around 17:00; the last high jump is me introducing more load on the FW...
Improving, but not quite there yet. I boot from the VE ISO, break to a prompt, configure a simple network to get access to the world, and then do this:
$ vgchange -a y pve                           # activate the pve LVM volume group
$ mkdir /tmp/a; mount /dev/pve/root /tmp/a    # mount the root LV of the installed system
$ mount -o bind /sys /tmp/a/sys; mount -o bind /dev /tmp/a/dev; mount -o bind /proc...
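The last command got cut off above; the remainder is the usual bind-mount-and-chroot routine, something like:

$ mount -o bind /proc /tmp/a/proc
$ chroot /tmp/a /bin/bash

and then the actual repair work from inside the chroot.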