I see the Firewall is for Proxmox nodes and VMs. Could we get away without using a main entry-point firewall? Or do we still need one between the cluster and the Internet?
...
...- New 3.10 Kernel (based on RHEL7, for now without OpenVZ support)
...
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@proxmox:~# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 2.6.32-31-pve)
pve-manager: 3.2-18 (running version: 3.2-18/e157399a)
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-14
qemu-server: 3.1-28
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-7
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-1
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
Hi,
the new console is very good!
One question - what about drbd8-utils for the 3.10 kernel (8.4.3)?
Is self-compiling necessary, or will you provide the package in the near future?
Udo
Maybe you could make the -w switch configurable (with a big warning) as well. I will try some other tokens ASAP.
Maybe I'm missing a point, but it seems that you can configure only one OTP token type. Maybe using something like this https://code.google.com/p/mod-authn-otp/wiki/UsersFile to set up the token configuration would be better, since you can add as many different tokens as you like.
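For illustration, this is roughly what a mod-authn-otp users file looks like as I read that wiki page - the usernames, PINs and keys below are made up, and I'm quoting the column layout and token-type names from memory, so check the page before relying on it:

# token-type   username   PIN    token key (hex)
HOTP           alice      1234   <hex key>
HOTP/T30       bob        5678   <hex key>
MOTP           carol      9999   <hex key>

That way every user can carry a different token type, which is exactly what a single global token-type setting can't express.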
m.ardito,
Each VM in Proxmox has a unique ID number.
If I install the new kernel on only one of the servers, I get a flood of errors in the logs, but the cluster keeps working:
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee 2ef 2f0 2f1 2f2
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee 2ef 2f0 2f1 2f2 2f3 2f4
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee 2ef 2f0 2f1 2f2 2f3 2f4 2f5 2f6
...
Jul 21 14:37:45 proxmox3 rsyslogd-2177: imuxsock begins to drop messages from pid 2939 due to rate-limiting
Jul 21 14:37:57 proxmox3 rsyslogd-2177: imuxsock lost 33 messages from pid 2939 due to rate-limiting
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] CLM CONFIGURATION CHANGE
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] New Configuration:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.2)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.4)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Left:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Joined:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] CLM CONFIGURATION CHANGE
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] New Configuration:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.2)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.4)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Left:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Joined:
Jul 21 14:37:57 proxmox3 corosync[2939]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 21 14:37:57 proxmox3 corosync[2939]: [CPG ] chosen downlist: sender r(0) ip(10.0.0.2) ; members(old:3 left:0)
Jul 21 14:37:57 proxmox3 corosync[2939]: [MAIN ] Completed service synchronization, ready to provide service.
....
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] CLM CONFIGURATION CHANGE
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] New Configuration:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Left:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Joined:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] CLM CONFIGURATION CHANGE
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] New Configuration:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Left:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Joined:
Jul 22 20:15:06 proxmox3 corosync[2802]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 22 20:15:06 proxmox3 corosync[2802]: [CPG ] chosen downlist: sender r(0) ip(10.0.0.3) ; members(old:1 left:0)
Jul 22 20:15:06 proxmox3 corosync[2802]: [MAIN ] Completed service synchronization, ready to provide service.
...
I added this:

folder: {
    text: gettext('Folder View'),
    groups: ['type']
},

Stupid hack, but the only way I found to sort VMs by type (in the name):

name: {
    text: gettext('Name View'),
    groups: ['name']
},
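For anyone wanting to try the same thing: as far as I can tell these view definitions live in PVE.form.ViewSelector (shipped to the browser inside pvemanagerlib.js), so the addition would sit next to the stock entries roughly like this - the surrounding entries are from memory and may differ in your version, and whether 'name' works as a grouping attribute is exactly the hack part:

// default view definitions in PVE.form.ViewSelector (pvemanagerlib.js)
var default_views = {
    server: {
        text: gettext('Server View'),
        groups: ['node']
    },
    folder: {
        text: gettext('Folder View'),
        groups: ['type']
    },
    // custom addition: group the resource tree by guest name
    name: {
        text: gettext('Name View'),
        groups: ['name']
    }
};

Keep in mind that pvemanagerlib.js belongs to the pve-manager package, so the change gets overwritten on every update and has to be reapplied.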
Zones
The Proxmox VE firewall groups the network into the following logical zones:
- host: traffic from/to a cluster node
- vm: traffic from/to a specific VM
For each zone, you can define firewall rules for incoming and/or outgoing traffic.
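To make that concrete, a small sketch of what rules for the two zones could look like in the .fw config files - untested, and the exact file locations and syntax may still change during the beta:

# host zone (e.g. host.fw for a node) - traffic from/to the node itself
[RULES]
IN SSH(ACCEPT) -source 192.168.2.0/24   # SSH only from the management net
IN ACCEPT -p tcp -dport 8006            # Proxmox web GUI

# vm zone (e.g. /etc/pve/firewall/<VMID>.fw) - traffic from/to one guest
[OPTIONS]
enable: 1
policy_in: DROP                         # drop inbound by default
[RULES]
IN HTTP(ACCEPT)                         # but allow HTTP to this web server VM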
I'm doing the same in many cases. It gives me a lot more flexibility and more functionality, since I can install my favourite firewall appliance if I want (sometimes I do). I'll keep it that way. A firewall on the PVE host is IMO useful for smaller installations, for beginners, or when a possibly expensive external firewall is not feasible. But I don't really see the point of reinventing the wheel - good, well-maintained iptables-based firewall packages already exist, and PVE could just add a wrapper or maybe an easy-to-use web interface around one of them.
Security groups are 'defined' in the cluster.fw file. But you can 'use' them inside a VM firewall configuration.

Now I read it again and see that the group has to be 'used' in the VM's .fw file - that's why a VMID is required...
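In other words (my reading of it, sketched from the announcement - the syntax may differ slightly in the beta): the group body lives once in cluster.fw, and each VM's .fw file only pulls it in with a GROUP rule, which is why you edit it per VMID:

# /etc/pve/firewall/cluster.fw - define the group once for the whole cluster
[group webserver]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443

# /etc/pve/firewall/<VMID>.fw - 'use' the group on a particular VM
[RULES]
GROUP webserver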
To isolate groups of VMs from each other I have been using a virtualized firewall with bridges. Is it safe to say that with this PVE Firewall I no longer need to install a virtual firewall between the VMs and the Internet?