New packages in pvetest! Firewall, Html5 Console, Two-factor authentication

I see the Firewall is for Proxmox nodes and VMs. Could we get away without using a main entry-point firewall, or do we still need one between the cluster and the Internet?
 
Spirit,
I just tried the method you mentioned, but it is not working quite as expected.
I set up the allowed IP in the <vmid>.conf file, restarted the machine, and then changed the IP address inside the VM (CentOS 6.5). The VM wasn't able to ping "the internet", but it was still pingable from the outside world.
Basically, outgoing connections are blocked (filtered), but incoming connections still work, even though the IP is not allowed.
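For reference, a per-VM rule set of the kind described above might look like the sketch below. This is only an illustration: the /etc/pve/firewall/<vmid>.fw path and [RULES] section follow the convention discussed later in this thread, and the addresses are made up - check the actual syntax against your pve-firewall version.

```
# /etc/pve/firewall/<vmid>.fw  -- sketch only, example addresses
[OPTIONS]
enable: 1

[RULES]
# allow the permitted address in both directions; everything else is filtered
IN  ACCEPT -source 192.168.1.10
OUT ACCEPT -dest 192.168.1.10
```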
 
How to install 3.10 Kernel?

Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@proxmox:~# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 2.6.32-31-pve)
pve-manager: 3.2-18 (running version: 3.2-18/e157399a)
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-14
qemu-server: 3.1-28
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-7
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-1
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
 
Read the whole thread - I already posted this.

But I'll post it again for you:

> apt-get install pve-kernel-3.10.0.3-pve

Please note: there is no OpenVZ support in this kernel.
 
Hi,
the new console is very good!

One question - what about drbd8-utils (8.4.3) for the 3.10 kernel?
Is self-compiling necessary, or will you provide the package in the near future?

Udo

AFAIK there are plans for such a package. Until then, compile it yourself - if you run into problems, report them in a new thread.
 
Most of us already know this, but just a gentle reminder to all: DO NOT upgrade your kernel or run test packages in a production environment. As has already been pointed out a few times, the new kernel does not support OpenVZ. If you have production OpenVZ containers in your cluster, they won't work any more!
 
Maybe you could make the -w switch configurable (with a big warning) as well. I will try some other tokens ASAP.

I don't really want to increase the time span for security reasons - tokens should be in sync. Isn't it possible to resync those tokens?

Maybe I'm missing a point, but it seems that you can configure only one OTP token type. Using something like https://code.google.com/p/mod-authn-otp/wiki/UsersFile to set up the token configuration might be better, since you could add as many different tokens as you like.

Thanks for that interesting link. You can currently configure one token type per realm.
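For background on the window discussion above: a TOTP code is an HMAC over the current 30-second time step, and the -w window is the number of adjacent steps the verifier will also accept, which is what absorbs clock drift between server and token. Here is a minimal sketch of the standard RFC 4226/6238 construction (function names are my own, and this is not Proxmox's actual implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret_b32, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, dynamic truncation."""
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp_verify(secret_b32, code, window=1, step=30, now=None):
    """Accept codes from the current 30s step and +/- `window` adjacent steps.

    A larger window tolerates more clock drift but widens the attack surface,
    which is the security trade-off discussed above.
    """
    t = int((time.time() if now is None else now) // step)
    return any(hmac.compare_digest(hotp(secret_b32, t + off), code)
               for off in range(-window, window + 1))
```

If the token's clock drifts beyond the window, no amount of retrying helps - the shared counter/clock has to be resynchronized, which is why resync (rather than a permanently large window) is the usual answer.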
 
OHHH MY GOD!

PVE Firewall!!!

My social life is ended.

I will spend days and days in front of the computer testing this!

I'm so happy with these new updates!

Thank you proxmox staff!
 
m.ardito,
Each VM in Proxmox has a unique ID number. Usually it starts at 100.
If you have already created a few VMs, their VMIDs can be found in the VM list.
For example, in a new Proxmox installation, the first VM you create will be auto-assigned VMID 100.
The next VM will be 101, then 102, etc.
Just take a look at your existing VMs and you will find these numbers.
 
m.ardito,
Each VM in Proxmox has a unique ID number

:D yes, look at my post count... I know that...

What I was missing is that the wiki section is talking about setting up GROUPS,
and I thought there was a .fw config for GROUPS, or something, so VMID made no sense to me.

Now I read again and see that the group has to be defined in the VM .fw, that's why a VMID is required...
I'm just confused trying to understand the overall setup,
but thanks for prompting me to re-read the sentence... ;)

Marco
 
1) 5-node cluster dies with the new kernel.

I tested the packages built from git sources a bit before they appeared in pvetest (also with the kernel from pvetest, and built from sources with different NIC driver versions - all the same).
After updating the kernels on the 5-node cluster to 3.10.0-3, the cluster split into 5 no-quorum clusters with the following spam in the logs:
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee 2ef 2f0 2f1 2f2
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee 2ef 2f0 2f1 2f2 2f3 2f4
Jul 21 14:36:11 proxmox3 corosync[2939]: [TOTEM ] Retransmit List: 2ec 2ed 2ee 2ef 2f0 2f1 2f2 2f3 2f4 2f5 2f6
...
Jul 21 14:37:45 proxmox3 rsyslogd-2177: imuxsock begins to drop messages from pid 2939 due to rate-limiting
Jul 21 14:37:57 proxmox3 rsyslogd-2177: imuxsock lost 33 messages from pid 2939 due to rate-limiting
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] CLM CONFIGURATION CHANGE
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] New Configuration:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.2)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.4)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Left:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Joined:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] CLM CONFIGURATION CHANGE
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] New Configuration:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.2)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] #011r(0) ip(10.0.0.4)
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Left:
Jul 21 14:37:57 proxmox3 corosync[2939]: [CLM ] Members Joined:
Jul 21 14:37:57 proxmox3 corosync[2939]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 21 14:37:57 proxmox3 corosync[2939]: [CPG ] chosen downlist: sender r(0) ip(10.0.0.2) ; members(old:3 left:0)
Jul 21 14:37:57 proxmox3 corosync[2939]: [MAIN ] Completed service synchronization, ready to provide service.
....
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] CLM CONFIGURATION CHANGE
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] New Configuration:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Left:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Joined:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] CLM CONFIGURATION CHANGE
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] New Configuration:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] #011r(0) ip(10.0.0.3)
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Left:
Jul 22 20:15:06 proxmox3 corosync[2802]: [CLM ] Members Joined:
Jul 22 20:15:06 proxmox3 corosync[2802]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 22 20:15:06 proxmox3 corosync[2802]: [CPG ] chosen downlist: sender r(0) ip(10.0.0.3) ; members(old:1 left:0)
Jul 22 20:15:06 proxmox3 corosync[2802]: [MAIN ] Completed service synchronization, ready to provide service.
...
If I install the new kernel on only one of the servers, I get an error flood in the logs but the cluster works.
As soon as I update the kernel on more than one server, the cluster breaks and splits...
The 2.6.32 kernel works perfectly, no errors at all.

2) A server with the new packages does not work with the old 3.2 servers: migration is not working (different qemu versions?), and the HTML5 client from a new server does not work on old servers (different auth type?).

3) What about making it possible to manage all VM firewalls from the datacenter level? It would be nice to see a list of all 'firewalled' VMs in one place.

4) Pleeeease add some kind of custom, editable folders to the left-side 'view' to group VMs... It's nearly impossible to administrate >100 VMs in a cluster without manual grouping and sorting...
Right now, to sort VMs as I wish rather than by VMID, I use this hack:
- special sortable VM names (win-server-01, win-server-02, vdi-win7-01, vdi-win7-02, test-centos7-201407 ..etc..)
- one more 'view' type, 'by name': in pvemanagerlib.js, after

    folder: {
        text: gettext('Folder View'),
        groups: ['type']
    },

I added this:

    name: {
        text: gettext('Name View'),
        groups: ['name']
    },

A stupid hack, but the only way for me to sort VMs by the type encoded in the name.
 
To isolate groups of VMs from each other I have been using a virtualized firewall with bridges. Is it safe to say that with this PVE firewall I no longer need to install a virtual firewall between the VMs and the internet?
 
Zones

The Proxmox VE firewall groups the network into the following logical zones:

  • host: traffic from/to a cluster node
  • vm: traffic from/to a specific VM
For each zone, you can define firewall rules for incoming and/or outgoing traffic.

If I understand this correctly: the settings for the host zone are in /etc/pve/firewall/cluster.fw, and the settings for the vm zone are in /etc/pve/firewall/{VMID}.fw?
 
To isolate groups of VMs from each other I have been using a virtualized firewall with bridges. Is it safe to say that with this PVE firewall I no longer need to install a virtual firewall between the VMs and the internet?
I'm doing the same in many cases. It gives me a lot more flexibility and more functionality, since I can install my favourite firewall appliance if I want to (sometimes I do). I'll keep it that way. A firewall on the PVE host is, IMO, useful for smaller installations, for beginners, or when a possibly expensive external firewall is not feasible. But I don't really see the point of reinventing the wheel - good, well-maintained iptables-based firewall packages already exist, and PVE could just add a wrapper or maybe an easy-to-use web interface around one of them.

OTOH, the new features look really great. I haven't had the time to check them out but surely will. The noVNC integration is especially interesting and useful. PVE is just getting better with every release. I hope they can keep up, since they're using their own specific, albeit open-sourced, solution for every aspect of the VM stack.
 
Now I read again and see that the group has to be defined in the VM .fw, that's why a VMID is required...
Security groups are 'defined' in the cluster.fw file. But you can 'use' them inside a VM firewall configuration.
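Sketched out, that split looks roughly like this (illustration only - the group name 'webservers' and the ports are invented; the file locations are the ones mentioned earlier in the thread, so verify the exact syntax against your pve-firewall version):

```
# /etc/pve/firewall/cluster.fw -- security groups are defined here
[group webservers]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443

# /etc/pve/firewall/<vmid>.fw -- and used here, per VM
[RULES]
GROUP webservers
```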
 
To isolate groups of VMs from each other I have been using a virtualized firewall with bridges. Is it safe to say that with this PVE firewall I no longer need to install a virtual firewall between the VMs and the internet?

Yes.
 
Hi,

I have just played around with the new amazing firewall features. But it looks like currently only IPv4 is supported. I guess you're planning IPv6 as well? :)

Thanks, Martin
 
