Admin PC and PVE cluster separated by FW, what ports to open for full admin access?

stefws
Trying to set up a PoC test lab of PVE here.

I'd appreciate any pointers on how to configure a FW between an admin PC/WS and a PVE 3.3 cluster in order to have full admin access.

Found this: http://pve.proxmox.com/wiki/Ports
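
From that page I gather the admin-facing ports boil down to something like the following (my sketch for an iptables-based FW; $ADMIN and $PVE_NET are placeholder addresses, and the port numbers should be double-checked against the wiki for your version):

# admin WS -> PVE nodes, per http://pve.proxmox.com/wiki/Ports
iptables -A FORWARD -s $ADMIN -d $PVE_NET -p tcp --dport 8006      -j ACCEPT   # web GUI / REST API
iptables -A FORWARD -s $ADMIN -d $PVE_NET -p tcp --dport 22        -j ACCEPT   # SSH
iptables -A FORWARD -s $ADMIN -d $PVE_NET -p tcp --dport 5900:5999 -j ACCEPT   # VNC console
iptables -A FORWARD -s $ADMIN -d $PVE_NET -p tcp --dport 3128      -j ACCEPT   # SPICE proxy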

But when accessing the web admin panel on one node, I cannot reach the other nodes through it; a connection error (5xx) keeps spinning. I assume I might need more than the current access to port 8006 on all cluster nodes :)

/Newbie

BTW! I changed the hardcoded port 8006 in pveproxy to listen on another port; that seems to work fine.

PS! Just bought the Mastering eBooks+print; nothing about this in there, IMHO.
 
Re: Admin PC and PVE cluster separated by FW, what ports to open for full admin access

Seems my 3.3 test lab is having CGI/script issues returning 5xx HTTP errors :(

From the pveproxy access log I see things like this:

<IP> - root@pam [30/Jan/2015:00:25:56 +0100] "GET /api2/json/nodes/node2/ceph/pools HTTP/1.1" 500 13
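
To see the actual error body behind that 500, the call can be replayed with curl using ticket auth as per the API docs (a sketch; credentials and node names are placeholders, and the ticket value must be copied out of the first response):

# get an auth ticket
curl -k -d 'username=root@pam' -d 'password=SECRET' https://node1:8006/api2/json/access/ticket
# replay the failing call with the ticket cookie to see the error body
curl -k -b "PVEAuthCookie=<ticket>" https://node1:8006/api2/json/nodes/node2/ceph/pools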

Wondering where the doc root of pveproxy is...
 
Re: Admin PC and PVE cluster separated by FW, what ports to open for full admin access

Hmm, weird: the CLI seems to work:

root@node2:/usr/bin# ceph osd lspools
5 rbd_data,6 vm_images,
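
Since the CLI works, pvesh might help tell the REST handler apart from the proxy layer; it calls the same API code locally, without going through pveproxy (same path as the failing URL):

pvesh get /nodes/node2/ceph/pools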

Wondering if the doc root is a pseudo one embedded in pveproxy...
 
Re: Admin PC and PVE cluster separated by FW, what ports to open for full admin access

The REST ops are implemented in Perl; this op, among others, must be failing somehow, hmm:

use base qw(PVE::RESTHandler);

__PACKAGE__->register_method ({
    name => 'index',
    path => '',
    method => 'GET',
    description => "Pool index.",
    permissions => {
        description => "List all pools where you have Pool.Allocate or VM.Allocate permissions on /pool/<pool>.",
        user => 'all',
    },
    parameters => {
        additionalProperties => 0,
        properties => {},
    },
    returns => {
        type => 'array',
        items => {
            type => "object",
            properties => {
                poolid => { type => 'string' },
            },
        },
        links => [ { rel => 'child', href => "{poolid}" } ],
    },
    code => sub {
        my ($param) = @_;

        my $rpcenv = PVE::RPCEnvironment::get();
        my $authuser = $rpcenv->get_user();

        my $res = [];

        my $usercfg = $rpcenv->{user_cfg};

        foreach my $pool (keys %{$usercfg->{pools}}) {
            next if !$rpcenv->check_any($authuser, "/pool/$pool", [ 'Pool.Allocate', 'VM.Allocate' ], 1);

            my $entry = { poolid => $pool };
            my $data = $usercfg->{pools}->{$pool};
            $entry->{comment} = $data->{comment} if defined($data->{comment});
            push @$res, $entry;
        }

        return $res;
    }});
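
FWIW, judging from the code this looks like the generic pool index from PVE::API2::Pool (it lists resource pools from user.cfg), not the ceph-specific handler behind /nodes/<node>/ceph/pools. Either way, pvesh can exercise a handler locally to rule out the handler code itself:

pvesh get /pools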
 
Re: Admin PC and PVE cluster separated by FW, what ports to open for full admin access

Then, when the REST call is proxied to another node in my cluster, I get something like this:

<ip> - root@pam [30/Jan/2015:01:07:18 +0100] "GET /api2/json/nodes/node3/status HTTP/1.1" 595 -

How do you debug pveproxy REST calls? Is it possible to turn on some level of debug/tracing?
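
(595 isn't a standard HTTP status; AnyEvent::HTTP, which pveproxy uses for proxied requests, generates 59x pseudo-codes for connection-level failures, which would fit a blocked or unreachable inter-node port.) One way to get more detail is to run pveproxy in the foreground with debugging, assuming the usual PVE daemon -debug flag (check 'pveproxy help' on your version):

service pveproxy stop
pveproxy start -debug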
 
Re: Admin PC and PVE cluster separated by FW, what ports to open for full admin access

Turned out that the REST API2 port is hardcoded in several files:

root@node3:~# find /usr/share/perl5/PVE/ -type f -exec grep -l 8006 {} \;
/usr/share/perl5/PVE/API2Client.pm
/usr/share/perl5/PVE/Firewall.pm
/usr/share/perl5/PVE/HTTPServer.pm
root@node3:~# find /usr/bin -type f -name 'pve*' -exec grep -l 8006 {} \;
/usr/bin/pvebanner
/usr/bin/pveproxy

Altering these as well makes things work much better :)
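
For the record, a blunt sketch of switching them all at once (hypothetical new port 8443; package upgrades will overwrite these files again, so this is test-lab only):

# replace the hardcoded API port in the PVE modules and daemons (back up first)
for f in /usr/share/perl5/PVE/API2Client.pm /usr/share/perl5/PVE/Firewall.pm /usr/share/perl5/PVE/HTTPServer.pm /usr/bin/pvebanner /usr/bin/pveproxy; do
    cp -a "$f" "$f.orig"
    sed -i 's/8006/8443/g' "$f"
done
service pveproxy restart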