Cannot access the web interface on a standalone PVE 4.2 server

acidrop

Renowned Member
Jul 17, 2012
Hello,

Just installed PVE 4.2 on top of a blank Debian Jessie.

I followed this link from the wiki to do that.

Initially I could access the web interface without any issues, but after installing Open vSwitch on the same machine I can't anymore, although I can still access the machine normally via SSH.

Below are some logs:

Code:
root@debian:/# cat /etc/hosts
127.0.0.1    localhost
192.168.0.38    pve.domain.local pve pve.localhost


# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
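
As far as I know, pmxcfs (the pve-cluster service) refuses to start when the hostname resolves to the loopback address instead of the node's own IP, so the mapping above can be double-checked like this (plain glibc/Debian tools, nothing PVE-specific):

```shell
# Check what the hostname actually resolves to; with the /etc/hosts above
# in effect this should print 192.168.0.38, not 127.0.0.1.
getent hosts "$(hostname)"
hostname --ip-address
```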

Code:
root@debian:/# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback


allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet static
    address  192.168.0.38
    netmask  255.255.255.0
    gateway  192.168.0.1
    ovs_type OVSBridge
    ovs_ports eth0
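
Since vmbr0 is now an OVS bridge rather than a Linux bridge, it is worth confirming it actually came up with the address after the switch to Open vSwitch; these are standard openvswitch-switch and iproute2 commands:

```shell
# Show the OVS database view of the bridge and its ports,
# then confirm vmbr0 really carries 192.168.0.38.
ovs-vsctl show
ip addr show vmbr0
```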


Code:
root@debian:/# tail -n 20 /var/log/syslog
May  1 11:15:26 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:29 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:29 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:29 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:31 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:31 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:31 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:34 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:34 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:34 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:53 debian systemd-timesyncd[1544]: interval/delta/delay/jitter/drift 32s/+17.573s/0.037s/6.229s/+66ppm
May  1 11:15:53 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:54 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:54 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:56 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:56 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:56 debian pve-ha-lrm[2278]: ipcc_send_rec failed: Connection refused
May  1 11:15:59 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:59 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
May  1 11:15:59 debian pve-ha-crm[2255]: ipcc_send_rec failed: Connection refused
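
If I read "ipcc_send_rec failed: Connection refused" correctly, nothing is listening on the pmxcfs socket, i.e. the pve-cluster service is down. Its state and the reason it stopped can be checked like this (standard systemd tooling, since Jessie):

```shell
# Is pmxcfs (pve-cluster) running, and why did it stop?
systemctl status pve-cluster
journalctl -u pve-cluster --no-pager -n 20
```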

Code:
root@debian:/# tail -n 20 /var/log/messages
May  1 10:45:57 debian kernel: [   38.470716] RPC: Registered udp transport module.
May  1 10:45:57 debian kernel: [   38.470718] RPC: Registered tcp transport module.
May  1 10:45:57 debian kernel: [   38.470720] RPC: Registered tcp NFSv4.1 backchannel transport module.
May  1 10:45:57 debian kernel: [   38.741325] FS-Cache: Loaded
May  1 10:45:57 debian kernel: [   39.031903] FS-Cache: Netfs 'nfs' registered for caching
May  1 10:45:57 debian kernel: [   39.453051] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
May  1 10:45:57 debian kernel: [   40.102825] audit: type=1400 audit(1462095951.640:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1902 comm="apparmor_parser"
May  1 10:45:57 debian kernel: [   40.102890] audit: type=1400 audit(1462095951.640:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=1902 comm="apparmor_parser"
May  1 10:45:57 debian kernel: [   40.102898] audit: type=1400 audit(1462095951.640:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=1902 comm="apparmor_parser"
May  1 10:45:57 debian kernel: [   40.117097] audit: type=1400 audit(1462095951.656:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1902 comm="apparmor_parser"
May  1 10:45:57 debian kernel: [   42.760905] ip_tables: (C) 2000-2006 Netfilter Core Team
May  1 10:45:57 debian kernel: [   43.153938] softdog: Software Watchdog Timer: 0.08 initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
May  1 10:45:57 debian kernel: [   43.341770] cgroup: new mount options do not match the existing superblock, will be ignored
May  1 10:45:57 debian kernel: [   43.838557] cgroup: new mount options do not match the existing superblock, will be ignored
May  1 10:45:58 debian kernel: [   47.044217] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
May  1 10:45:58 debian kernel: [   47.045809] NFSD: starting 90-second grace period (net ffffffff81f05900)
May  1 10:46:15 debian rsyslogd-2007: action 'action 17' suspended, next retry is Sun May  1 10:46:45 2016 [try http://www.rsyslog.com/e/2007 ]
May  1 10:46:16 debian kernel: [   63.757778] ip6_tables: (C) 2000-2006 Netfilter Core Team
May  1 10:46:17 debian kernel: [   64.040713] ip_set: protocol 6
May  1 10:46:48 debian pve-manager[2280]: <root@pam> starting task UPID:debian:000008F0:00002549:5725D088:startall::root@pam:

It seems to me that the pve-cluster service is somehow not running, but since this is just a standalone node, is it even necessary in the first place?
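
For completeness, this is what I would try if pve-cluster turns out to be required even without a cluster (service names taken from a standard PVE 4.x install; pveproxy serves the web interface out of /etc/pve, which pmxcfs provides):

```shell
# pmxcfs (pve-cluster) mounts /etc/pve; the web stack depends on it,
# so restart it first, then the services that sit on top of it.
systemctl restart pve-cluster
systemctl restart pvedaemon pveproxy pvestatd
```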

Thank you,
 
