Trouble with rebooting and WebGUI

DrillSgtErnst

Active Member
Jun 29, 2020
So I have a single PVE host running 6.2-9.
Bash:
root@pve:~# pveversion
pve-manager/6.2-9/4d363c5b (running kernel: 5.4.44-2-pve)

So my first problem is that I can't reboot the server, because it hangs on:
Bash:
[FAILED] Failed deactivating swap /dev/pve/swap
[***   ] (1 of 5) **I could not get No. 1**
[ ***  ] (2 of 5) A stop Job is running for /dev/disk/by-uuid/96... (2min 43s / no limit)
[  *** ] (3 of 5) A stop Job is running for /dev/dm-0 (2min 43s / no limit)
[  *** ] (4 of 5) A stop Job is running for /dev/disk/by-id/dm-name-pve-swap (2min 43s / no limit)
[   ***] (5 of 5) A stop Job is running for /dev/mapper/pve-swap (2min 43s / no limit)

This goes on for 60 minutes, so I have to cold boot the machine.
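One thing I plan to try before the next reboot (just an idea, not verified on this box yet): deactivating swap manually first, to see whether the shutdown still hangs once the swap volume is no longer in use.
Bash:
# assumption: the hang comes from systemd failing to deactivate /dev/pve/swap
# turn off all swap before issuing the reboot
swapoff -a
# confirm nothing is listed anymore
cat /proc/swaps
# then try the reboot
reboot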


The second problem is that I can't access the GUI anymore; SSH is fine.
I was working in the storage tab, and while adding storage the GUI disappeared.
I have read quite a few reports of this problem, but none of them helped me.

Bash:
root@pve:~# systemctl restart pveproxy
Job for pveproxy.service failed because the control process exited with error code.
Bash:
root@pve:~# systemctl status pveproxy.service
● pveproxy.service - PVE API Proxy Server
   Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset:
   Active: failed (Result: exit-code) since Fri 2020-07-24 15:06:16 CEST; 11min
  Process: 22277 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited,
  Process: 22289 ExecStart=/usr/bin/pveproxy start (code=exited, status=255/EXCE

Jul 24 15:06:16 pve systemd[1]: pveproxy.service: Service RestartSec=100ms expir
Jul 24 15:06:16 pve systemd[1]: pveproxy.service: Scheduled restart job, restart
Jul 24 15:06:16 pve systemd[1]: Stopped PVE API Proxy Server.
Jul 24 15:06:16 pve systemd[1]: pveproxy.service: Start request repeated too qui
Jul 24 15:06:16 pve systemd[1]: pveproxy.service: Failed with result 'exit-code'
Jul 24 15:06:16 pve systemd[1]: Failed to start PVE API Proxy Server.
Bash:
journalctl -xe
-- Automatic restarting of the unit pveproxy.service has been scheduled, as the
-- the configured Restart= setting for the unit.
Jul 24 15:20:20 pve systemd[1]: Stopped PVE API Proxy Server.
-- Subject: A stop job for unit pveproxy.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A stop job for unit pveproxy.service has finished.
--
-- The job identifier is 1158721 and the job result is done.
Jul 24 15:20:20 pve systemd[1]: Starting PVE API Proxy Server...
-- Subject: A start job for unit pveproxy.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pveproxy.service has begun execution.
--
-- The job identifier is 1158721.
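The lines above are cut off by the terminal width, so these are the commands I'm using to try to get the full, untruncated error out of pveproxy (collected from other threads; the output still doesn't tell me much):
Bash:
# full, untruncated log of the failed unit
journalctl -u pveproxy.service --no-pager -n 50
# check whether anything is listening on the web GUI port (8006 by default)
ss -tlnp | grep 8006
# try starting the proxy in the foreground to see the actual error
# (if --debug is not accepted on this version, see 'pveproxy help start')
pveproxy start --debug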

My network is configured as follows:
Bash:
root@pve:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno4 inet manual

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
        address  172.20.15.254
        netmask  24
        gateway  172.20.15.210
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
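Since SSH works over vmbr0, I assume the bridge itself is fine, but for completeness this is how I'm double-checking that the address and routes are actually applied:
Bash:
# confirm the bridge carries the expected address
ip addr show vmbr0
# confirm the default route points at the gateway
ip route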

I ran apt update && apt dist-upgrade,
and pveversion now reports:
Bash:
root@pve:~# pveversion
pve-manager/6.2-10/a20769ed (running kernel: 5.4.44-2-pve)
root@pve:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-11
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-10
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Any help is welcome.
 
Well… what did you actually do in the storage tab? It looks a bit like you have severely damaged your boot device. I am operating approx. 18 PVE servers and none has ever had such an issue. I'd say that as long as you don't mess with the boot drive, such things shouldn't happen, unless your boot device suffers some critical error, which would be quite a coincidence…
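To rule that out, something along these lines might help (just a sketch; replace /dev/sda with whatever your actual boot device is — smartmontools is already in your package list):
Bash:
# SMART health summary of the boot disk (adjust the device name for your setup)
smartctl -H /dev/sda
# full SMART attributes if the summary looks suspicious
smartctl -a /dev/sda
# check the LVM volumes on the pve volume group, including pve/swap
lvs
vgs
# look for recent I/O errors in the kernel log
dmesg | grep -i -E 'error|fail' | tail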
 
