Hey there,
I am completely desperate. After a power outage my Proxmox server is no longer working. I've been searching for about two weeks but can't figure out what is causing all of these problems.
systemctl --failed
Bash:
UNIT LOAD ACTIVE SUB DESCRIPTION
● pve-cluster.service loaded failed failed The Proxmox VE cluster filesystem
● pve-guests.service loaded failed failed PVE guests
● pve-ha-crm.service loaded failed failed PVE Cluster HA Resource Manager Daemon
● pve-ha-lrm.service loaded failed failed PVE Local HA Resource Manager Daemon
● pvesr.service loaded failed failed Proxmox VE replication runner
● pvestatd.service loaded failed failed PVE Status Daemon
systemctl status pve-cluster
Code:
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Sat 2021-02-06 21:27:26 CET; 20min ago
Feb 06 21:27:26 pvee systemd[1]: pve-cluster.service: Service RestartSec=100ms expired, scheduling restart.
Feb 06 21:27:26 pvee systemd[1]: pve-cluster.service: Scheduled restart job, restart counter is at 5.
Feb 06 21:27:26 pvee systemd[1]: Stopped The Proxmox VE cluster filesystem.
Feb 06 21:27:26 pvee systemd[1]: pve-cluster.service: Start request repeated too quickly.
Feb 06 21:27:26 pvee systemd[1]: pve-cluster.service: Failed with result 'signal'.
Feb 06 21:27:26 pvee systemd[1]: Failed to start The Proxmox VE cluster filesystem.
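If it helps with diagnosing, I can run more checks. From what I've read, pve-cluster is pmxcfs, which mounts /etc/pve from a SQLite database, so something like this should show whether the database survived the outage (assuming the default path; sqlite3 may need to be installed first):
Code:
# check the pmxcfs backing database for corruption (default path)
sqlite3 /var/lib/pve-cluster/config.db 'PRAGMA integrity_check;'
# run pmxcfs in the foreground with debug output to see why it keeps dying
systemctl stop pve-cluster
pmxcfs -f -d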
systemctl status pveproxy
Code:
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2021-02-06 21:18:22 CET; 30min ago
Main PID: 1109 (pveproxy)
Tasks: 4 (limit: 4915)
Memory: 131.9M
CGroup: /system.slice/pveproxy.service
├─1109 pveproxy
├─2684 pveproxy worker
├─2685 pveproxy worker
└─2686 pveproxy worker
Feb 06 21:49:18 pvee pveproxy[1109]: worker 2676 finished
Feb 06 21:49:18 pvee pveproxy[1109]: worker 2684 started
Feb 06 21:49:18 pvee pveproxy[2684]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1775.
Feb 06 21:49:18 pvee pveproxy[2677]: worker exit
Feb 06 21:49:18 pvee pveproxy[1109]: worker 2677 finished
Feb 06 21:49:18 pvee pveproxy[1109]: starting 2 worker(s)
Feb 06 21:49:18 pvee pveproxy[1109]: worker 2685 started
Feb 06 21:49:18 pvee pveproxy[1109]: worker 2686 started
Feb 06 21:49:18 pvee pveproxy[2685]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1775.
Feb 06 21:49:18 pvee pveproxy[2686]: /etc/pve/local/pve-ssl.key: failed to load local private key (key_file or key) at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1775.
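If I understand it correctly, /etc/pve is a FUSE mount provided by pve-cluster, which would explain why pveproxy can't find its SSL key there. Listing the directory and the mount table should confirm whether the mount is missing:
Code:
# /etc/pve shows up as a fuse mount when pmxcfs is running
mount | grep /etc/pve
# an empty directory here would confirm the mount is gone
ls -la /etc/pve
# full error history of the failed service since boot
journalctl -u pve-cluster -b --no-pager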
I don't know how important this is, but zpool status outputs:
Code:
no pools available
All the other failed services output:
Code:
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused
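From what I've found on the forum, these "Connection refused" errors just mean that every tool talking to pmxcfs fails because the service is down. Since it's a single node, would it be safe to try local mode, something like this (my guess based on other threads)?
Code:
# force pmxcfs into local mode (single node, no real cluster)
systemctl stop pve-cluster
pmxcfs -l
# if /etc/pve gets populated, the config database is still readable
ls /etc/pve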
I don't have a cluster. It's just a single NUC that has been running for a few months.
lsblk shows
Code:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 238.5G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 238G 0 part
├─pve-swap 253:0 0 7G 0 lvm [SWAP]
├─pve-root 253:1 0 59.3G 0 lvm /
├─pve-data_tmeta 253:2 0 1.6G 0 lvm
│ └─pve-data-tpool 253:4 0 152.6G 0 lvm
│ ├─pve-data 253:5 0 152.6G 0 lvm
│ ├─pve-vm--100--disk--0 253:6 0 50G 0 lvm
│ ├─pve-vm--101--disk--0 253:7 0 10G 0 lvm
│ ├─pve-vm--102--disk--0 253:8 0 10G 0 lvm
│ ├─pve-vm--103--disk--0 253:9 0 30G 0 lvm
│ ├─pve-vm--105--disk--0 253:10 0 20G 0 lvm
│ ├─pve-vm--104--disk--0 253:11 0 20G 0 lvm
│ ├─pve-vm--106--disk--0 253:12 0 20G 0 lvm
│ ├─pve-vm--107--disk--0 253:13 0 10G 0 lvm
│ ├─pve-vm--111--disk--0 253:14 0 4M 0 lvm
│ ├─pve-vm--111--disk--1 253:15 0 36G 0 lvm
│ ├─pve-vm--113--disk--0 253:16 0 20G 0 lvm
│ └─pve-vm--108--disk--0 253:17 0 8G 0 lvm
└─pve-data_tdata 253:3 0 152.6G 0 lvm
└─pve-data-tpool 253:4 0 152.6G 0 lvm
├─pve-data 253:5 0 152.6G 0 lvm
├─pve-vm--100--disk--0 253:6 0 50G 0 lvm
├─pve-vm--101--disk--0 253:7 0 10G 0 lvm
├─pve-vm--102--disk--0 253:8 0 10G 0 lvm
├─pve-vm--103--disk--0 253:9 0 30G 0 lvm
├─pve-vm--105--disk--0 253:10 0 20G 0 lvm
├─pve-vm--104--disk--0 253:11 0 20G 0 lvm
├─pve-vm--106--disk--0 253:12 0 20G 0 lvm
├─pve-vm--107--disk--0 253:13 0 10G 0 lvm
├─pve-vm--111--disk--0 253:14 0 4M 0 lvm
├─pve-vm--111--disk--1 253:15 0 36G 0 lvm
├─pve-vm--113--disk--0 253:16 0 20G 0 lvm
└─pve-vm--108--disk--0 253:17 0 8G 0 lvm
This seems to show all the VMs I had.
Is there any way of getting this system back to life? If not, is there a way of backing up the VMs? I can't reach the GUI, and vzdump doesn't work because of the "Connection refused" error.
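In the worst case, could I just copy the raw logical volumes to an external disk instead of using vzdump? Something like this is what I had in mind (assuming an external drive mounted at /mnt/backup — just my guess at a fallback):
Code:
# make sure the thin pool and VM volumes are active
vgchange -ay pve
# raw copy of one VM disk to an image file; repeat for each disk
dd if=/dev/pve/vm-100-disk-0 of=/mnt/backup/vm-100-disk-0.raw bs=4M status=progress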
Thanks in advance