Need daemon restart to access the WebGUI

I found another strange thing.

lsblk shows:
sda 8:0 1 7,3T 0 disk
├─sda1 8:1 1 1M 0 part
├─sda2 8:2 1 977M 0 part /boot
└─sda3 8:3 1 7,3T 0 part
  ├─pve-root 253:0 0 190,8G 0 lvm /
  ├─pve-swap 253:1 0 8192M 0 lvm [SWAP]
  └─pve-data 253:3 0 7,1T 0 lvm /var/lib/vz


df -lh shows:
udev 63G 0 63G 0% /dev
tmpfs 13G 1,3G 12G 11% /run
/dev/mapper/pve-root 187G 14G 164G 8% /
tmpfs 63G 63M 63G 1% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda2 946M 351M 531M 40% /boot
/dev/mapper/pve-data 7,1T 962G 5,8T 15% /var/lib/vz


I have checked the HDDs, they are fine.
It seems like only about 1 TB of the HDD is in use, but pvs shows:
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 7,27t 0

And when I list the physical volume with pvdisplay, it shows:
--- Physical volume ---
PV Name /dev/sda3
VG Name pve
PV Size 7,27 TiB / not usable 4,00 MiB
Allocatable yes (but full)

df -lh shows enough free space, but pvdisplay does not.
What is going on here? Could the error be caused by the supposedly missing storage space? But the monitoring also found no problem. Why does pvdisplay show no free space on the volume?
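For what it's worth, PFree 0 on the PV is not an error by itself: pvdisplay reports free physical extents, and here all extents of /dev/sda3 have already been handed out to the three logical volumes, while df reports free space inside the filesystems on those LVs. A rough sanity check with the sizes from the lsblk output above (the exact numbers are my assumption from that listing, and lsblk rounds, so the sum only matches the 7.27 TiB PV approximately):

```shell
# Sum of the LV sizes from lsblk (assumed values): pve-root 190.8G,
# pve-swap 8G, pve-data 7.1T. Together they roughly fill the whole
# 7.27 TiB PV, which is why pvdisplay reports PFree 0.
total=$(awk 'BEGIN { printf "%.1f", 190.8 + 8 + 7.1 * 1024 }')
echo "LV total: ${total} GiB (PV size: ~7444.5 GiB = 7.27 TiB)"
```

So a "full" PV just means the VG has allocated everything to LVs; the free space you care about is the one df reports inside the filesystems.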
 
All right, I will have to take a closer look.
At the moment I understand it the same way, that everything is OK. Nevertheless, I did not want to leave out any "finding" here :)

The problem is still there. What I find curious is that I can now access the VMs via the GUI and via the console, but the CTs and the storages are still marked with a question mark.
 
Could you post your storage config here? Maybe there is a typo or some other visible problem.

Did you already try it in another browser?
 
Could you post your storage config here? Maybe there is a typo or some other visible problem.
What exactly do you need?

Did you already try it in another browser?
Yes, I have already tried this.


It seems that this problem is related to a backup job. The problem occurred at 3 o'clock at night, exactly at the time when a backup job for a container was scheduled to run / was running.
I have 2 HDDs with an ext4 file system running in RAID1. I have already performed an unmount, but unfortunately the problem is still present.
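One thing that might be worth checking (an assumption on my part, not something visible in your output so far): when a backup target hangs, processes touching it tend to get stuck in uninterruptible sleep (state D), and that is exactly the kind of thing that makes pvestatd time out and the GUI show question marks. A quick sketch to list such processes:

```shell
# List processes stuck in uninterruptible sleep (STAT starting with D);
# these usually point at a hung filesystem or storage target.
ps axo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/ { print }'
```

If a vzdump or mount-related process shows up there, the hung storage is the likely culprit rather than LVM itself.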
 
Did you already reboot the server?
No. It is a production system.
But the server itself is running fine.


/etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,vztmpl,rootdir,images
    maxfiles 5
    shared 0

dir: ssd
    path /ssd_lvm
    content iso,vztmpl,rootdir,images
    shared 0

dir: backup_storage
    path /backup_storage
    content backup
    maxfiles 100
    shared 0

Every VM shows this graph (note the timestamp).
So I think the connection was lost from that point on.
proxmox.PNG
 
I already restarted the services, as I mentioned.

The problem is still there.

service pvedaemon restart
service pveproxy restart
service pvestatd restart

Then I can browse through the interface for a while. KVM guests are visible, but LXC has a question mark on its icon in the server list.
 
It seems like the node is running with a cluster configuration.
How can I set it up as / change it to a single node?
 
