[SOLVED] Can't Log in to GUI - "Login failed. Please try again"

Panik

New Member
Oct 18, 2022
Hi, I'm getting the error "Login failed. Please try again" when trying to log in to the GUI.
I am able to log in via SSH.

Presumed cause: I use an external HD to back up my VMs. I unplugged it, then plugged it back in, and I presume it did not remount. I then noticed the local-lvm drive was full; I presume the automatic backups were happening and filling up that drive. I decided to reboot the server, figuring the USB drive would remount. But now, after the reboot, I can't log in to the GUI and get the above error.

When I run systemctl status pveproxy.service

I see a "no space left on device" error; see the output below. Any help for a newb is greatly appreciated.


● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-10-18 14:57:44 MDT; 12min ago
Process: 5286 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=1/FAILURE)
Process: 5288 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
Main PID: 5289 (pveproxy)
Tasks: 4 (limit: 115818)
Memory: 138.6M
CPU: 5.141s
CGroup: /system.slice/pveproxy.service
├─5289 pveproxy
├─6688 pveproxy worker
├─6689 pveproxy worker
└─6713 pveproxy worker

Oct 18 15:06:43 panik pveproxy[6663]: worker exit
Oct 18 15:06:43 panik pveproxy[6667]: worker exit
Oct 18 15:06:43 panik pveproxy[5289]: worker 6667 finished
Oct 18 15:06:43 panik pveproxy[5289]: starting 1 worker(s)
Oct 18 15:06:43 panik pveproxy[5289]: worker 6663 finished
Oct 18 15:06:43 panik pveproxy[5289]: worker 6689 started
Oct 18 15:06:48 panik pveproxy[5289]: starting 1 worker(s)
Oct 18 15:06:48 panik pveproxy[5289]: worker 6713 started
Oct 18 15:06:52 panik pveproxy[6688]: Warning: unable to close filehandle GEN5 properly: No space left on device at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1809.
Oct 18 15:06:52 panik pveproxy[6688]: error writing access log
 
You might also want to unmount that USB HDD. In case your "local" is full because backups were written to the unmounted mountpoint, and you then mount that USB HDD again at the same mountpoint, you can't access these backups, as the mountpoint will point to the USB HDD.
In case you mount your USB HDD using systemd or fstab and use a directory storage to store your backups, you should enable the "is_mountpoint" option of the directory storage. This would prevent PVE from filling up your "local" in the future when mounting the USB HDD fails.
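For reference, a minimal sketch of what such a directory storage could look like in /etc/pve/storage.cfg (the storage ID and path are made-up examples, not taken from this thread):

dir: usb-backup
        path /mnt/usb-backup
        content backup
        is_mountpoint 1

With is_mountpoint set, PVE treats the storage as unavailable when nothing is mounted at that path, so a failed USB mount makes the backup job error out instead of silently filling up "local". The equivalent pvesm command is shown further down in the thread.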
 
You should read beyond the first comment. There are many examples of using tools that come standard in any Linux installation, such as "du"...
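For example, something along these lines usually narrows down which directory is eating the space (the paths are only illustrative):

df -h /                                        # confirm that it really is the root filesystem that is full
du -xh --max-depth=1 / | sort -h               # -x stays on one filesystem; shows which top-level directory holds the data
du -xh --max-depth=2 /mnt | sort -h | tail     # then drill into the suspect directory, e.g. an unmounted mountpoint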


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Yes, I did read further and used du and df to help me zero in, but I was still having trouble. Thanks for the ideas! I did in fact figure out from another post where I could remove some of the files clogging up the works. I needed to unmount the USB drive.
 
You might also want to unmount that USB HDD. In case your "local" is full because backups were written to the unmounted mountpoint, and you then mount that USB HDD again at the same mountpoint, you can't access these backups, as the mountpoint will point to the USB HDD.
In case you mount your USB HDD using systemd or fstab and use a directory storage to store your backups, you should enable the "is_mountpoint" option of the directory storage. This would prevent PVE from filling up your "local" in the future when mounting the USB HDD fails.
This was the ticket! Once I unmounted the USB drive, I was able to see the backups that were being stored at the mount point. Deleting several redundant backups got me where I needed to go. Sadly, I was so frustrated in the beginning that I thought that was what I was doing at first, but I deleted all the backups on my USB drive and still had a full drive... Once I unmounted, I saw the other backups. Now I have a new problem with several containers being locked ("snapshot" lock) that I need to figure out.
 
For those that may run into this problem, I wanted to give an overview of the solution that worked for me. It was a stressful situation, thinking I may have lost my system, but thanks to some of the clues from @bbgeek17 and @Dunuin I was able to solve it and get back up and running.

Root Cause:
- I unplugged the USB drive and lost the mount. I simply plugged it back in, but did not run mount -a.
- Since the drive was not mounted, the nightly backups were dumping to the mount point, which was actually just a folder on the root filesystem.
- After rebooting the system, the USB drive was mounted again. This made it hard to find the files that were filling up the drive.
- They appeared to be at the mount point, but in my ignorance I was deleting files with no effect (as I was actually now deleting files from the USB drive).
- It was not until I read @Dunuin's post that it registered.
- I unmounted the drive, then checked the mount point location. That's where the files were hiding. I moved the entire contents to a new folder (as it had valuable backups which I didn't want to lose), remounted the USB drive, then moved the contents from the new folder back to the USB drive (see the command sketch after the key commands below).
- Presto change-o: the drive is back to what it should be, the backups show up as they should, and the system is now behaving normally.

- Oh, one other item that came up: 3 of my containers were locked. This was an interesting challenge. I was not able to unlock them until I did the above. After that, I simply ran pct unlock 111 (111 being the container ID), did the same for the subsequent containers, and then started them. All is working as normal now.

Key commands that helped:

df -h
du -h
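
And the command sketch for the recovery steps in the root cause list above (the mount point /mnt/usb-backup and the rescue folder are made-up examples, not my actual paths):

umount /mnt/usb-backup                        # unmount the USB drive so the mount point shows the underlying directory again
ls /mnt/usb-backup                            # the backups written while the drive was unmounted are now visible here
mkdir /root/rescued-backups
mv /mnt/usb-backup/* /root/rescued-backups/   # set them aside so the mount point directory is empty
mount -a                                      # remount the USB drive as defined in fstab
mv /root/rescued-backups/* /mnt/usb-backup/   # move the rescued backups onto the USB drive, freeing the root filesystem
pct unlock 111                                # then unlock any container stuck with a "snapshot" lock (repeat per CT ID)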
 
This was the ticket! Once I unmounted the USB drive, I was able to see the backups that were being stored at the mount point. Deleting several redundant backups got me where I needed to go. Sadly, I was so frustrated in the beginning that I thought that was what I was doing at first, but I deleted all the backups on my USB drive and still had a full drive... Once I unmounted, I saw the other backups. Now I have a new problem with several containers being locked ("snapshot" lock) that I need to figure out.
Run pvesm set --is_mountpoint yes StorageIDofYourManuallyMountedDirectoryStorage in the CLI so this won't happen again in the future. Then the backups will just fail when the USB HDD isn't mounted, instead of filling up your root filesystem until nothing is working.
 
Run pvesm set --is_mountpoint yes StorageIDofYourManuallyMountedDirectoryStorage in the CLI so this won't happen again in the future. Then the backups will just fail when the USB HDD isn't mounted, instead of filling up your root filesystem until nothing is working.
Thanks @Dunuin, I think I may have solved this problem differently, but I'm not sure if it is wise; if you think I'm nuts, let me know. I eliminated the external USB drive for backups. I set up a TrueNAS VM on this Proxmox server, simply created an NFS share on the TrueNAS server, and pointed my backups to that.
I don't know if that is a good idea or not...
 
Thanks @Dunuin, I think I may have solved this problem differently, but I'm not sure if it is wise; if you think I'm nuts, let me know. I eliminated the external USB drive for backups. I set up a TrueNAS VM on this Proxmox server, simply created an NFS share on the TrueNAS server, and pointed my backups to that.
I don't know if that is a good idea or not...
That depends on how you mount that NFS share. If you still mount that NFS share manually using the "mount" command or via fstab/systemd, this won't help at all, as a failed mount of the NFS share would then fill up your root filesystem. But if you added the NFS share as a storage using the webUI, this should be fine.
Still, that is a rather overcomplicated solution, when all you would have had to do is run a single command once ;)
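
If it helps anyone: adding the NFS share as a proper PVE storage can also be done from the CLI. A sketch, where the storage ID, server IP, and export path are made-up examples:

pvesm add nfs truenas-backups --server 192.168.1.50 --export /mnt/tank/pve-backups --content backup

A storage added this way is mounted and monitored by PVE itself, so when the mount fails the storage just shows as unavailable and the backup job errors out, instead of writing into the root filesystem.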