Hi, good people.
I can't log in via the web interface, but SSH login works.
The story.
After updating one Proxmox node three days ago, I rebooted it to activate the new kernel. To avoid shutting down all VMs, I hibernated them. After the node restarted, everything seemed fine, but today I noticed I couldn't log in via the web interface. I logged in over SSH and found that the ZFS pool for backups was gone, and Proxmox had tried to back up all VMs to the root partition, so the root partition was full. I cleaned the backups off the root partition and tried to import and mount the ZFS backup pool. The import succeeded, but mounting failed with a "directory is not empty" error. I removed the backup pool's mount directory, but it was recreated automatically. So I used:
rm -rf /bkp_pool && zfs mount bkp_pool && rm -rf /bkp_pool/bkpdata && zfs mount bkp_pool/bkpdata
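In hindsight, a safer sequence would probably have been something like this (an untested sketch, assuming the default mountpoints for these datasets; rmdir refuses to remove non-empty directories, unlike rm -rf):

```
zpool import bkp_pool                        # re-import the pool; mounting may still fail
zfs get mounted,mountpoint bkp_pool bkp_pool/bkpdata   # check what is and isn't mounted
rmdir /bkp_pool/bkpdata /bkp_pool            # only succeeds if the stale dirs are truly empty
zfs mount bkp_pool && zfs mount bkp_pool/bkpdata
```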
The other two ZFS pools for VMs - ssd_pool and hdd_pool - were fine, and all VMs are still running.
So storage was restored. I restarted the pveproxy service, but I still can't log in via the web interface. In the log I see:
Sep 30 10:32:12 node1 pve-ha-lrm[19321]: unable to write lrm status file - unable to open file '/etc/pve/nodes/node1/lrm_status.tmp.19321' - Input/output error
Sep 30 10:32:17 node1 pvestatd[6832]: authkey rotation error: error during cfs-locked 'authkey' operation: got lock request timeout
I restarted pve-ha-lrm - still no luck.
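Both errors are on /etc/pve, which as far as I understand is the pmxcfs FUSE mount managed by the pve-cluster service - so restarting only pveproxy and pve-ha-lrm may not be enough. Is it safe to restart pve-cluster while VMs are running? Something like this (just what I'm considering, not yet tried):

```
systemctl status pve-cluster     # check the cluster filesystem service
systemctl restart pve-cluster    # then pvedaemon and pveproxy afterwards?
```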
So I have three questions:
1. What can I do to recover web login without restarting the node?
2. Why was the ZFS pool exported in the first place?
3. Which service creates the mount directory for the ZFS pool, causing the mount to fail with the "directory is not empty" error?
Thx.