Login failed. Please try again

Basem.Gamal

Hello, I'm not a pro at Proxmox, so I need your support. I'm trying to log in to the web GUI (Linux PAM realm) but it keeps saying "login failed"; however, I can access the server over SSH. I tried following more than one article here, but it was a dead end. I'm using one node. Please find the details below.

Code:
tail -n 50 /var/log/syslog

Nov 21 22:45:00 pve pvedaemon[1112]: authentication failure; rhost=::ffff:xx.xx.xx.xx user=root@pam msg=cfs-lock 'authkey' error: got lock request timeout

Code:
journalctl -r

Nov 21 22:50:53 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:51 pve pvestatd[1082]: status update time (9.228 seconds)
Nov 21 22:50:51 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 21 22:50:48 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:43 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:41 pve pvestatd[1082]: status update time (9.285 seconds)
Nov 21 22:50:41 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 21 22:50:38 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:33 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:31 pve pvestatd[1082]: status update time (9.306 seconds)
Nov 21 22:50:31 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 21 22:50:28 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:23 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:22 pve pvestatd[1082]: status update time (9.247 seconds)
Nov 21 22:50:22 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 21 22:50:18 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:13 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:11 pve pvestatd[1082]: status update time (9.257 seconds)
Nov 21 22:50:11 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 21 22:50:10 pve pvescheduler[180122]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
Nov 21 22:50:08 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:03 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 21 22:50:01 pve pvescheduler[180121]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 21 22:50:01 pve pvestatd[1082]: status update time (9.263 seconds)

Code:
pveversion -v


proxmox-ve: 7.2-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.2-14 (running version: 7.2-14/65898fbc)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.11: 7.0-10
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
ceph-fuse: 15.2.14-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-8
libpve-guest-common-perl: 4.2-2
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.2-12
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.7-1
proxmox-backup-file-restore: 2.2.7-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.2
pve-cluster: 7.2-3
pve-container: 4.3-6
pve-docs: 7.2-3
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.1.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-11
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
Code:
pvecm status

Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?

Code:
qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID
       101 Binary-Waves2        running    16384            500.00 34692
       103 BWcontentS1          running    16384            500.00 180879
       105 Adv                  stopped    4096             150.00 0
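An editorial aside for readers hitting the same symptom: when the web GUI rejects logins but SSH still works, a useful first check (not part of the original post) is whether the pmxcfs-mounted /etc/pve is healthy and writable, since the ticket-signing authkey is rotated under /etc/pve and the log above shows a cfs-lock 'authkey' timeout:

Bash:
# check the services involved in GUI authentication
systemctl status pve-cluster pvedaemon pveproxy
# verify /etc/pve accepts writes
touch /etc/pve/.write-test && rm /etc/pve/.write-test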
 
Hi,

Is HA running on your node?

Can you please provide us with the output of pve-cluster status?

Bash:
systemctl status pve-cluster.service
 
I'm not sure if my node is configured with high availability or not. Is there anything I can do to check it?

Code:
systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-11-21 22:42:16 HST; 5h 56min ago
    Process: 179020 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 179022 (pmxcfs)
      Tasks: 7 (limit: 57840)
     Memory: 30.1M
        CPU: 12.281s
     CGroup: /system.slice/pve-cluster.service
             └─179022 /usr/bin/pmxcfs

Nov 21 22:42:15 pve systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 21 22:42:16 pve systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 21 22:42:18 pve pmxcfs[179022]: [database] crit: sqlite3_step failed: database disk image is malformed#010
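For reference (an added note, not from the thread): on a standalone node you can check whether any HA resources are actually configured by asking the HA manager directly:

Bash:
# shows quorum/LRM state and any HA-managed resources
ha-manager status
# lists configured HA resources, empty if HA is unused
ha-manager config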
 
Hi,

Code:
Nov 21 22:42:18 pve pmxcfs[179022]: [database] crit: sqlite3_step failed: database disk image is malformed#010
I would try the following:
Bash:
systemctl stop pve-cluster
mv /var/lib/pve-cluster/.pmxcfs.lockfile /root/pmxcfs.lockfile-bak
systemctl start pve-cluster


If that doesn't help, please run journalctl -f, try to log in to PVE, and provide us with the full output of the journalctl -f command.
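An added suggestion (not from the original reply): after restarting, it can help to confirm whether pmxcfs came back up cleanly by checking its most recent log lines:

Bash:
journalctl -u pve-cluster -n 20 --no-pager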
 
I ran the commands below, but they didn't help. Then I ran journalctl -f:

systemctl stop pve-cluster
mv /var/lib/pve-cluster/.pmxcfs.lockfile /root/pmxcfs.lockfile-bak
systemctl start pve-cluster


Code:
journalctl -f
-- Journal begins at Thu 2022-11-17 03:31:53 HST. --
Nov 22 05:35:53 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 22 05:35:53 pve pvestatd[1082]: status update time (9.276 seconds)
Nov 22 05:35:53 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 22 05:35:56 pve sshd[239299]: Invalid user django from 197.5.145.121 port 9906
Nov 22 05:35:56 pve sshd[239299]: pam_unix(sshd:auth): check pass; user unknown
Nov 22 05:35:56 pve sshd[239299]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=197.5.145.121
Nov 22 05:35:58 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 22 05:35:58 pve sshd[239299]: Failed password for invalid user django from 197.5.145.121 port 9906 ssh2
Nov 22 05:35:59 pve sshd[239299]: Received disconnect from 197.5.145.121 port 9906:11: Bye Bye [preauth]
Nov 22 05:35:59 pve sshd[239299]: Disconnected from invalid user django 197.5.145.121 port 9906 [preauth]
Nov 22 05:36:01 pve cron[1074]: (*system*vzdump) RELOAD (/etc/cron.d/vzdump)
Nov 22 05:36:02 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 22 05:36:02 pve pvestatd[1082]: status update time (9.268 seconds)
Nov 22 05:36:03 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 22 05:36:05 pve pvescheduler[239325]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 22 05:36:08 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 22 05:36:12 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 22 05:36:12 pve pvestatd[1082]: status update time (9.255 seconds)
Nov 22 05:36:13 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 22 05:36:14 pve pvescheduler[239326]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
Nov 22 05:36:18 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
Nov 22 05:36:18 pve pvedaemon[1112]: authentication failure; rhost=::ffff:196.218.95.125 user=root@pam msg=cfs-lock 'authkey' error: got lock request timeout
Nov 22 05:36:22 pve pvestatd[1082]: authkey rotation error: cfs-lock 'authkey' error: got lock request timeout
Nov 22 05:36:22 pve pvestatd[1082]: status update time (9.262 seconds)
Nov 22 05:36:23 pve pve-ha-lrm[1128]: unable to write lrm status file - unable to delete old temp file: Input/output error
 
Thank you for the output!

Are you able to write something to the /etc/pve/ path, e.g., touch /etc/pve/foo?
Have you tried to update the certificates with pvecm updatecerts -f?
 
No, I can't write the file, and the certificate update fails as well. Here is the output from writing in /etc/pve:

Code:
root@pve:/# touch /etc/pve/foo
touch: cannot touch '/etc/pve/foo': Input/output error


root@pve:/# pvecm updatecerts -f
(re)generate node files
generate new node certificate
Signature ok
subject=OU = PVE Cluster Node, O = Proxmox Virtual Environment, CN = pve.binarywaves.com
Can't open /etc/pve/nodes/pve/pve-ssl.pem for writing, Input/output error
140412130761600:error:02001005:system library:fopen:Input/output error:../crypto/bio/bss_file.c:69:fopen('/etc/pve/nodes/pve/pve-ssl.pem','w')
140412130761600:error:2006D002:BIO routines:BIO_new_file:system lib:../crypto/bio/bss_file.c:78:
unable to generate pve ssl certificate:
command 'faketime yesterday openssl x509 -req -in /tmp/pvecertreq-248154.tmp -days 730 -out /etc/pve/nodes/pve/pve-ssl.pem -CAkey /etc/pve/priv/pve-root-ca.key -CA /etc/pve/pve-root-ca.pem -CAserial /etc/pve/priv/pve-root-ca.srl -extfile /tmp/pvesslconf-248154.tmp' failed: exit code 1
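For context (an editorial note): /etc/pve is a FUSE filesystem provided by pmxcfs and backed by /var/lib/pve-cluster/config.db, so Input/output errors on writes there are consistent with the sqlite error seen earlier. A quick way to confirm the mount is present and to look for pmxcfs errors (the grep pattern is just an example):

Bash:
findmnt /etc/pve
journalctl -u pve-cluster --since today | grep -iE 'crit|error'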
 
Hello,

Thank you for the output!

Can you please try to restart the pveproxy and pve-cluster services and then try to log in (you should refresh the PVE web page after restarting the services):

Bash:
systemctl restart pveproxy.service
systemctl restart pve-cluster.service


Code:
Nov 21 22:42:18 pve pmxcfs[179022]: [database] crit: sqlite3_step failed: database disk image is malformed#010
The issue looks like it comes from config.db. Did you edit the database at `/var/lib/pve-cluster/config.db`?
 
I have restarted both services
Code:
root@pve:/# systemctl restart pveproxy.service
root@pve:/# systemctl status pveproxy.service
● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-11-23 00:39:20 HST; 11s ago
    Process: 401486 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=1/FAILURE)
    Process: 401488 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS)
   Main PID: 401506 (pveproxy)
      Tasks: 4 (limit: 57840)
     Memory: 135.9M
        CPU: 2.251s
     CGroup: /system.slice/pveproxy.service
             ├─401506 pveproxy
             ├─401507 pveproxy worker
             ├─401508 pveproxy worker
             └─401509 pveproxy worker

Nov 23 00:39:18 pve systemd[1]: Starting PVE API Proxy Server...
Nov 23 00:39:18 pve pvecm[401486]: unable to open file '/etc/pve/priv/authorized_keys.tmp.401487' - Input/output error
Nov 23 00:39:20 pve pveproxy[401506]: starting server
Nov 23 00:39:20 pve pveproxy[401506]: starting 3 worker(s)
Nov 23 00:39:20 pve pveproxy[401506]: worker 401507 started
Nov 23 00:39:20 pve pveproxy[401506]: worker 401508 started
Nov 23 00:39:20 pve pveproxy[401506]: worker 401509 started
Nov 23 00:39:20 pve systemd[1]: Started PVE API Proxy Server.

Code:
root@pve:/# systemctl restart pve-cluster.service
root@pve:/# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-11-23 00:40:00 HST; 11s ago
    Process: 401587 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 401588 (pmxcfs)
      Tasks: 5 (limit: 57840)
     Memory: 14.6M
        CPU: 46ms
     CGroup: /system.slice/pve-cluster.service
             └─401588 /usr/bin/pmxcfs

Nov 23 00:39:59 pve systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 23 00:40:00 pve systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 23 00:40:01 pve pmxcfs[401588]: [database] crit: sqlite3_step failed: database disk image is malformed#010
 
Hello,

Can you please check if the config.db (/var/lib/pve-cluster/config.db) is OK by running the following command:
Bash:
sqlite3 /var/lib/pve-cluster/config.db 'PRAGMA integrity_check'
This may give us a hint as to whether the database is corrupted.
 
Please find the output below
Code:
root@pve:/# sqlite3 /var/lib/pve-cluster/config.db 'PRAGMA integrity_check'
*** in database main ***
Main freelist: freelist leaf count too big on page 9
Main freelist: invalid page number 218421504
On tree page 2 cell 0: 2nd reference to page 9
Page 8 is never used
Code:
root@pve:/# sqlite3 /root/config-bak.db 'PRAGMA integrity_check'
ok
 
Code:
root@pve:/# sqlite3 /var/lib/pve-cluster/config.db 'PRAGMA integrity_check'
*** in database main ***
Main freelist: freelist leaf count too big on page 9
Main freelist: invalid page number 218421504
On tree page 2 cell 0: 2nd reference to page 9
Page 8 is never used
Looks like the config.db is corrupted.
Could you please try to re-install libsqlite3-0 and pve-cluster (apt install --reinstall pve-cluster libsqlite3-0), then try to log in?

If that also doesn't help, please try to dump some data from the config.db database, but first make a backup of the main DB file:
Bash:
cp /var/lib/pve-cluster/config.db /root/config-bak.db
sqlite3 /root/config-bak.db "select * from tree where name='qemu-server';"
 
After reinstalling the two packages, I got the same error:
Code:
root@pve:/# apt install --reinstall pve-cluster
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
  pve-cluster
1 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
Need to get 117 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 pve-cluster amd64 7.3-1 [117 kB]
Fetched 117 kB in 0s (334 kB/s)
Reading changelogs... Done
(Reading database ... 57915 files and directories currently installed.)
Preparing to unpack .../pve-cluster_7.3-1_amd64.deb ...
Unpacking pve-cluster (7.3-1) over (7.2-3) ...
Setting up pve-cluster (7.3-1) ...
Processing triggers for man-db (2.9.4-2) ...

root@pve:/# systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-11-23 04:39:18 HST; 23s ago
    Process: 436643 ExecStart=/usr/bin/pmxcfs (code=exited, status=0/SUCCESS)
   Main PID: 436644 (pmxcfs)
      Tasks: 5 (limit: 57840)
     Memory: 14.6M
        CPU: 47ms
     CGroup: /system.slice/pve-cluster.service
             └─436644 /usr/bin/pmxcfs

Nov 23 04:39:17 pve systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 23 04:39:18 pve systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 23 04:39:18 pve pmxcfs[436644]: [database] crit: sqlite3_step failed: database disk image is malformed#010
root@pve:/# apt install libsqlite3-0
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libsqlite3-0 is already the newest version (3.34.1-3).
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
Code:
root@pve:/# apt install --reinstall libsqlite3-0
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 12 not upgraded.
Need to get 797 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://ftp.us.debian.org/debian bullseye/main amd64 libsqlite3-0 amd64 3.34.1-3 [797 kB]
Fetched 797 kB in 1s (1,030 kB/s)
(Reading database ... 57915 files and directories currently installed.)
Preparing to unpack .../libsqlite3-0_3.34.1-3_amd64.deb ...
Unpacking libsqlite3-0:amd64 (3.34.1-3) over (3.34.1-3) ...
Setting up libsqlite3-0:amd64 (3.34.1-3) ...
Processing triggers for libc-bin (2.31-13+deb11u5) ...

After taking a backup of the DB, I tried to query some data and got the output below.
Does this mean my DB is OK, or is it still corrupted?

Code:
root@pve:/# sqlite3 /root/config-bak.db "select * from tree where name='qemu-server';"
14|12|14|0|1634641203|4|qemu-server|
 
Hello,

Thank you for the output!

Can you please try the following commands from this thread? Please consider making a backup of the config.db before doing anything to it. You also have to stop pve-cluster before running the commands.

Hopefully this will help.
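For readers who cannot follow the link above: a commonly used approach for a corrupted pmxcfs config.db is to rebuild it from a SQL dump. This is only a hedged sketch, not necessarily the exact commands from the referenced thread; keep the backup, and be aware the dump itself may fail if the corruption is severe:

Bash:
systemctl stop pve-cluster
cp /var/lib/pve-cluster/config.db /root/config-bak.db        # backup first
sqlite3 /var/lib/pve-cluster/config.db ".dump" > /root/config-dump.sql
sqlite3 /var/lib/pve-cluster/config.db.new < /root/config-dump.sql
mv /var/lib/pve-cluster/config.db /root/config.db.broken
mv /var/lib/pve-cluster/config.db.new /var/lib/pve-cluster/config.db
systemctl start pve-cluster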
 
Thanks a lot for your support. I'm finally able to log in to the web GUI.
Unfortunately, when I opened the console of my running VMs to see what's going on, I got "No bootable device". I have attached a screenshot of the error.

[Attachment: Untitled3.png — screenshot of the "No bootable device" error]
 
Glad to read that the login issue is fixed :)

Can you please provide us with the VM 101 config (qm config 101)?
 
Please find the output of qm config 101 below.
Please note that the other VM, 102, has the same issue.

Code:
root@pve:/# qm config 101
boot: order=ide0;ide2;net0
cores: 2
ide0: local-lvm:vm-101-disk-0,size=500G
ide2: none,media=cdrom
machine: pc-i440fx-6.0
memory: 16384
name: Binary-Waves2
net0: e1000=F2:38:11:94:82:6C,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
parent: Before_SQL_installation
scsihw: virtio-scsi-pci
smbios1: uuid=6e5b7fc4-b525-4c6a-817b-780ae58e689c
sockets: 2
vmgenid: dec51500-77f5-4f17-adf8-66e17b12a36b
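A hedged way to check whether the VM's disk volume from the config above still exists and is usable on the thin pool (the storage name and VMID are taken from the config; the rest is a generic check, not a confirmed fix):

Bash:
# list volumes on the local-lvm storage
pvesm list local-lvm
# check the thin LV for VM 101 is present and active
lvs -a | grep vm-101
# show the full KVM command line the VM would start with
qm showcmd 101 --pretty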
 
Can you please provide us with the pvesm status output as well?
Do you see any error in the syslog/journalctl?
 
The output of pvesm status:

Code:
root@pve:/etc/pve# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        98497780        27583736        65864496   28.00%
local-lvm     lvmthin     active      4721209344        81204800      4640004543    1.72%

I found some errors in the syslog related to replication and to the PVE node console (I tried to open a shell on the node from the GUI and got the error below), but I think those errors are not related to the issue at hand (no bootable device). I also didn't find any errors in journalctl.

Code:
Nov 24 00:00:00 pve systemd[1]: Starting Rotate log files...
Nov 24 00:00:00 pve systemd[1]: Starting Daily man-db regeneration...
Nov 24 00:00:00 pve systemd[1]: Reloading PVE API Proxy Server.
Nov 24 00:00:00 pve systemd[1]: man-db.service: Succeeded.
Nov 24 00:00:00 pve systemd[1]: Finished Daily man-db regeneration.
Nov 24 00:00:01 pve pvescheduler[611760]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:00:02 pve pveproxy[611749]: send HUP to 401506
Nov 24 00:00:02 pve pveproxy[401506]: received signal HUP
Nov 24 00:00:02 pve pveproxy[401506]: server closing
Nov 24 00:00:02 pve pveproxy[401506]: server shutdown (restart)
Nov 24 00:00:02 pve systemd[1]: Reloaded PVE API Proxy Server.
Nov 24 00:00:02 pve systemd[1]: Reloading PVE SPICE Proxy Server.
Nov 24 00:00:02 pve spiceproxy[611769]: send HUP to 1126
Nov 24 00:00:02 pve spiceproxy[1126]: received signal HUP
Nov 24 00:00:02 pve spiceproxy[1126]: server closing
Nov 24 00:00:02 pve spiceproxy[1126]: server shutdown (restart)
Nov 24 00:00:02 pve systemd[1]: Reloaded PVE SPICE Proxy Server.
Nov 24 00:00:02 pve pvefw-logger[396062]: received terminate request (signal)
Nov 24 00:00:02 pve systemd[1]: Stopping Proxmox VE firewall logger...
Nov 24 00:00:02 pve pvefw-logger[396062]: stopping pvefw logger
Nov 24 00:00:03 pve systemd[1]: pvefw-logger.service: Succeeded.
Nov 24 00:00:03 pve systemd[1]: Stopped Proxmox VE firewall logger.
Nov 24 00:00:03 pve systemd[1]: pvefw-logger.service: Consumed 6.384s CPU time.
Nov 24 00:00:03 pve systemd[1]: Starting Proxmox VE firewall logger...
Nov 24 00:00:03 pve pvefw-logger[611806]: starting pvefw logger
Nov 24 00:00:03 pve systemd[1]: Started Proxmox VE firewall logger.
Nov 24 00:00:03 pve systemd[1]: logrotate.service: Succeeded.
Nov 24 00:00:03 pve systemd[1]: Finished Rotate log files.
Nov 24 00:00:03 pve spiceproxy[1126]: restarting server
Nov 24 00:00:03 pve spiceproxy[1126]: starting 1 worker(s)
Nov 24 00:00:03 pve spiceproxy[1126]: worker 611812 started
Nov 24 00:00:03 pve pveproxy[401506]: restarting server
Nov 24 00:00:03 pve pveproxy[401506]: starting 3 worker(s)
Nov 24 00:00:03 pve pveproxy[401506]: worker 611813 started
Nov 24 00:00:03 pve pveproxy[401506]: worker 611814 started
Nov 24 00:00:03 pve pveproxy[401506]: worker 611815 started
Nov 24 00:00:08 pve spiceproxy[396068]: worker exit
Nov 24 00:00:08 pve spiceproxy[1126]: worker 396068 finished
Nov 24 00:00:08 pve pveproxy[606626]: worker exit
Nov 24 00:00:08 pve pveproxy[606545]: worker exit
Nov 24 00:00:08 pve pveproxy[401506]: worker 606545 finished
Nov 24 00:00:08 pve pveproxy[401506]: worker 606626 finished
Nov 24 00:00:08 pve pveproxy[401506]: worker 606811 finished
Nov 24 00:00:12 pve pveproxy[611846]: got inotify poll request in wrong process - disabling inotify
Nov 24 00:00:12 pve pveproxy[611846]: worker exit
Nov 24 00:01:01 pve pvescheduler[612291]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:02:01 pve pvescheduler[612801]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:03:01 pve pvescheduler[613304]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:04:01 pve pvescheduler[613820]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:04:52 pve pvedaemon[1113]: <root@pam> successful auth for user 'root@pam'
Nov 24 00:05:01 pve pvescheduler[614327]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:06:01 pve pvescheduler[614843]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:07:01 pve pvescheduler[615358]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:08:01 pve pvescheduler[615868]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:09:01 pve pvescheduler[616386]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:10:01 pve pvescheduler[616657]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:11:01 pve pvescheduler[616802]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:12:01 pve pvescheduler[616962]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:13:01 pve pvescheduler[617111]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:14:01 pve pvescheduler[617263]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:15:01 pve pvescheduler[617415]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:16:01 pve pvescheduler[617560]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:17:01 pve CRON[617713]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Nov 24 00:17:01 pve pvescheduler[617715]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:18:01 pve pvescheduler[617862]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:19:01 pve pvescheduler[618014]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:19:52 pve pvedaemon[1113]: <root@pam> successful auth for user 'root@pam'
Nov 24 00:20:01 pve pvescheduler[618157]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:21:01 pve pvescheduler[618312]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:22:01 pve pvescheduler[618473]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:23:01 pve pvescheduler[619008]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:24:01 pve pvescheduler[619518]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:25:01 pve pvescheduler[620025]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_01] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 124 to 123
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_02] [SAT], 17 Currently unreadable (pending) sectors
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_03] [SAT], 669 Currently unreadable (pending) sectors
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_04] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 124 to 123
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_05] [SAT], 3 Currently unreadable (pending) sectors
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_05] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 123 to 122
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_06] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 123 to 122
Nov 24 00:26:01 pve pvescheduler[620529]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:27:01 pve pvescheduler[621050]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:27:09 pve pvedaemon[621115]: starting termproxy UPID:pve:00097A3B:01837A2E:637F46FD:vncshell::root@pam:
Nov 24 00:27:09 pve pvedaemon[1112]: <root@pam> starting task UPID:pve:00097A3B:01837A2E:637F46FD:vncshell::root@pam:
Nov 24 00:27:09 pve pvedaemon[621115]: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:27:09 pve pvedaemon[1112]: <root@pam> end task UPID:pve:00097A3B:01837A2E:637F46FD:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:27:15 pve pveproxy[611815]: proxy detected vanished client connection
Nov 24 00:28:01 pve pvescheduler[621562]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:29:01 pve pvescheduler[622073]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:30:01 pve pvescheduler[622577]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:31:01 pve pvescheduler[622911]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:31:15 pve systemd[1]: Started Session 85 of user root.
Nov 24 00:31:51 pve pvedaemon[1111]: <root@pam> starting task UPID:pve:000981C6:0183E831:637F4817:vncshell::root@pam:
Nov 24 00:31:51 pve pvedaemon[623046]: starting termproxy UPID:pve:000981C6:0183E831:637F4817:vncshell::root@pam:
Nov 24 00:31:51 pve pvedaemon[623046]: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:31:51 pve pvedaemon[1111]: <root@pam> end task UPID:pve:000981C6:0183E831:637F4817:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:32:01 pve pvescheduler[623083]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:32:08 pve pvedaemon[1112]: <root@pam> successful auth for user 'root@pam'
Nov 24 00:32:09 pve pvedaemon[1112]: <root@pam> starting task UPID:pve:00098204:0183EF58:637F4829:vncshell::root@pam:
Nov 24 00:32:09 pve pvedaemon[623108]: starting termproxy UPID:pve:00098204:0183EF58:637F4829:vncshell::root@pam:
Nov 24 00:32:09 pve pvedaemon[623108]: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:32:09 pve pvedaemon[1112]: <root@pam> end task UPID:pve:00098204:0183EF58:637F4829:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:32:53 pve pvedaemon[623242]: starting termproxy UPID:pve:0009828A:0184008C:637F4855:vncshell::root@pam:
Nov 24 00:32:53 pve pvedaemon[1112]: <root@pam> starting task UPID:pve:0009828A:0184008C:637F4855:vncshell::root@pam:
Nov 24 00:32:53 pve pvedaemon[623242]: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:32:53 pve pvedaemon[1112]: <root@pam> end task UPID:pve:0009828A:0184008C:637F4855:vncshell::root@pam: command '/usr/bin/termproxy 5900 --path /nodes/pve --perm Sys.Console -- /bin/login -f root' failed: exit code 1
Nov 24 00:33:01 pve pvescheduler[623260]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:34:01 pve pvescheduler[623535]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:35:01 pve pvescheduler[624060]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:36:01 pve pvescheduler[624605]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:37:01 pve pvescheduler[625110]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:38:01 pve pvescheduler[625626]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:39:01 pve pvescheduler[626125]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:40:01 pve pvescheduler[626636]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:41:01 pve pvescheduler[627145]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:42:01 pve pvescheduler[627667]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:43:01 pve pvescheduler[628176]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:44:02 pve pvescheduler[628681]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:45:01 pve pvescheduler[629192]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:46:01 pve pvescheduler[629689]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:47:01 pve pvescheduler[630199]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:47:09 pve pvedaemon[1112]: <root@pam> successful auth for user 'root@pam'
Nov 24 00:48:01 pve pvescheduler[630716]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Nov 24 00:49:01 pve pvescheduler[631260]: replication: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
 
Hi,

Thank you for the Syslog!

Code:
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_01] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 124 to 123
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_02] [SAT], 17 Currently unreadable (pending) sectors
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_03] [SAT], 669 Currently unreadable (pending) sectors
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_04] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 124 to 123
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_05] [SAT], 3 Currently unreadable (pending) sectors
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_05] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 123 to 122
Nov 24 00:25:10 pve smartd[875]: Device: /dev/bus/2 [megaraid_disk_06] [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 123 to 122
This indicates an issue with one of your disks.

Can you create a new VM/CT and see if it boots?
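For completeness (an added note; the disk numbers below are assumptions based on the smartd names and may need adjusting): the affected disks sit behind a MegaRAID controller, so their full SMART data can be read with smartctl's megaraid device type:

Bash:
# query individual disks behind the MegaRAID controller
smartctl -a -d megaraid,2 /dev/bus/2
smartctl -a -d megaraid,3 /dev/bus/2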
 
