Hello.
I can't log in to the WebUI of our Proxmox host via Linux PAM, but I can connect over SSH with the same credentials. It is a single server, not part of a cluster.
Only one user: root.
I tried all the methods from similar topics on this forum, but nothing helped:
- rebooted the server
- added a new user (admin@pve)
- changed the root password
- checked /var/log/auth.log: there are no messages for the failed logins
- restored /var/lib/pve-cluster/config.db with sqlite3
- checked the disks with smartctl
- restarted all services and the whole server
- journalctl reports "Input/output error"
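Since SSH with the same credentials works, it may help to exercise the exact PAM login path the WebUI uses, directly against the API. A sketch, assuming the default port 8006 (substitute the real password):

```shell
# Request a login ticket the same way the WebUI login form does.
# -k accepts the default self-signed certificate.
curl -k \
    --data-urlencode "username=root@pam" \
    --data-urlencode "password=YOURPASSWORD" \
    https://localhost:8006/api2/json/access/ticket
# A successful login returns JSON containing a "ticket" field;
# an authentication failure returns HTTP 401.
```

If this fails too, the problem is in the API/auth layer rather than in the browser or proxy.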
Code:
Jun 10 00:44:14 prox00 pve-ha-lrm[1828]: unable to write lrm status file - unable to open file '/etc/pve/nodes/prox00/lrm_status.tmp.1828' - Input/output error
Jun 10 00:44:10 prox00 pvescheduler[487596]: replication: cfs-lock 'file-replication_cfg' error: got lock request timeout
Jun 10 00:44:10 prox00 pvescheduler[487597]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
...
Jun 09 23:43:58 prox00 pvestatd[1708]: Reverting to previous auth key
Jun 09 23:43:58 prox00 pvestatd[1708]: Failed to store new auth key - close (rename) atomic file '/etc/pve/authkey.pub' failed: Input/output error
Jun 09 23:43:58 prox00 pmxcfs[468357]: [database] crit: delete_inode failed: database disk image is malformed#010
Jun 09 23:43:58 prox00 pvestatd[1708]: auth key pair too old, rotating..
...
Jun 09 23:43:54 prox00 pvestatd[1708]: authkey rotation error: cfs-lock 'authkey' error: no quorum!
...
Jun 09 15:06:18 prox00 pveproxy[219340]: proxy detected vanished client connection
Code:
root@prox00 ~ # uname -a
Linux prox00 5.15.107-2-pve #1 SMP PVE 5.15.107-2 (2023-05-10T09:10Z) x86_64 GNU/Linux
root@prox00 ~ # pveperf
CPU BOGOMIPS: 217178.56
REGEX/SECOND: 4584998
HD SIZE: 3502.89 GB (/dev/md2)
BUFFERED READS: 2294.92 MB/sec
AVERAGE SEEK TIME: 0.04 ms
FSYNCS/SECOND: 10946.70
DNS EXT: 260.10 ms
root@prox00 ~ # pvecm status
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
root@prox00 ~ # pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-4 (running version: 7.4-4/4a8501a8)
pve-kernel-5.15: 7.4-3
pve-kernel-helper: 7.2-7
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.39-2-pve: 5.15.39-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
Code:
root@prox00 ~ # df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 1.2M 13G 1% /run
/dev/md2 3.5T 1.7T 1.6T 51% /
tmpfs 63G 31M 63G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md1 989M 164M 775M 18% /boot
tmpfs 13G 0 13G 0% /run/user/0
/dev/fuse 128M 20K 128M 1% /etc/pve
I ran these commands; maybe that helped:
Code:
systemctl stop pve-cluster
rm -f /var/lib/pve-cluster/.pmxcfs.lockfile
systemctl start pve-cluster
Code:
/var/lib/pve-cluster
├── [-rw-r--r-- root root 36K Jun 10 00:41] config2.sql
├── [-rw------- root root 1.9M Jun 12 13:56] config.db
├── [-rw------- root root 36K Jun 10 00:41] config.db.original.broken
├── [-rw------- root root 32K Jun 12 14:06] config.db-shm
├── [-rw------- root root 3.9M Jun 12 14:06] config.db-wal
└── [-rw------- root root 0 Jun 12 13:18] .pmxcfs.lockfile
/var/lib/pve-firewall
├── [-rw-r--r-- root root 240 Jun 12 14:06] ebtablescmdlist
├── [-rw-r--r-- root root 15 Jun 12 14:06] ip4cmdlist
├── [-rw-r--r-- root root 12 Jun 12 14:06] ip4cmdlistraw
├── [-rw-r--r-- root root 15 Jun 12 14:06] ip6cmdlist
├── [-rw-r--r-- root root 12 Jun 12 14:06] ip6cmdlistraw
├── [-rw-r--r-- root root 0 Jun 12 14:06] ipsetcmdlist1
├── [-rw-r--r-- root root 0 Jun 12 14:06] ipsetcmdlist2
└── [-rw-r--r-- root root 1 Jun 8 22:46] log_nf_conntrack
/var/lib/pve-manager
├── [drwxr-xr-x root root 4.0K Jun 12 05:56] apl-info
│ ├── [-rw-r--r-- root root 14K Jun 12 05:55] download.proxmox.com
│ └── [-rw-r--r-- root root 56K Jun 12 05:56] releases.turnkeylinux.org
├── [drwxr-xr-x root root 4.0K Jun 12 04:24] jobs
│ └── [-rw-r--r-- root root 138 Jun 12 04:24] vzdump-backup-825bc1f1-67f1.json
├── [-rw-r--r-- root root 4.2K Jun 12 05:56] pkgupdates
├── [-rw-r--r-- root root 2 Jun 12 14:06] pve-replication-state.json
└── [-rw-r--r-- root root 0 Jul 27 2022] pve-replication-state.lck
2 directories, 20 files
Code:
/etc/pve/
├── [-rw-r----- root www-data 451 Jun 12 00:46] authkey.pub
├── [-rw-r----- root www-data 451 Jun 12 00:46] authkey.pub.old
├── [-rw-r----- root www-data 451 Jun 9 23:43] authkey.pub.tmp.1708
├── [-r--r----- root www-data 156 Jan 1 1970] .clusterlog
├── [-rw-r----- root www-data 2 Jan 1 1970] .debug
├── [drwxr-xr-x root www-data 0 May 16 14:55] firewall
│ ├── [-rw-r----- root www-data 88 May 18 01:18] 100.fw
│ └── [-rw-r----- root www-data 54 May 18 01:18] cluster.fw
├── [drwxr-xr-x root www-data 0 Jul 27 2022] ha
├── [-rw-r----- root www-data 229 Jan 20 19:50] jobs.cfg
├── [lrwxr-xr-x root www-data 0 Jan 1 1970] local -> nodes/prox00
├── [lrwxr-xr-x root www-data 0 Jan 1 1970] lxc -> nodes/prox00/lxc
├── [-r--r----- root www-data 39 Jan 1 1970] .members
├── [drwxr-xr-x root www-data 0 Jul 27 2022] nodes
│ └── [drwxr-xr-x root www-data 0 Jul 27 2022] prox00
│ ├── [-rw-r----- root www-data 205 May 18 01:17] host.fw
│ ├── [-rw-r----- root www-data 83 Jun 12 14:05] lrm_status
│ ├── [drwxr-xr-x root www-data 0 Jul 27 2022] lxc
│ ├── [drwxr-xr-x root www-data 0 Jul 27 2022] openvz
│ ├── [drwx------ root www-data 0 Jul 27 2022] priv
│ ├── [-rw-r----- root www-data 1.6K Jul 27 2022] pve-ssl.key
│ ├── [-rw-r----- root www-data 1.6K Jun 6 01:04] pve-ssl.pem
│ └── [drwxr-xr-x root www-data 0 Jul 27 2022] qemu-server
│ ├── [-rw-r----- root www-data 564 Jun 12 04:19] 100.conf
│ └── [-rw-r----- root www-data 576 Jun 12 04:23] 101.conf
├── [lrwxr-xr-x root www-data 0 Jan 1 1970] openvz -> nodes/prox00/openvz
├── [drwx------ root www-data 0 Jul 27 2022] priv
│ ├── [drwx------ root www-data 0 Jul 27 2022] acme
│ ├── [-rw------- root www-data 1.6K Jun 12 00:46] authkey.key
│ ├── [-rw------- root www-data 406 Jun 10 00:45] authorized_keys
│ ├── [-rw------- root www-data 1.1K Jun 10 00:45] known_hosts
│ ├── [-rw------- root www-data 0 Jun 9 09:38] known_hosts.tmp.219328
│ ├── [drwx------ root www-data 0 Jul 27 2022] lock
│ ├── [-rw------- root www-data 3.2K Jul 27 2022] pve-root-ca.key
│ ├── [-rw------- root www-data 3 Jun 6 01:04] pve-root-ca.srl
│ └── [-rw------- root www-data 63 Jun 10 23:46] shadow.cfg
├── [-rw-r----- root www-data 2.0K Jul 27 2022] pve-root-ca.pem
├── [-rw-r----- root www-data 1.6K Jul 27 2022] pve-www.key
├── [lrwxr-xr-x root www-data 0 Jan 1 1970] qemu-server -> nodes/prox00/qemu-server
├── [-r--r----- root www-data 503 Jan 1 1970] .rrd
├── [drwxr-xr-x root www-data 0 Jul 27 2022] sdn
├── [-rw-r----- root www-data 128 Jun 10 23:46] user.cfg
├── [-r--r----- root www-data 773 Jan 1 1970] .version
├── [drwxr-xr-x root www-data 0 Jul 27 2022] virtual-guest
├── [-r--r----- root www-data 146 Jan 1 1970] .vmlist
└── [-rw-r----- root www-data 119 Jul 27 2022] vzdump.cron
17 directories, 29 files
Code:
root@prox00 ~ # cat /etc/pve/user.cfg
user:admin@pve:1:0::::::
user:root@pam:1:0::::::
group:admin:admin@pve:System Administrators:
acl:1:/: @admin:Administrator:
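For reference, my reading of the user.cfg field layout (an assumption based on the PVE access-control config format, so treat it as such):

```
# user:<userid>:<enable>:<expire>:<firstname>:<lastname>:<email>:<comment>:...
user:root@pam:1:0::::::          # root@pam is enabled (1), never expires (0)

# acl:<propagate>:<path>:<subjects>:<roles>:
acl:1:/: @admin:Administrator:   # @admin group gets Administrator on /
```

If that reading is right, root@pam is enabled and has full permissions, so user.cfg itself does not look like the blocker; the Input/output errors on /etc/pve do.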
Code:
root@prox00 ~ # systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-06-12 13:18:31 MSK; 1h 24min ago
Main PID: 1602760 (pmxcfs)
Tasks: 7 (limit: 154402)
Memory: 46.2M
CPU: 2.986s
CGroup: /system.slice/pve-cluster.service
└─1602760 /usr/bin/pmxcfs
Jun 12 13:18:25 prox00 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jun 12 13:18:31 prox00 systemd[1]: Started The Proxmox VE cluster filesystem.
root@prox00 ~ #
root@prox00 ~ # systemctl status pvedaemon
● pvedaemon.service - PVE API Daemon
Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2023-06-10 00:45:14 MSK; 2 days ago
Main PID: 487982 (pvedaemon)
Tasks: 4 (limit: 154402)
Memory: 134.5M
CPU: 17.136s
CGroup: /system.slice/pvedaemon.service
├─ 487982 pvedaemon
├─1624544 pvedaemon worker
├─1624545 pvedaemon worker
└─1624546 pvedaemon worker
Jun 12 14:29:09 prox00 pvedaemon[487982]: starting 3 worker(s)
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624544 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624545 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624546 started
Jun 12 14:29:14 prox00 pvedaemon[487984]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487985]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487983]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487984 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487985 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487983 finished
root@prox00 ~ # systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2023-06-10 00:45:21 MSK; 2 days ago
Main PID: 488029 (pvestatd)
Tasks: 1 (limit: 154402)
Memory: 82.2M
CPU: 6min 45.880s
CGroup: /system.slice/pvestatd.service
└─488029 pvestatd
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[2] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[3] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[4] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: status update error: Connection refused
Jun 12 14:29:09 prox00 systemd[1]: Reloading PVE Status Daemon.
Jun 12 14:29:09 prox00 pvestatd[1624554]: send HUP to 488029
Jun 12 14:29:09 prox00 pvestatd[488029]: received signal HUP
Jun 12 14:29:09 prox00 pvestatd[488029]: server shutdown (restart)
Jun 12 14:29:09 prox00 systemd[1]: Reloaded PVE Status Daemon.
Jun 12 14:29:10 prox00 pvestatd[488029]: restarting server
root@prox00 ~ #
root@prox00 ~ # systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2023-06-12 14:07:52 MSK; 35min ago
Main PID: 1617074 (pveproxy)
Tasks: 4 (limit: 154402)
Memory: 226.3M
CPU: 2.808s
CGroup: /system.slice/pveproxy.service
├─1617074 pveproxy
├─1624556 pveproxy worker
├─1624557 pveproxy worker
└─1624558 pveproxy worker
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624556 started
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624557 started
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624558 started
Jun 12 14:29:14 prox00 pveproxy[1617075]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617076]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617077]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617075 finished
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617076 finished
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617077 finished
Jun 12 14:41:34 prox00 pveproxy[1624557]: proxy detected vanished client connection
I checked the config.db file following this thread: https://forum.proxmox.com/threads/error-database-disk-image-is-malformed.33415/
The error was: database disk image is malformed
Code:
database disk image is malformed
sqlite> .mode insert
sqlite> .output config.sql
sqlite> .dump
sqlite> .exit
root@xxxxx:~# mv /var/lib/pve-cluster/config.db config.db.original
root@xxxxx:~# sqlite3 /var/lib/pve-cluster/config.db < config.sql
root@xxxxx:~# sqlite3 /var/lib/pve-cluster/config.db
sqlite> analyze;
sqlite> PRAGMA integrity_check;
ok
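For completeness, the whole dump-and-restore cycle from that thread can be sketched as one sequence. This is a sketch of the procedure above, not a tested recovery script for your data: stop pve-cluster first and keep an untouched backup of config.db.

```shell
systemctl stop pve-cluster

cd /var/lib/pve-cluster
cp config.db config.db.backup      # keep an untouched copy before anything else

# Dump the damaged database to plain SQL statements...
sqlite3 config.db ".mode insert" ".output config.sql" ".dump" ".exit"

# ...then rebuild a fresh database file from that dump.
mv config.db config.db.original
sqlite3 config.db < config.sql

# Verify the rebuilt file before restarting anything.
sqlite3 config.db "PRAGMA integrity_check;"   # expect: ok

systemctl start pve-cluster
```

Note that if the dump aborts partway through the corruption, the rebuilt database will only contain what could be read; check the end of config.sql before trusting the result.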
root@prox00 ~ # pvecm updatecerts --force
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
Please give me advice on what to do.