[SOLVED] Login failed. Please try again. Linux PAM auth

alexander kolomyts

New Member
Jun 27, 2022
Ukraine, Kremenchuk
Hello.
I can't log in to the WebUI of our Proxmox via Linux PAM, but I can connect via SSH with the same credentials. It is a single server, not part of a cluster, with only one user: root.
I tried all the methods from similar topics on this forum, but nothing helped:
- rebooted the server
- added a new user (admin@pve)
- changed the root password
- there are no messages in /var/log/auth.log on a failed login
- restored sqlite3 /var/lib/pve-cluster/config.db
- checked smartctl
- restarted all services and the whole server
- journalctl reports "Input/output error"
Code:
Jun 10 00:44:14 prox00 pve-ha-lrm[1828]: unable to write lrm status file - unable to open file '/etc/pve/nodes/prox00/lrm_status.tmp.1828' - Input/output error
Jun 10 00:44:10 prox00 pvescheduler[487596]: replication: cfs-lock 'file-replication_cfg' error: got lock request timeout
Jun 10 00:44:10 prox00 pvescheduler[487597]: jobs: cfs-lock 'file-jobs_cfg' error: got lock request timeout
...
Jun 09 23:43:58 prox00 pvestatd[1708]: Reverting to previous auth key
Jun 09 23:43:58 prox00 pvestatd[1708]: Failed to store new auth key - close (rename) atomic file '/etc/pve/authkey.pub' failed: Input/output error
Jun 09 23:43:58 prox00 pmxcfs[468357]: [database] crit: delete_inode failed: database disk image is malformed#010
Jun 09 23:43:58 prox00 pvestatd[1708]: auth key pair too old, rotating..
...           
Jun 09 23:43:54 prox00 pvestatd[1708]: authkey rotation error: cfs-lock 'authkey' error: no quorum!
...
Jun 09 15:06:18 prox00 pveproxy[219340]: proxy detected vanished client connection
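
A quick way to check whether /etc/pve (the pmxcfs mount) is writable, i.e. to reproduce the "Input/output error" on demand (a minimal sketch; the filename is arbitrary):
Code:
touch /etc/pve/.writetest && rm /etc/pve/.writetest && echo "/etc/pve is writable"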

Code:
root@prox00 ~ # uname -a
Linux prox00 5.15.107-2-pve #1 SMP PVE 5.15.107-2 (2023-05-10T09:10Z) x86_64 GNU/Linux

root@prox00 ~ # pveperf
CPU BOGOMIPS:      217178.56
REGEX/SECOND:      4584998
HD SIZE:           3502.89 GB (/dev/md2)
BUFFERED READS:    2294.92 MB/sec
AVERAGE SEEK TIME: 0.04 ms
FSYNCS/SECOND:     10946.70
DNS EXT:           260.10 ms

root@prox00 ~ # pvecm status
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?

proxmox-ve: 7.2-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-4 (running version: 7.4-4/4a8501a8)
pve-kernel-5.15: 7.4-3
pve-kernel-helper: 7.2-7
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.39-2-pve: 5.15.39-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx4
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.7.0
pve-cluster: 7.3-3
pve-container: 4.4-4
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-2
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1

Code:
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  1.2M   13G   1% /run
/dev/md2        3.5T  1.7T  1.6T  51% /
tmpfs            63G   31M   63G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/md1        989M  164M  775M  18% /boot
tmpfs            13G     0   13G   0% /run/user/0
/dev/fuse       128M   20K  128M   1% /etc/pve

I ran these commands:
Code:
systemctl stop pve-cluster
rm -f /var/lib/pve-cluster/.pmxcfs.lockfile
systemctl start pve-cluster
Maybe that helped.
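
To verify the result, one can check that pmxcfs is mounted again and that the service stayed up (a minimal check, nothing is changed):
Code:
findmnt /etc/pve
systemctl status pve-cluster --no-pager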

Code:
/var/lib/pve-cluster
├── [-rw-r--r-- root     root      36K Jun 10 00:41]  config2.sql
├── [-rw------- root     root     1.9M Jun 12 13:56]  config.db
├── [-rw------- root     root      36K Jun 10 00:41]  config.db.original.broken
├── [-rw------- root     root      32K Jun 12 14:06]  config.db-shm
├── [-rw------- root     root     3.9M Jun 12 14:06]  config.db-wal
└── [-rw------- root     root        0 Jun 12 13:18]  .pmxcfs.lockfile
/var/lib/pve-firewall
├── [-rw-r--r-- root     root      240 Jun 12 14:06]  ebtablescmdlist
├── [-rw-r--r-- root     root       15 Jun 12 14:06]  ip4cmdlist
├── [-rw-r--r-- root     root       12 Jun 12 14:06]  ip4cmdlistraw
├── [-rw-r--r-- root     root       15 Jun 12 14:06]  ip6cmdlist
├── [-rw-r--r-- root     root       12 Jun 12 14:06]  ip6cmdlistraw
├── [-rw-r--r-- root     root        0 Jun 12 14:06]  ipsetcmdlist1
├── [-rw-r--r-- root     root        0 Jun 12 14:06]  ipsetcmdlist2
└── [-rw-r--r-- root     root        1 Jun  8 22:46]  log_nf_conntrack
/var/lib/pve-manager
├── [drwxr-xr-x root     root     4.0K Jun 12 05:56]  apl-info
│   ├── [-rw-r--r-- root     root      14K Jun 12 05:55]  download.proxmox.com
│   └── [-rw-r--r-- root     root      56K Jun 12 05:56]  releases.turnkeylinux.org
├── [drwxr-xr-x root     root     4.0K Jun 12 04:24]  jobs
│   └── [-rw-r--r-- root     root      138 Jun 12 04:24]  vzdump-backup-825bc1f1-67f1.json
├── [-rw-r--r-- root     root     4.2K Jun 12 05:56]  pkgupdates
├── [-rw-r--r-- root     root        2 Jun 12 14:06]  pve-replication-state.json
└── [-rw-r--r-- root     root        0 Jul 27  2022]  pve-replication-state.lck
2 directories, 20 files

Code:
/etc/pve/
├── [-rw-r----- root     www-data  451 Jun 12 00:46]  authkey.pub
├── [-rw-r----- root     www-data  451 Jun 12 00:46]  authkey.pub.old
├── [-rw-r----- root     www-data  451 Jun  9 23:43]  authkey.pub.tmp.1708
├── [-r--r----- root     www-data  156 Jan  1  1970]  .clusterlog
├── [-rw-r----- root     www-data    2 Jan  1  1970]  .debug
├── [drwxr-xr-x root     www-data    0 May 16 14:55]  firewall
│   ├── [-rw-r----- root     www-data   88 May 18 01:18]  100.fw
│   └── [-rw-r----- root     www-data   54 May 18 01:18]  cluster.fw
├── [drwxr-xr-x root     www-data    0 Jul 27  2022]  ha
├── [-rw-r----- root     www-data  229 Jan 20 19:50]  jobs.cfg
├── [lrwxr-xr-x root     www-data    0 Jan  1  1970]  local -> nodes/prox00
├── [lrwxr-xr-x root     www-data    0 Jan  1  1970]  lxc -> nodes/prox00/lxc
├── [-r--r----- root     www-data   39 Jan  1  1970]  .members
├── [drwxr-xr-x root     www-data    0 Jul 27  2022]  nodes
│   └── [drwxr-xr-x root     www-data    0 Jul 27  2022]  prox00
│       ├── [-rw-r----- root     www-data  205 May 18 01:17]  host.fw
│       ├── [-rw-r----- root     www-data   83 Jun 12 14:05]  lrm_status
│       ├── [drwxr-xr-x root     www-data    0 Jul 27  2022]  lxc
│       ├── [drwxr-xr-x root     www-data    0 Jul 27  2022]  openvz
│       ├── [drwx------ root     www-data    0 Jul 27  2022]  priv
│       ├── [-rw-r----- root     www-data 1.6K Jul 27  2022]  pve-ssl.key
│       ├── [-rw-r----- root     www-data 1.6K Jun  6 01:04]  pve-ssl.pem
│       └── [drwxr-xr-x root     www-data    0 Jul 27  2022]  qemu-server
│           ├── [-rw-r----- root     www-data  564 Jun 12 04:19]  100.conf
│           └── [-rw-r----- root     www-data  576 Jun 12 04:23]  101.conf
├── [lrwxr-xr-x root     www-data    0 Jan  1  1970]  openvz -> nodes/prox00/openvz
├── [drwx------ root     www-data    0 Jul 27  2022]  priv
│   ├── [drwx------ root     www-data    0 Jul 27  2022]  acme
│   ├── [-rw------- root     www-data 1.6K Jun 12 00:46]  authkey.key
│   ├── [-rw------- root     www-data  406 Jun 10 00:45]  authorized_keys
│   ├── [-rw------- root     www-data 1.1K Jun 10 00:45]  known_hosts
│   ├── [-rw------- root     www-data    0 Jun  9 09:38]  known_hosts.tmp.219328
│   ├── [drwx------ root     www-data    0 Jul 27  2022]  lock
│   ├── [-rw------- root     www-data 3.2K Jul 27  2022]  pve-root-ca.key
│   ├── [-rw------- root     www-data    3 Jun  6 01:04]  pve-root-ca.srl
│   └── [-rw------- root     www-data   63 Jun 10 23:46]  shadow.cfg
├── [-rw-r----- root     www-data 2.0K Jul 27  2022]  pve-root-ca.pem
├── [-rw-r----- root     www-data 1.6K Jul 27  2022]  pve-www.key
├── [lrwxr-xr-x root     www-data    0 Jan  1  1970]  qemu-server -> nodes/prox00/qemu-server
├── [-r--r----- root     www-data  503 Jan  1  1970]  .rrd
├── [drwxr-xr-x root     www-data    0 Jul 27  2022]  sdn
├── [-rw-r----- root     www-data  128 Jun 10 23:46]  user.cfg
├── [-r--r----- root     www-data  773 Jan  1  1970]  .version
├── [drwxr-xr-x root     www-data    0 Jul 27  2022]  virtual-guest
├── [-r--r----- root     www-data  146 Jan  1  1970]  .vmlist
└── [-rw-r----- root     www-data  119 Jul 27  2022]  vzdump.cron

17 directories, 29 files


cat /etc/pve/user.cfg
user:admin@pve:1:0::::::
user:root@pam:1:0::::::

group:admin:admin@pve:System Administrators:



acl:1:/: @admin:Administrator:


Code:
root@prox00 ~ # systemctl status pve-cluster
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-06-12 13:18:31 MSK; 1h 24min ago
   Main PID: 1602760 (pmxcfs)
      Tasks: 7 (limit: 154402)
     Memory: 46.2M
        CPU: 2.986s
     CGroup: /system.slice/pve-cluster.service
             └─1602760 /usr/bin/pmxcfs

Jun 12 13:18:25 prox00 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jun 12 13:18:31 prox00 systemd[1]: Started The Proxmox VE cluster filesystem.
root@prox00 ~ #
root@prox00 ~ # systemctl status pvedaemon
● pvedaemon.service - PVE API Daemon
     Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-06-10 00:45:14 MSK; 2 days ago
   Main PID: 487982 (pvedaemon)
      Tasks: 4 (limit: 154402)
     Memory: 134.5M
        CPU: 17.136s
     CGroup: /system.slice/pvedaemon.service
             ├─ 487982 pvedaemon
             ├─1624544 pvedaemon worker
             ├─1624545 pvedaemon worker
             └─1624546 pvedaemon worker

Jun 12 14:29:09 prox00 pvedaemon[487982]: starting 3 worker(s)
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624544 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624545 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624546 started
Jun 12 14:29:14 prox00 pvedaemon[487984]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487985]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487983]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487984 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487985 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487983 finished
root@prox00 ~ # systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
     Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-06-10 00:45:21 MSK; 2 days ago
   Main PID: 488029 (pvestatd)
      Tasks: 1 (limit: 154402)
     Memory: 82.2M
        CPU: 6min 45.880s
     CGroup: /system.slice/pvestatd.service
             └─488029 pvestatd

Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[2] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[3] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[4] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: status update error: Connection refused
Jun 12 14:29:09 prox00 systemd[1]: Reloading PVE Status Daemon.
Jun 12 14:29:09 prox00 pvestatd[1624554]: send HUP to 488029
Jun 12 14:29:09 prox00 pvestatd[488029]: received signal HUP
Jun 12 14:29:09 prox00 pvestatd[488029]: server shutdown (restart)
Jun 12 14:29:09 prox00 systemd[1]: Reloaded PVE Status Daemon.
Jun 12 14:29:10 prox00 pvestatd[488029]: restarting server
root@prox00 ~ #
root@prox00 ~ # systemctl status pveproxy
● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-06-12 14:07:52 MSK; 35min ago
   Main PID: 1617074 (pveproxy)
      Tasks: 4 (limit: 154402)
     Memory: 226.3M
        CPU: 2.808s
     CGroup: /system.slice/pveproxy.service
             ├─1617074 pveproxy
             ├─1624556 pveproxy worker
             ├─1624557 pveproxy worker
             └─1624558 pveproxy worker

Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624556 started
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624557 started
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624558 started
Jun 12 14:29:14 prox00 pveproxy[1617075]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617076]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617077]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617075 finished
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617076 finished
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617077 finished
Jun 12 14:41:34 prox00 pveproxy[1624557]: proxy detected vanished client connection


https://forum.proxmox.com/threads/error-database-disk-image-is-malformed.33415/
Following the thread above, I checked the config.db file. The error was "database disk image is malformed".

Code:
root@xxxxx:~# sqlite3 /var/lib/pve-cluster/config.db
sqlite> PRAGMA integrity_check;
database disk image is malformed
sqlite> .mode insert
sqlite> .output config.sql
sqlite> .dump
sqlite> .exit
root@xxxxx:~# mv /var/lib/pve-cluster/config.db config.db.original
root@xxxxx:~# sqlite3 /var/lib/pve-cluster/config.db < config.sql
root@xxxxx:~# sqlite3 /var/lib/pve-cluster/config.db
sqlite> analyze;
sqlite> PRAGMA integrity_check;
ok

root@prox00 ~ # pvecm updatecerts --force
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
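
(For reference, the same dump-and-restore can also be done non-interactively; a sketch, assuming the paths above and that pve-cluster is stopped first:)
Code:
systemctl stop pve-cluster
cd /var/lib/pve-cluster
sqlite3 config.db .dump > /root/config.sql        # export the database as SQL
mv config.db /root/config.db.original             # keep the broken copy
sqlite3 config.db < /root/config.sql              # rebuild a fresh database
sqlite3 config.db 'PRAGMA integrity_check;'       # should print "ok"
systemctl start pve-cluster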

Please give me advice on what to do.
 
smartctl

root@prox00 ~ # smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
/dev/nvme0 -d nvme # /dev/nvme0, NVMe device
/dev/nvme1 -d nvme # /dev/nvme1, NVMe device
root@prox00 ~ # smartctl -A /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 016 Pre-fail Always - 0
2 Throughput_Performance 0x0005 133 133 054 Pre-fail Offline - 92
3 Spin_Up_Time 0x0007 100 100 024 Pre-fail Always - 0
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 1
5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 128 128 020 Pre-fail Offline - 18
9 Power_On_Hours 0x0012 099 099 000 Old_age Always - 7559
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 1
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 301
193 Load_Cycle_Count 0x0012 100 100 000 Old_age Always - 301
194 Temperature_Celsius 0x0002 166 166 000 Old_age Always - 36 (Min/Max 25/47)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0

root@prox00 ~ # smartctl -A /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 016 Pre-fail Always - 0
2 Throughput_Performance 0x0005 134 134 054 Pre-fail Offline - 115
3 Spin_Up_Time 0x0007 185 185 024 Pre-fail Always - 325 (Average 386)
4 Start_Stop_Count 0x0012 100 100 000 Old_age Always - 73
5 Reallocated_Sector_Ct 0x0033 100 100 005 Pre-fail Always - 0
7 Seek_Error_Rate 0x000b 100 100 067 Pre-fail Always - 0
8 Seek_Time_Performance 0x0005 140 140 020 Pre-fail Offline - 15
9 Power_On_Hours 0x0012 094 094 000 Old_age Always - 45453
10 Spin_Retry_Count 0x0013 100 100 060 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 73
192 Power-Off_Retract_Count 0x0032 099 099 000 Old_age Always - 1974
193 Load_Cycle_Count 0x0012 099 099 000 Old_age Always - 1974
194 Temperature_Celsius 0x0002 162 162 000 Old_age Always - 37 (Min/Max 24/53)
196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0
197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0008 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x000a 200 200 000 Old_age Always - 0

root@prox00 ~ # smartctl -A /dev/nvme0
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF SMART DATA SECTION ===
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 36 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 944,416,359 [483 TB]
Data Units Written: 511,781,777 [262 TB]
Host Read Commands: 15,578,546,973
Host Write Commands: 6,566,355,691
Controller Busy Time: 18,971
Power Cycles: 12
Power On Hours: 8,393
Unsafe Shutdowns: 0
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 36 Celsius
Temperature Sensor 2: 47 Celsius

root@prox00 ~ # smartctl -A /dev/nvme1
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF SMART DATA SECTION ===
SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 45 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 1%
Data Units Read: 415,401,464 [212 TB]
Data Units Written: 526,602,170 [269 TB]
Host Read Commands: 3,230,912,739
Host Write Commands: 6,583,523,307
Controller Busy Time: 4,855
Power Cycles: 12
Power On Hours: 8,393
Unsafe Shutdowns: 1
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 1: 45 Celsius
Temperature Sensor 2: 53 Celsius

root@prox00 ~ # smartctl -H /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

root@prox00 ~ # smartctl -H /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

root@prox00 ~ # smartctl -H /dev/nvme0
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

root@prox00 ~ # smartctl -H /dev/nvme1
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107-2-pve] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

Code:
root@prox00 ~ # lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda             8:0    0  5.5T  0 disk
└─sda1          8:1    0  5.5T  0 part
  └─md127       9:127  0  5.5T  0 raid1
    └─md127p1 259:10   0  5.5T  0 part
sdb             8:16   0  5.5T  0 disk
└─sdb1          8:17   0  5.5T  0 part
  └─md127       9:127  0  5.5T  0 raid1
    └─md127p1 259:10   0  5.5T  0 part
nvme0n1       259:0    0  3.5T  0 disk
├─nvme0n1p1   259:2    0   16G  0 part
│ └─md0         9:0    0   16G  0 raid1 [SWAP]
├─nvme0n1p2   259:3    0    1G  0 part
│ └─md1         9:1    0 1022M  0 raid1 /boot
├─nvme0n1p3   259:4    0  3.5T  0 part
│ └─md2         9:2    0  3.5T  0 raid1 /
└─nvme0n1p4   259:5    0    1M  0 part
nvme1n1       259:1    0  3.5T  0 disk
├─nvme1n1p1   259:6    0   16G  0 part
│ └─md0         9:0    0   16G  0 raid1 [SWAP]
├─nvme1n1p2   259:7    0    1G  0 part
│ └─md1         9:1    0 1022M  0 raid1 /boot
├─nvme1n1p3   259:8    0  3.5T  0 part
│ └─md2         9:2    0  3.5T  0 raid1 /
└─nvme1n1p4   259:9    0    1M  0 part
 
Hi,
what is the status of systemctl status corosync.service and the output of pvecm status.
It seems like at some point you had a cluster configured, as pvestatd complains about no quorum?
Jun 09 23:43:54 prox00 pvestatd[1708]: authkey rotation error: cfs-lock 'authkey' error: no quorum!
 
Hi,
what is the status of systemctl status corosync.service and the output of pvecm status.
It seems like at some point you had a cluster configured, as pvestatd complains about no quorum?

Thank you very much for helping me.

Code:
root@prox00 ~ # systemctl status corosync.service
● corosync.service - Corosync Cluster Engine
     Loaded: loaded (/lib/systemd/system/corosync.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
  Condition: start condition failed at Mon 2023-06-12 13:18:31 MSK; 1h 38min ago
       Docs: man:corosync
             man:corosync.conf
             man:corosync_overview

Jun 08 23:55:08 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 09 00:15:14 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 09 00:18:53 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 09 15:39:08 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 09 23:43:58 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 09 23:44:05 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 10 00:44:52 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 10 00:45:14 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 10 00:50:09 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Jun 12 13:18:31 prox00 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.

root@prox00 ~ # pvecm status
Error: Corosync config '/etc/pve/corosync.conf' does not exist - is this node part of a cluster?
 
Hi,
what is the status of systemctl status corosync.service and the output of pvecm status.
It seems like at some point you had a cluster configured, as pvestatd complains about no quorum?
Code:
root@prox00 ~ # pvestatd status
running

We never added this server to a cluster. Is there a way to switch the server to single (standalone) mode?
 
Okay, then the error message was misleading, as this is not a cluster. Please provide the current status of systemctl status pve-cluster pveproxy pvestatd pvedaemon. It seems you already fixed the initial issue with the corrupt pmxcfs database.
 
Code:
root@prox00 ~ # systemctl status pve-cluster pveproxy pvestatd pvedaemon
● pve-cluster.service - The Proxmox VE cluster filesystem
     Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-06-12 13:18:31 MSK; 2h 7min ago
   Main PID: 1602760 (pmxcfs)
      Tasks: 7 (limit: 154402)
     Memory: 48.9M
        CPU: 4.487s
     CGroup: /system.slice/pve-cluster.service
             └─1602760 /usr/bin/pmxcfs

Jun 12 13:18:25 prox00 systemd[1]: Starting The Proxmox VE cluster filesystem...
Jun 12 13:18:31 prox00 systemd[1]: Started The Proxmox VE cluster filesystem.

● pveproxy.service - PVE API Proxy Server
     Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-06-12 14:07:52 MSK; 1h 17min ago
   Main PID: 1617074 (pveproxy)
      Tasks: 4 (limit: 154402)
     Memory: 226.3M
        CPU: 2.973s
     CGroup: /system.slice/pveproxy.service
             ├─1617074 pveproxy
             ├─1624556 pveproxy worker
             ├─1624557 pveproxy worker
             └─1624558 pveproxy worker

Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624556 started
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624557 started
Jun 12 14:29:09 prox00 pveproxy[1617074]: worker 1624558 started
Jun 12 14:29:14 prox00 pveproxy[1617075]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617076]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617077]: worker exit
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617075 finished
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617076 finished
Jun 12 14:29:14 prox00 pveproxy[1617074]: worker 1617077 finished
Jun 12 14:41:34 prox00 pveproxy[1624557]: proxy detected vanished client connection

● pvestatd.service - PVE Status Daemon
     Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-06-10 00:45:21 MSK; 2 days ago
   Main PID: 488029 (pvestatd)
      Tasks: 1 (limit: 154402)
     Memory: 82.3M
        CPU: 6min 50.460s
     CGroup: /system.slice/pvestatd.service
             └─488029 pvestatd

Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[2] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[3] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: ipcc_send_rec[4] failed: Connection refused
Jun 12 13:18:25 prox00 pvestatd[488029]: status update error: Connection refused
Jun 12 14:29:09 prox00 systemd[1]: Reloading PVE Status Daemon.
Jun 12 14:29:09 prox00 pvestatd[1624554]: send HUP to 488029
Jun 12 14:29:09 prox00 pvestatd[488029]: received signal HUP
Jun 12 14:29:09 prox00 pvestatd[488029]: server shutdown (restart)
Jun 12 14:29:09 prox00 systemd[1]: Reloaded PVE Status Daemon.
Jun 12 14:29:10 prox00 pvestatd[488029]: restarting server

● pvedaemon.service - PVE API Daemon
     Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2023-06-10 00:45:14 MSK; 2 days ago
   Main PID: 487982 (pvedaemon)
      Tasks: 4 (limit: 154402)
     Memory: 134.5M
        CPU: 17.303s
     CGroup: /system.slice/pvedaemon.service
             ├─ 487982 pvedaemon
             ├─1624544 pvedaemon worker
             ├─1624545 pvedaemon worker
             └─1624546 pvedaemon worker

Jun 12 14:29:09 prox00 pvedaemon[487982]: starting 3 worker(s)
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624544 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624545 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624546 started
Jun 12 14:29:14 prox00 pvedaemon[487984]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487985]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487983]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487984 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487985 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487983 finished

Code:
root@prox00 ~ # pmxcfs
[main] notice: unable to acquire pmxcfs lock - trying again
[main] crit: unable to acquire pmxcfs lock: Resource temporarily unavailable
[main] notice: exit proxmox configuration filesystem (-1)
 
Code:
root@prox00 ~ # systemctl status pve-cluster pveproxy pvestatd pvedaemon
[... same output as above ...]

Code:
root@prox00 ~ # pmxcfs
[main] notice: unable to acquire pmxcfs lock - trying again
[main] crit: unable to acquire pmxcfs lock: Resource temporarily unavailable
[main] notice: exit proxmox configuration filesystem (-1)
Except for the last command (which does not work because the pve-cluster service already runs pmxcfs, so there is no need to start it manually), all services seem to be up and running without errors. Please check that the user actually got created and try to log in afterwards.
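
For example, to double-check from the CLI that the users really exist:
Code:
pveum user list
cat /etc/pve/user.cfg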
 
Code:
root@prox00 ~ # pveum user list
┌───────────┬─────────┬───────┬────────┬────────┬───────────┬────────┬──────┬──────────┬────────────┬────────┐
│ userid    │ comment │ email │ enable │ expire │ firstname │ groups │ keys │ lastname │ realm-type │ tokens │
╞═══════════╪═════════╪═══════╪════════╪════════╪═══════════╪════════╪══════╪══════════╪════════════╪════════╡
│ admin@pve │         │       │ 1      │      0 │           │        │      │          │ pve        │        │
├───────────┼─────────┼───────┼────────┼────────┼───────────┼────────┼──────┼──────────┼────────────┼────────┤
│ root@pam  │         │       │ 1      │      0 │           │        │      │          │ pam        │        │
└───────────┴─────────┴───────┴────────┴────────┴───────────┴────────┴──────┴──────────┴────────────┴────────┘
root@prox00 ~ # cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
systemd-timesync:x:101:101:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-network:x:102:103:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:104:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:104:110::/nonexistent:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
_rpc:x:106:65534::/run/rpcbind:/usr/sbin/nologin
postfix:x:107:113::/var/spool/postfix:/usr/sbin/nologin
statd:x:108:65534::/var/lib/nfs:/usr/sbin/nologin
gluster:x:109:116::/var/lib/glusterd:/usr/sbin/nologin
tss:x:110:117:TPM software stack,,,:/var/lib/tpm:/bin/false
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
zabbix:x:111:118::/nonexistent:/usr/sbin/nologin
tcpdump:x:112:119::/nonexistent:/usr/sbin/nologin

But "Login failed. Please try again."
 

Attachments
  • dTQjcKCv0F.png
Code:
root@prox00 ~ # pveum user list
[... same output as above ...]
root@prox00 ~ # cat /etc/passwd
[... same output as above ...]

But I still get "Login failed. Please try again."
You have to select the correct realm: Proxmox VE authentication server instead of Linux PAM standard authentication to log in with users authenticated by pve instead of pam.

Edit: Sorry, I did not see that you tried with root; I was fixated on the admin user. Please post the output of tail -n 10 /var/log/auth.log and journalctl --since -30min -u pvedaemon.service. Are you sure your password is correct? What about the keyboard layout?
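
For convenience, both commands in one go:
Code:
tail -n 10 /var/log/auth.log
journalctl --since -30min -u pvedaemon.service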
 
root@prox00 ~ # journalctl --since -180min -u pvedaemon.service
-- Journal begins at Wed 2022-07-27 21:26:58 MSK, ends at Mon 2023-06-12 16:17:01 MSK. --
Jun 12 14:29:08 prox00 systemd[1]: Reloading PVE API Daemon.
Jun 12 14:29:08 prox00 pvedaemon[1624539]: send HUP to 487982
Jun 12 14:29:08 prox00 pvedaemon[487982]: received signal HUP
Jun 12 14:29:08 prox00 pvedaemon[487982]: server closing
Jun 12 14:29:08 prox00 pvedaemon[487982]: server shutdown (restart)
Jun 12 14:29:08 prox00 systemd[1]: Reloaded PVE API Daemon.
Jun 12 14:29:09 prox00 pvedaemon[487982]: restarting server
Jun 12 14:29:09 prox00 pvedaemon[487982]: starting 3 worker(s)
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624544 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624545 started
Jun 12 14:29:09 prox00 pvedaemon[487982]: worker 1624546 started
Jun 12 14:29:14 prox00 pvedaemon[487984]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487985]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487983]: worker exit
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487984 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487985 finished
Jun 12 14:29:14 prox00 pvedaemon[487982]: worker 487983 finished
root@prox00 ~ # find /var/log -name "*" -mmin -10 -ls
218105905 44 -rw-r----- 1 root adm 39966 Jun 12 16:22 /var/log/daemon.log
218105229 24 -rw-r----- 1 www-data www-data 18923 Jun 12 16:22 /var/log/pveproxy/access.log
218105902 56 -rw-r----- 1 root adm 49679 Jun 12 16:22 /var/log/syslog
218105914 12 -rw-r----- 1 root adm 11042 Jun 12 16:17 /var/log/auth.log
218108680 32772 -rw-r----- 1 root systemd-journal 33554432 Jun 12 16:22 /var/log/journal/9b3a67246ee7464289595eec52061b7c/system.journal
root@prox00 ~ # tail /var/log/daemon.log
Jun 12 14:33:36 prox00 systemd[1]: Reloading.
Jun 12 14:41:34 prox00 pveproxy[1624557]: proxy detected vanished client connection
Jun 12 15:33:13 prox00 systemd[1]: session-107.scope: Succeeded.
Jun 12 15:33:13 prox00 systemd[1]: session-107.scope: Consumed 1.074s CPU time.
Jun 12 15:40:21 prox00 pmxcfs[1652386]: [main] notice: unable to acquire pmxcfs lock - trying again
Jun 12 15:40:31 prox00 pmxcfs[1652386]: [main] crit: unable to acquire pmxcfs lock: Resource temporarily unavailable
Jun 12 15:40:31 prox00 pmxcfs[1652386]: [main] notice: exit proxmox configuration filesystem (-1)
Jun 12 16:07:35 prox00 pveproxy[1624558]: detected empty handle
Jun 12 16:21:32 prox00 pveproxy[1624556]: proxy detected vanished client connection
Jun 12 16:22:13 prox00 pveproxy[1624557]: proxy detected vanished client connection
root@prox00 ~ #
root@prox00 ~ # tail /var/log/pveproxy/access.log
::ffff:188.239.121.175 - - [12/06/2023:16:07:36 +0300] "GET /pve2/ext6/theme-crisp/resources/images/tree/arrows.png HTTP/1.1" 200 3078
::ffff:188.239.121.175 - - [12/06/2023:16:07:36 +0300] "GET /pve2/ext6/theme-crisp/resources/images/grid/sort_desc.png HTTP/1.1" 200 18260
::ffff:188.239.121.175 - - [12/06/2023:16:07:36 +0300] "GET /pve2/images/proxmox_logo.png HTTP/1.1" 200 2809
::ffff:188.239.121.175 - - [12/06/2023:16:07:36 +0300] "GET /api2/json/access/domains HTTP/1.1" 200 159
::ffff:188.239.121.175 - - [12/06/2023:16:07:36 +0300] "GET /pve2/fa/fonts/fontawesome-webfont.woff2?v=4.7.0 HTTP/1.1" 200 77160
::ffff:188.239.121.175 - - [12/06/2023:16:21:02 +0300] "GET /pve2/ext6/theme-crisp/resources/images/loadmask/loading.gif HTTP/1.1" 200 1849
::ffff:188.239.121.175 - - [12/06/2023:16:21:32 +0300] "POST /api2/extjs/access/ticket HTTP/1.1" 500 -
::ffff:188.239.121.175 - - [12/06/2023:16:21:32 +0300] "GET /pve2/ext6/theme-crisp/resources/images/tools/tool-sprites.png HTTP/1.1" 200 24404
::ffff:188.239.121.175 - - [12/06/2023:16:21:32 +0300] "GET /pve2/ext6/theme-crisp/resources/images/shared/icon-error.png HTTP/1.1" 200 18494
::ffff:188.239.121.175 - - [12/06/2023:16:22:13 +0300] "POST /api2/extjs/access/ticket HTTP/1.1" 500 -
root@prox00 ~ #
root@prox00 ~ # tail /var/log/syslog
Jun 12 15:17:01 prox00 CRON[1645906]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 12 15:33:13 prox00 systemd[1]: session-107.scope: Succeeded.
Jun 12 15:33:13 prox00 systemd[1]: session-107.scope: Consumed 1.074s CPU time.
Jun 12 15:40:21 prox00 pmxcfs[1652386]: [main] notice: unable to acquire pmxcfs lock - trying again
Jun 12 15:40:31 prox00 pmxcfs[1652386]: [main] crit: unable to acquire pmxcfs lock: Resource temporarily unavailable
Jun 12 15:40:31 prox00 pmxcfs[1652386]: [main] notice: exit proxmox configuration filesystem (-1)
Jun 12 16:07:35 prox00 pveproxy[1624558]: detected empty handle
Jun 12 16:17:01 prox00 CRON[1664119]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 12 16:21:32 prox00 pveproxy[1624556]: proxy detected vanished client connection
Jun 12 16:22:13 prox00 pveproxy[1624557]: proxy detected vanished client connection
root@prox00 ~ # tail /var/log/auth.log
Jun 12 15:17:01 prox00 CRON[1645905]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jun 12 15:17:01 prox00 CRON[1645905]: pam_unix(cron:session): session closed for user root
Jun 12 15:33:13 prox00 sshd[1598201]: pam_unix(sshd:session): session closed for user root
Jun 12 15:33:13 prox00 systemd-logind[931]: Session 107 logged out. Waiting for processes to exit.
Jun 12 15:33:13 prox00 systemd-logind[931]: Removed session 107.
Jun 12 16:09:15 prox00 sudo: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/usr/bin/apt-get install smartmontools
Jun 12 16:09:15 prox00 sudo: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jun 12 16:09:15 prox00 sudo: pam_unix(sudo:session): session closed for user root
Jun 12 16:17:01 prox00 CRON[1664118]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jun 12 16:17:01 prox00 CRON[1664118]: pam_unix(cron:session): session closed for user root
root@prox00 ~ # tail /var/log/journal/9b3a67246ee7464289595eec52061b7c/system.journal
[binary journal data from system.journal, not human-readable; the only legible fragments are the same pveproxy "proxy detected vanished client connection" messages already shown above]
root@prox00 ~ # date
Mon 12 Jun 2023 04:23:35 PM MSK
root@prox00 ~ # tail -n 10 /var/log/auth.log
Jun 12 15:17:01 prox00 CRON[1645905]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jun 12 15:17:01 prox00 CRON[1645905]: pam_unix(cron:session): session closed for user root
Jun 12 15:33:13 prox00 sshd[1598201]: pam_unix(sshd:session): session closed for user root
Jun 12 15:33:13 prox00 systemd-logind[931]: Session 107 logged out. Waiting for processes to exit.
Jun 12 15:33:13 prox00 systemd-logind[931]: Removed session 107.
Jun 12 16:09:15 prox00 sudo: root : TTY=pts/0 ; PWD=/root ; USER=root ; COMMAND=/usr/bin/apt-get install smartmontools
Jun 12 16:09:15 prox00 sudo: pam_unix(sudo:session): session opened for user root(uid=0) by root(uid=0)
Jun 12 16:09:15 prox00 sudo: pam_unix(sudo:session): session closed for user root
Jun 12 16:17:01 prox00 CRON[1664118]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Jun 12 16:17:01 prox00 CRON[1664118]: pam_unix(cron:session): session closed for user root
 
POST /api2/extjs/access/ticket HTTP/1.1" 500 -
Seems like your request doesn't even get through to the PAM auth and you get a 500 error response for the login request.

Please check the response body for further hints on what is wrong. You can do that by opening the browser's developer tools and going to the network tab. Then try to log in and inspect the response to the POST request to the ticket endpoint.
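
If the browser is inconvenient, the same time window can also be checked on the node itself; a sketch (the 15-minute window is only an example):
Code:
journalctl -u pveproxy -u pvedaemon --since "-15min" --no-pager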
 
Code:
Chain INPUT (policy DROP 248K packets, 14M bytes)
 pkts bytes target     prot opt in     out     source               destination
40658 2939K ACCEPT     tcp  --  *      *       xxx.xxx.xxx.xxx      0.0.0.0/0            multiport dports 2233,8006
  355 18388 DROP       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 2233,8006
   15   980 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:111
 248K   14M PVEFW-INPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 47M packets, 4072M bytes)
 pkts bytes target     prot opt in     out     source               destination
 926M  837G PVEFW-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 52873 packets, 56M bytes)
 pkts bytes target     prot opt in     out     source               destination
52873   56M PVEFW-OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain PVEFW-Drop (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PVEFW-DropBroadcast  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            icmptype 3 code 4
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            icmptype 11
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 135,445
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpts:137:139
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:137 dpts:1024:65535
    0     0 DROP       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 135,139,445
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:1900
    0     0 DROP       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp flags:!0x17/0x02
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:53
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:83WlR/a4wLbmURFqMQT3uJSgIG8 */

Chain PVEFW-DropBroadcast (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type BROADCAST
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type MULTICAST
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type ANYCAST
    0     0 DROP       all  --  *      *       0.0.0.0/0            224.0.0.0/4
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:NyjHNAtFbkH7WGLamPpdVnxHy4w */

Chain PVEFW-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
 7777  397K DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
 880M  833G ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 PVEFW-FWBR-IN  all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in fwln+ --physdev-is-bridged
   92 14719 PVEFW-FWBR-OUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-out fwln+ --physdev-is-bridged
  47M 4072M            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:qnNexOcGa+y+jebd4dAUqFSp5nw */

Chain PVEFW-FWBR-IN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PVEFW-smurfs  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID,NEW
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:Ijl7/xz0DD7LF91MlLCz0ybZBE0 */

Chain PVEFW-FWBR-OUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
   92 14719            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:2jmj7l5rSw0yVb/vlWAYkK/YBwk */

Chain PVEFW-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
 248K   14M            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:2jmj7l5rSw0yVb/vlWAYkK/YBwk */

Chain PVEFW-OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
52873   56M            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:2jmj7l5rSw0yVb/vlWAYkK/YBwk */

Chain PVEFW-Reject (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PVEFW-DropBroadcast  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            icmptype 3 code 4
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            icmptype 11
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    0     0 PVEFW-reject  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 135,445
    0     0 PVEFW-reject  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpts:137:139
    0     0 PVEFW-reject  udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:137 dpts:1024:65535
    0     0 PVEFW-reject  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 135,139,445
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp dpt:1900
    0     0 DROP       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp flags:!0x17/0x02
    0     0 DROP       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            udp spt:53
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:h3DyALVslgH5hutETfixGP08w7c */

Chain PVEFW-SET-ACCEPT-MARK (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x80000000
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:Hg/OIgIwJChBUcWU8Xnjhdd2jUY */

Chain PVEFW-logflags (5 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:MN4PH1oPZeABMuWr64RrygPfW7A */

Chain PVEFW-reject (4 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type BROADCAST
    0     0 DROP       all  --  *      *       224.0.0.0/4          0.0.0.0/0
    0     0 DROP       icmp --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 REJECT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with tcp-reset
    0     0 REJECT     udp  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
    0     0 REJECT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-unreachable
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:Jlkrtle1mDdtxDeI9QaDSL++Npc */

Chain PVEFW-smurflog (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:2gfT1VMkfr0JL6OccRXTGXo+1qk */

Chain PVEFW-smurfs (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0              0.0.0.0/0
    0     0 PVEFW-smurflog  all  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]  ADDRTYPE match src-type BROADCAST
    0     0 PVEFW-smurflog  all  --  *      *       224.0.0.0/4          0.0.0.0/0           [goto]
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:HssVe5QCBXd5mc9kC88749+7fag */

Chain PVEFW-tcpflags (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 PVEFW-logflags  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]  tcp flags:0x3F/0x29
    0     0 PVEFW-logflags  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]  tcp flags:0x3F/0x00
    0     0 PVEFW-logflags  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]  tcp flags:0x06/0x06
    0     0 PVEFW-logflags  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]  tcp flags:0x03/0x03
    0     0 PVEFW-logflags  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           [goto]  tcp spt:0 flags:0x17/0x02
    0     0            all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* PVESIG:CMFojwNPqllyqD67NeI5m+bP5mo */


root@prox00 ~ # iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 9824K packets, 604M bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx      0.0.0.0/0            tcp dpt:33389 to:192.168.50.111:3389
    0     0 DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx      0.0.0.0/0            tcp dpt:55532 to:192.168.50.111:80
 4427  308K DNAT       47   --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            to:192.168.50.102
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55523 to:192.168.50.102:55523
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55524 to:192.168.50.102:55524
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55525 to:192.168.50.102:55525
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55526 to:192.168.50.102:55526
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55527 to:192.168.50.102:55527
    1    44 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55528 to:192.168.50.102:55528
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55529 to:192.168.50.102:55529
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55530 to:192.168.50.102:55530
    0     0 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:55531 to:192.168.50.102:55531
    5   260 DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx      0.0.0.0/0            tcp dpt:22 to:192.168.50.102:22
51588 2744K DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:192.168.50.102:80
39529 2177K DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:192.168.50.102:443
    0     0 DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx       0.0.0.0/0            tcp dpt:1723 to:192.168.50.102:1723
   12   688 DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx      0.0.0.0/0            tcp dpt:3306 to:192.168.50.102:3306
    0     0 DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx        0.0.0.0/0            tcp dpt:3306 to:192.168.50.102:3306
11103  665K DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8000 to:192.168.50.102:8000
 737K   44M DNAT       tcp  --  enp41s0 *       xxx.xxx.xxx.xxx       0.0.0.0/0            tcp dpt:10050 to:192.168.50.102:10050
 1246 77884 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:21 to:192.168.50.102:21
  570 32108 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:4433 to:192.168.50.102:4443
  619 32192 DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpts:3000:3100 to:192.168.50.102:3000-3100
  13M  768M DNAT       tcp  --  enp41s0 *       0.0.0.0/0            0.0.0.0/0            tcp dpts:8010:10000 to:192.168.50.102:8010-10000

Chain INPUT (policy ACCEPT 728 packets, 38184 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 5350 packets, 322K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 14M packets, 819M bytes)
 pkts bytes target     prot opt in     out     source               destination
9610K  592M MASQUERADE  all  --  *      enp41s0  192.168.50.0/24      0.0.0.0/0
    0     0 MASQUERADE  all  --  *      enp41s0  192.168.123.0/24     0.0.0.0/0
    0     0 MASQUERADE  all  --  *      enp41s0  192.168.50.0/24      0.0.0.0/0
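
Independent of the firewall rules above, the GUI endpoint can also be tested locally on the node itself (a minimal sketch; only the HTTP status code is printed):
Code:
curl -k -s -o /dev/null -w '%{http_code}\n' https://127.0.0.1:8006/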
 

Attachments
  • VmTKSyLP6W.png
Okay, suddenly it is a 595 connection refused error? Did the pve-cluster service fail again in the meantime? I see these in the syslog:
Jun 12 15:40:21 prox00 pmxcfs[1652386]: [main] notice: unable to acquire pmxcfs lock - trying again
Jun 12 15:40:31 prox00 pmxcfs[1652386]: [main] crit: unable to acquire pmxcfs lock: Resource temporarily unavailable
Jun 12 15:40:31 prox00 pmxcfs[1652386]: [main] notice: exit proxmox configuration filesystem (-1)
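
A quick way to check whether pvedaemon is still reachable for the proxy (a sketch, assuming the default setup where pvedaemon listens on 127.0.0.1:85 and pveproxy on :8006):
Code:
systemctl is-active pve-cluster pvedaemon pveproxy
ss -tlnp | grep -E ':(85|8006)\b'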
 
Should I run
Code:
systemctl restart pve-cluster
?
On a second look, it seems the pmxcfs errors stem from the time you tried to start it manually, so no worries there.
Please try to see if you get some more information from the response; you can use curl for that:
Code:
curl -s -w "%{http_code}\\n" -k --data-urlencode "username=root" --data-urlencode "password=<yourpasswd>" "https://<ip-or-domain>:8006/api2/json/access/ticket"
 
The command
Code:
curl -s -w "%{http_code}\\n" -k --data-urlencode "username=root" --data-urlencode "password=<yourpasswd>" "https://<ip-or-domain>:8006/api2/json/access/ticket"
returned
Code:
000
You will have to put in your IP address and password; don't just copy, paste, and execute it as is.
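
For example, run from the node itself (the password is a placeholder, the realm is given explicitly; adjust as needed):
Code:
curl -s -w "%{http_code}\n" -k \
  --data-urlencode "username=root@pam" \
  --data-urlencode "password=YOUR_ACTUAL_PASSWORD" \
  "https://127.0.0.1:8006/api2/json/access/ticket"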
 
