Proxmox server: can't log in to the web UI

KathouQC

New Member
Mar 26, 2023
Hi,

I am able to connect to the server over SSH, but in the web UI I get this:

"Wrong password or username". If I restart the server I am able to log in, but it's pretty slow, and after a couple of hours I can no longer log in again.

I have tried:

Clearing the browser cache
Updating the server to the latest version (commands below)
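
For reference, a standard CLI update of a Proxmox VE 7 node (assuming the repositories are already configured) looks like this:

Bash:
apt update
apt full-upgrade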

Please note that my server is part of a cluster of 5 servers; only 3 nodes are online, and I am able to log in to the web UI on only 1 of them.
Code:
systemctl status pve-manager


    Loaded: loaded (/lib/systemd/system/pve-guests.service; enabled; vendor preset: enabled)
     Active: active (exited) since Wed 2023-09-06 20:42:06 EDT; 4 days ago
   Main PID: 26731 (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 202232)
     Memory: 0B
        CPU: 0
     CGroup: /system.slice/pve-guests.service


Sep 06 20:41:59 R740XD-1 pvesh[26731]: Starting CT 126
Sep 06 20:41:59 R740XD-1 pve-guests[26734]: <root@pam> starting task UPID:R740XD-1:000123CD:00015C71:64F91C57:vzstart:126:root@pam:
Sep 06 20:41:59 R740XD-1 pve-guests[74701]: starting CT 126: UPID:R740XD-1:000123CD:00015C71:64F91C57:vzstart:126:root@pam:
Sep 06 20:42:02 R740XD-1 pvesh[26731]: Starting CT 128
Sep 06 20:42:02 R740XD-1 pve-guests[26734]: <root@pam> starting task UPID:R740XD-1:00012F81:00015D9F:64F91C5A:vzstart:128:root@pam:
Sep 06 20:42:02 R740XD-1 pve-guests[77697]: starting CT 128: UPID:R740XD-1:00012F81:00015D9F:64F91C5A:vzstart:128:root@pam:
Sep 06 20:42:05 R740XD-1 pve-guests[77697]: startup for container '128' failed
Sep 06 20:42:06 R740XD-1 pvesh[26731]: Starting CT 128 failed: startup for container '128' failed
Sep 06 20:42:06 R740XD-1 pve-guests[26731]: <root@pam> end task UPID:R740XD-1:0000686E:00005A81:64F919C3:startall::root@pam: OK
Sep 06 20:42:06 R740XD-1 systemd[1]: Finished PVE guests.




From the syslog:


Sep 11 06:00:43 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4689 ms
Sep 11 06:00:50 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10943 ms
Sep 11 06:00:54 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:00:54 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7f5c) was formed. Members
Sep 11 06:00:58 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4689 ms
Sep 11 06:01:05 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10943 ms
Sep 11 06:01:09 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:01:09 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7f68) was formed. Members
Sep 11 06:01:13 R740XD-1 systemd[1]: systemd-timedated.service: Succeeded.
Sep 11 06:01:13 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4689 ms
Sep 11 06:01:20 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10943 ms
Sep 11 06:01:24 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:01:24 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7f74) was formed. Members
Sep 11 06:01:28 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4690 ms
Sep 11 06:01:35 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10943 ms
Sep 11 06:01:39 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:01:39 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7f80) was formed. Members
Sep 11 06:01:43 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4688 ms
Sep 11 06:01:50 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10942 ms
Sep 11 06:01:54 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:01:54 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7f8c) was formed. Members
Sep 11 06:01:58 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4688 ms
Sep 11 06:02:05 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10942 ms
Sep 11 06:02:09 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:02:09 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7f98) was formed. Members
Sep 11 06:02:13 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4688 ms
Sep 11 06:02:20 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10942 ms
Sep 11 06:02:24 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:02:24 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7fa4) was formed. Members
Sep 11 06:02:28 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4689 ms
Sep 11 06:02:35 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 10942 ms
Sep 11 06:02:39 R740XD-1 corosync[26126]:   [QUORUM] Sync members[3]: 1 2 3
Sep 11 06:02:39 R740XD-1 corosync[26126]:   [TOTEM ] A new membership (1.b7fb0) was formed. Members
Sep 11 06:02:43 R740XD-1 corosync[26126]:   [TOTEM ] Token has not been received in 4689 ms
... Token has not been received in 10943 ms
 
Could you provide us with the syslog since the latest boot? You can do that by issuing the following command:

Bash:
journalctl -b > /tmp/syslog.txt

Then attach syslog.txt to this thread.
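
If the full log is too large to attach, a filtered view of the services involved in web UI logins can also help; for example (these units exist on every Proxmox VE node):

Bash:
journalctl -b -u pveproxy.service -u pvedaemon.service -u pve-cluster.service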
 
Thank you for the syslog!

I would check the network, especially the Corosync network configuration (you can post your Corosync config here so we can review it). Also make sure that the time is the same on all nodes, and check that chrony is installed and running on all nodes.
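
For example, these commands (run on each node) are one way to verify time synchronisation and the cluster view; chrony and the pvecm tool are assumed to be present, as in a default Proxmox VE install:

Bash:
# cluster membership and quorum as seen by this node
pvecm status
# NTP synchronisation state reported by chrony
chronyc tracking
# system clock / NTP status summary
timedatectl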
 
Cluster information:

I am able to ping the other servers from the SSH session on each machine.

The time is the same on all the servers.

Also, will I be able to upgrade to Proxmox 8.x, or will that not fix the issue?


Code:
Cluster information
-------------------
Name:             KathouQC-HomeDC
Config Version:   9
Transport:        knet
Secure auth:      on


Quorum information
------------------
Date:             Tue Sep 12 08:55:05 2023
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000002
Ring ID:          1.9ea4c
Quorate:          Yes


Votequorum information
----------------------
Expected votes:   10
Highest expected: 10
Total votes:      7
Quorum:           6
Flags:            Quorate


Membership information
----------------------
    Nodeid      Votes Name
0x00000001          5 10.0.0.33
0x00000002          1 10.0.0.58 (local)
0x00000003          1 10.0.0.217
 
This is what I get for the upgrade check to 8.x:
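
For reference, this looks like the output of the pve7to8 upgrade checklist tool; it can be re-run at any time, and the --full flag adds the more expensive container checks mentioned at the end of the output:

Bash:
pve7to8
# or, including the expensive checks:
pve7to8 --full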



Code:
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =


Checking for package updates..
PASS: all packages up-to-date


Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 7.4-1


Checking running kernel version..
PASS: running kernel '5.15.102-1-pve' is considered suitable for upgrade.


= CHECKING CLUSTER HEALTH/SETTINGS =


PASS: systemd unit 'pve-cluster.service' is in state 'active'
PASS: systemd unit 'corosync.service' is in state 'active'
PASS: Cluster Filesystem is quorate.


Analzying quorum settings and state..
WARN: non-default quorum_votes distribution detected!
FAIL: 4 nodes are offline!
INFO: configured votes - nodes: 10
INFO: configured votes - qdevice: 0
INFO: current expected votes: 10
INFO: current total votes: 7
WARN: total votes < expected votes: 7/10!


Checking nodelist entries..
PASS: nodelist settings OK


Checking totem settings..
PASS: totem settings OK


INFO: run 'pvecm status' to get detailed cluster status..


= CHECKING HYPER-CONVERGED CEPH STATUS =


SKIP: no hyper-converged ceph setup detected!


= CHECKING CONFIGURED STORAGES =


PASS: storage 'BackUP-Server' enabled and active.
SKIP: storage 'Backup_Server' disabled.
SKIP: storage 'Jupiter' disabled.
SKIP: storage 'Mars' disabled.
SKIP: storage 'R730XD' disabled.
PASS: storage 'R740XD-1' enabled and active.
SKIP: storage 'VM-SONAR' disabled.
PASS: storage 'local' enabled and active.
PASS: storage 'local-zfs' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.


= MISCELLANEOUS CHECKS =


INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvescheduler.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for supported & active NTP service..
PASS: Detected active time synchronisation unit 'chrony.service'
INFO: Checking for running guests..
WARN: 12 running guest(s) detected - consider migrating or stopping them.
INFO: Checking if the local node's hostname 'R740XD-1' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '10.0.0.58' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
INFO: Checking backup retention settings..
PASS: no backup retention problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking permission system changes..
INFO: Checking custom role IDs for clashes with new 'PVE' namespace..
PASS: no custom roles defined, so no clash with 'PVE' role ID namespace enforced in Proxmox VE 8
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
SKIP: not yet upgraded, no need to check the FUSE library version LXCFS uses
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking if the suite for the Debian security repository is correct..
PASS: found no suite mismatch
INFO: Checking for existence of NVIDIA vGPU Manager..
PASS: No NVIDIA vGPU Service found.
INFO: Checking bootloader configuration...
SKIP: not yet upgraded, no need to check the presence of systemd-boot
SKIP: NOTE: Expensive checks, like CT cgroupv2 compat, not performed without '--full' parameter


= SUMMARY =


TOTAL:    43
PASSED:   30
SKIPPED:  9
WARNINGS: 3
FAILURES: 1


ATTENTION: Please check the output for detailed information!
Try to solve the problems one at a time and then run this checklist tool again.
 
I would fix the issue before you upgrade to Proxmox VE 8.x.

Please post the output of the cat /etc/pve/corosync.conf command!
Code:
logging {
  debug: off
  to_syslog: yes
}


nodelist {
  node {
    name: Jupiter
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.0.0.217
    ring1_addr: 10.0.0.219
  }
  node {
    name: Mars
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.0.0.206
    ring1_addr: 10.0.0.208
  }
  node {
    name: Neptune
    nodeid: 5
    quorum_votes: 1
    ring0_addr: 10.0.0.60
    ring1_addr: 10.0.0.61
  }
  node {
    name: Pluton
    nodeid: 6
    quorum_votes: 0
    ring0_addr: 10.0.0.16
    ring1_addr: 10.0.0.17
  }
  node {
    name: R740XD-1
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.58
    ring1_addr: 10.0.0.59
  }
  node {
    name: Soleil
    nodeid: 7
    quorum_votes: 1
    ring0_addr: 10.0.0.148
    ring1_addr: 10.0.0.149
  }
  node {
    name: r730xd
    nodeid: 1
    quorum_votes: 5
    ring0_addr: 10.0.0.33
    ring1_addr: 10.0.0.44
  }
}


quorum {
  provider: corosync_votequorum
}


totem {
  cluster_name: KathouQC-HomeDC
  config_version: 9
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
 
Thank you for the output!

You have two rings in the same IP subnet. We recommend a separate NIC (and subnet) for Corosync, or at least giving ring0 an IP on a different network than ring1. Multiple links ensure that if one network path fails or is under heavy load, the other can take over, so the cluster remains operational. That is why we recommend having a second, independent link in the Corosync configuration.
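
As an illustration only (the 172.16.0.x addresses below are made up and would need to correspond to a real, physically separate network on your nodes), a node entry with its two rings on different subnets could look like this:

Code:
node {
  name: R740XD-1
  nodeid: 2
  quorum_votes: 1
  # existing LAN, ring 0
  ring0_addr: 10.0.0.58
  # hypothetical dedicated Corosync network, ring 1
  ring1_addr: 172.16.0.58
}

If you edit /etc/pve/corosync.conf, remember to also increment config_version in the totem section so the change is propagated to all nodes.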

Question: Have you tried to restart the corosync and pve-cluster services?

Bash:
systemctl restart corosync.service
systemctl restart pve-cluster.service
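
If the cluster services come back cleanly but the web UI login still fails, it may also be worth restarting the two daemons that handle the web UI itself (they exist on every Proxmox VE node):

Bash:
systemctl restart pvedaemon.service
systemctl restart pveproxy.service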
 
I have tried the commands; I think it worked because no errors showed, but I still can't log in.
 
I know that a while ago I changed the subnet mask from /24 to /22; could this break Proxmox?
It doesn't make sense for that to break anything if everything is still in the same subnet.

You can check the network configuration on the Proxmox VE side with `cat /etc/network/interfaces`.
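
As a sketch of what to look for (interface name, gateway, and bridge options here are placeholders, not taken from this thread), the changed netmask should show up in the bridge stanza, e.g.:

Code:
auto vmbr0
iface vmbr0 inet static
    # the /22 prefix must match what the other cluster nodes use
    address 10.0.0.58/22
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0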
 
