[SOLVED] I created a cluster and I can no longer access the web interface

william841

New Member
Nov 24, 2023
I created a cluster where pve01 had the wrong peer address, so I went into /etc/hosts and changed it to the correct address (the two machines are directly connected via a VLAN). After that I restarted pve02, and since then I can no longer log in to the web interface.
I apologize if I missed any information; I'm a beginner.
 
Please provide more information:
- pveversion -v
- network config
- /etc/hosts
- error messages / logs / service status

from all nodes!
 
PVE-01
####### pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-7 (running version: 7.1-7/df5740ad)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-2
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

######NETWORK

auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual
# gigabit PCI card

auto enxf8e43b6481cc
iface enxf8e43b6481cc inet static
address 192.168.198.2/24

auto enp3s0
iface enp3s0 inet static
address 192.168.200.2/30

auto vmbr0
iface vmbr0 inet manual
bridge-ports enp2s0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto vmbr0.250
iface vmbr0.250 inet static
address 192.168.199.2/24
gateway 192.168.199.1

auto vmbr0.251
iface vmbr0.251 inet manual

auto vmbr0.252
iface vmbr0.252 inet manual

auto vmbr0.253
iface vmbr0.253 inet manual

auto vmbr0.254
iface vmbr0.254 inet static
address 192.168.199.9/30

auto vmbr0.255
iface vmbr0.255 inet manual

auto vmbr0.256
iface vmbr0.256 inet manual

auto vmbr0.257
iface vmbr0.257 inet manual

auto vmbr0.258
iface vmbr0.258 inet manual

auto vmbr0.259
iface vmbr0.259 inet manual


######## /etc/hosts


127.0.0.1 localhost.localdomain localhost
192.168.199.9 SRV-PVE-01.ventosulpiratini.net.br SRV-PVE-01

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

##### syslog
Nov 24 09:37:14 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 24 09:37:14 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 24 09:37:14 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 24 09:37:14 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 24 09:37:15 SRV-PVE-01 pvescheduler[2619062]: jobs: cfs-lock 'file-jobs_cfg' error: no quorum!
Nov 24 09:37:15 SRV-PVE-01 pvescheduler[2619061]: replication: cfs-lock 'file-replication_cfg' error: no quorum!
Nov 24 09:37:20 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 24 09:37:20 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 24 09:37:20 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 24 09:37:20 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 24 09:37:23 SRV-PVE-01 pvestatd[1063]: unable to activate storage 'hd-500' - directory is expected to be a mount point but is not mounted: '/mnt/pve/hd-500'

######## journalctl -u corosync.service

Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [WD ] Watchdog not enabled by configuration
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [WD ] resource load_15min missing a recovery key.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [WD ] resource memory_used missing a recovery key.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [WD ] no resources configured.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine loaded: corosync watchdog service [7]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QUORUM] Using quorum provider corosync_votequorum
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QUORUM] This node is within the primary component and will provide ser>
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QUORUM] Members[0]:
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QB ] server name: votequorum
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QB ] server name: quorum
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [TOTEM ] Configuring link 0
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [TOTEM ] Configured link number 0: local addr: 192.168.199.9, port=5405
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QUORUM] Sync members[1]: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QUORUM] Sync joined[1]: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [TOTEM ] A new membership (1.5) was formed. Members joined: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [QUORUM] Members[1]: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 23 17:47:18 SRV-PVE-01 systemd[1]: Started Corosync Cluster Engine.
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: Stopping Corosync Cluster Engine...
Nov 23 17:54:45 SRV-PVE-01 corosync-cfgtool[2463034]: Shutting down corosync
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [CFG ] Node 1 was shut down by sysadmin
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Unloading all Corosync service engines.
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [QB ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync vote quorum service v1.0
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [QB ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync configuration map access
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [QB ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync configuration service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [QB ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync cluster closed process group>
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [QB ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync profile loading service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync resource monitoring service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [SERV ] Service engine unloaded: corosync watchdog service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]: [MAIN ] Corosync Cluster Engine exiting normally
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: corosync.service: Succeeded.
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: Stopped Corosync Cluster Engine.
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: corosync.service: Consumed 3.077s CPU time.
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: Starting Corosync Cluster Engine...
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]: [MAIN ] Corosync Cluster Engine 3.1.5 starting up
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]: [MAIN ] Corosync built-in features: dbus monitoring watchdog systemd x>
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]: [MAIN ] Could not open /etc/corosync/authkey: No such file or directory
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1417.
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: corosync.service: Failed with result 'exit-code'.
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: Failed to start Corosync Cluster Engine.
 
PVE-02

##### pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.102-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.3-3
pve-kernel-5.15.102-1-pve: 5.15.102-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-3
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-1
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.6.3
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.11-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1



###### NETWORK
auto lo
iface lo inet loopback

iface enp2s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.199.6/30
gateway 192.168.199.5
bridge-ports enp2s0f1
bridge-stp off
bridge-fd 0

iface wlp3s0 inet manual

auto vlan254
iface vlan254 inet static
address 192.168.199.10/30
vlan-raw-device vmbr0


####### /etc/hosts


127.0.0.1 localhost.localdomain localhost
192.168.199.6 pve2.ventosulpiratini.com.br pve2

# The following lines are desirable for IPv6 capable hosts

::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


####### syslog

Nov 24 05:30:08 pve2 pveupdate[55466]: update new package list: /var/lib/pve-manager/pkgupdates
Nov 24 05:30:08 pve2 pveupdate[55374]: error reading cached package status in '/var/lib/pve-manager/pkgupdates' - can't open '/var/lib/pve-manager/pkgupdates' - No such file or directory
Nov 24 05:30:08 pve2 pveupdate[55374]: <root@pam> end task UPID:pve2:0000D8AA:0025568C:65605ED4:aptupdate::root@pam: OK
Nov 24 05:30:08 pve2 systemd[1]: pve-daily-update.service: Succeeded.
Nov 24 05:30:08 pve2 systemd[1]: Finished Daily PVE download activities.
Nov 24 05:41:47 pve2 systemd[1]: Starting Daily apt download activities...
Nov 24 05:41:47 pve2 systemd[1]: apt-daily.service: Succeeded.
Nov 24 05:41:47 pve2 systemd[1]: Finished Daily apt download activities.
Nov 24 06:17:01 pve2 CRON[61924]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 24 06:25:01 pve2 CRON[62992]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
Nov 24 06:44:29 pve2 systemd[1]: Starting Daily apt upgrade and clean activities...
Nov 24 06:44:29 pve2 systemd[1]: apt-daily-upgrade.service: Succeeded.
Nov 24 06:44:29 pve2 systemd[1]: Finished Daily apt upgrade and clean activities.
Nov 24 07:17:01 pve2 CRON[70026]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 24 08:17:01 pve2 CRON[78017]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 24 09:17:01 pve2 CRON[86008]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 24 09:55:21 pve2 systemd[1]: Created slice User Slice of UID 0.
Nov 24 09:55:21 pve2 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 24 09:55:21 pve2 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 24 09:55:21 pve2 systemd[1]: Starting User Manager for UID 0...
Nov 24 09:55:21 pve2 systemd[91115]: Queued start job for default target Main User Target.
Nov 24 09:55:21 pve2 systemd[91115]: Created slice User Application Slice.
Nov 24 09:55:21 pve2 systemd[91115]: Reached target Paths.
Nov 24 09:55:21 pve2 systemd[91115]: Reached target Timers.
Nov 24 09:55:21 pve2 systemd[91115]: Listening on GnuPG network certificate management daemon.
Nov 24 09:55:21 pve2 systemd[91115]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Nov 24 09:55:21 pve2 systemd[91115]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Nov 24 09:55:21 pve2 systemd[91115]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Nov 24 09:55:21 pve2 systemd[91115]: Listening on GnuPG cryptographic agent and passphrase cache.
Nov 24 09:55:21 pve2 systemd[91115]: Reached target Sockets.
Nov 24 09:55:21 pve2 systemd[91115]: Reached target Basic System.
Nov 24 09:55:21 pve2 systemd[91115]: Reached target Main User Target.
Nov 24 09:55:21 pve2 systemd[91115]: Startup finished in 54ms.
Nov 24 09:55:21 pve2 systemd[1]: Started User Manager for UID 0.
Nov 24 09:55:21 pve2 systemd[1]: Started Session 16 of user root.
 
Please post the full output of

Code:
journalctl -b -u corosync -u pve-cluster -u pveproxy -u pvedaemon

for both nodes!
 
PVE-02
Code:
-- Journal begins at Thu 2023-11-23 16:57:08 -03, ends at Fri 2023-11-24 10:22:27 -03. --
Nov 23 22:41:25 pve2 systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 23 22:41:26 pve2 systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 23 22:41:26 pve2 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Nov 23 22:41:26 pve2 systemd[1]: Starting PVE API Daemon...
Nov 23 22:41:27 pve2 pvedaemon[982]: starting server
Nov 23 22:41:27 pve2 pvedaemon[982]: starting 3 worker(s)
Nov 23 22:41:27 pve2 pvedaemon[982]: worker 983 started
Nov 23 22:41:27 pve2 pvedaemon[982]: worker 984 started
Nov 23 22:41:27 pve2 pvedaemon[982]: worker 985 started
Nov 23 22:41:27 pve2 systemd[1]: Started PVE API Daemon.
Nov 23 22:41:27 pve2 systemd[1]: Starting PVE API Proxy Server...
Nov 23 22:41:28 pve2 pveproxy[991]: starting server
Nov 23 22:41:28 pve2 pveproxy[991]: starting 3 worker(s)
Nov 23 22:41:28 pve2 systemd[1]: Started PVE API Proxy Server.
Nov 23 22:41:28 pve2 pveproxy[991]: worker 992 started
Nov 23 22:41:28 pve2 pveproxy[991]: worker 993 started
Nov 23 22:41:28 pve2 pveproxy[991]: worker 994 started
 
PVE-01
Code:
-- Journal begins at Tue 2023-05-02 05:37:27 -03, ends at Fri 2023-11-24 10:13:08 -03. --
Nov 13 08:16:54 SRV-PVE-01 systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 13 08:16:55 SRV-PVE-01 systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 13 08:16:55 SRV-PVE-01 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Nov 13 08:16:55 SRV-PVE-01 systemd[1]: Starting PVE API Daemon...
Nov 13 08:16:57 SRV-PVE-01 pvedaemon[1092]: starting server
Nov 13 08:16:57 SRV-PVE-01 pvedaemon[1092]: starting 3 worker(s)
Nov 13 08:16:57 SRV-PVE-01 pvedaemon[1092]: worker 1093 started
Nov 13 08:16:57 SRV-PVE-01 pvedaemon[1092]: worker 1094 started
Nov 13 08:16:57 SRV-PVE-01 pvedaemon[1092]: worker 1095 started
Nov 13 08:16:57 SRV-PVE-01 systemd[1]: Started PVE API Daemon.
Nov 13 08:16:57 SRV-PVE-01 systemd[1]: Starting PVE API Proxy Server...
Nov 13 08:16:59 SRV-PVE-01 pveproxy[1101]: starting server
Nov 13 08:16:59 SRV-PVE-01 pveproxy[1101]: starting 3 worker(s)
Nov 13 08:16:59 SRV-PVE-01 pveproxy[1101]: worker 1102 started
Nov 13 08:16:59 SRV-PVE-01 pveproxy[1101]: worker 1103 started
Nov 13 08:16:59 SRV-PVE-01 pveproxy[1101]: worker 1104 started
Nov 13 08:16:59 SRV-PVE-01 systemd[1]: Started PVE API Proxy Server.
Nov 13 09:29:09 SRV-PVE-01 IPCC.xs[1095]: pam_unix(proxmox-ve-auth:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=  user=ro>
Nov 13 09:29:11 SRV-PVE-01 pvedaemon[1095]: authentication failure; rhost=::ffff:192.168.160.30 user=root@pam msg=Authentication failure
Nov 13 09:29:19 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 13 09:29:40 SRV-PVE-01 pvedaemon[1094]: <root@pam> starting task UPID:SRV-PVE-01:000034E5:0006CF7F:655216B4:qmstart:106:root@pam:
Nov 13 09:29:40 SRV-PVE-01 pvedaemon[13541]: start VM 106: UPID:SRV-PVE-01:000034E5:0006CF7F:655216B4:qmstart:106:root@pam:
Nov 13 09:29:41 SRV-PVE-01 pvedaemon[1094]: <root@pam> end task UPID:SRV-PVE-01:000034E5:0006CF7F:655216B4:qmstart:106:root@pam: OK
Nov 13 09:44:03 SRV-PVE-01 pvedaemon[1094]: <root@pam> successful auth for user 'root@pam'
Nov 13 10:34:59 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 13 11:06:10 SRV-PVE-01 pvedaemon[1095]: <root@pam> successful auth for user 'root@pam'
Nov 13 11:07:24 SRV-PVE-01 pveproxy[1102]: worker exit
Nov 13 11:07:24 SRV-PVE-01 pveproxy[1101]: worker 1102 finished
Nov 13 11:07:24 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 13 11:07:24 SRV-PVE-01 pveproxy[1101]: worker 29753 started
Nov 13 15:50:29 SRV-PVE-01 IPCC.xs[1094]: pam_unix(proxmox-ve-auth:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=  user=ro>
Nov 13 15:50:31 SRV-PVE-01 pvedaemon[1094]: authentication failure; rhost=::ffff:192.168.160.30 user=root@pam msg=Authentication failure
Nov 14 00:00:27 SRV-PVE-01 systemd[1]: Reloading PVE API Proxy Server.
Nov 14 00:00:29 SRV-PVE-01 pveproxy[156884]: send HUP to 1101
Nov 14 00:00:29 SRV-PVE-01 pveproxy[1101]: received signal HUP
Nov 14 00:00:29 SRV-PVE-01 systemd[1]: Reloaded PVE API Proxy Server.
Nov 14 00:00:29 SRV-PVE-01 pveproxy[1101]: server closing
Nov 14 00:00:29 SRV-PVE-01 pveproxy[1101]: server shutdown (restart)
Nov 14 00:00:30 SRV-PVE-01 pveproxy[1101]: restarting server
Nov 14 00:00:30 SRV-PVE-01 pveproxy[1101]: starting 3 worker(s)
Nov 14 00:00:30 SRV-PVE-01 pveproxy[1101]: worker 156929 started
Nov 14 00:00:30 SRV-PVE-01 pveproxy[1101]: worker 156930 started
Nov 14 00:00:30 SRV-PVE-01 pveproxy[1101]: worker 156931 started
Nov 14 00:00:35 SRV-PVE-01 pveproxy[29753]: worker exit
Nov 14 00:00:35 SRV-PVE-01 pveproxy[1104]: worker exit
Nov 14 00:00:35 SRV-PVE-01 pveproxy[1103]: worker exit
Nov 14 00:00:35 SRV-PVE-01 pveproxy[1101]: worker 29753 finished
Nov 14 00:00:35 SRV-PVE-01 pveproxy[1101]: worker 1103 finished
Nov 14 00:00:35 SRV-PVE-01 pveproxy[1101]: worker 1104 finished
Nov 16 18:06:41 SRV-PVE-01 IPCC.xs[1094]: pam_unix(proxmox-ve-auth:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=  user=ro>
Nov 16 18:06:43 SRV-PVE-01 pvedaemon[1094]: authentication failure; rhost=::ffff:192.168.160.30 user=root@pam msg=Authentication failure
Nov 16 18:06:54 SRV-PVE-01 pvedaemon[1094]: <root@pam> successful auth for user 'root@pam'
Nov 16 18:07:20 SRV-PVE-01 pvedaemon[1095]: <root@pam> starting task UPID:SRV-PVE-01:000C3CE7:01C1B67C:65568488:vncshell::root@pam:
Nov 16 18:07:20 SRV-PVE-01 pvedaemon[802023]: starting vnc proxy UPID:SRV-PVE-01:000C3CE7:01C1B67C:65568488:vncshell::root@pam:
Nov 16 18:07:20 SRV-PVE-01 pvedaemon[802023]: launch command: /usr/bin/vncterm -rfbport 5900 -timeout 10 -authpath /nodes/SRV-PVE-01 -perm Sys.Cons>
Nov 16 18:07:21 SRV-PVE-01 login[802050]: pam_unix(login:session): session opened for user root(uid=0) by (uid=0)
Nov 16 18:09:24 SRV-PVE-01 pvedaemon[802388]: start VM 102: UPID:SRV-PVE-01:000C3E54:01C1E6B2:65568504:qmstart:102:root@pam:
Nov 16 18:09:24 SRV-PVE-01 pvedaemon[1093]: <root@pam> starting task UPID:SRV-PVE-01:000C3E54:01C1E6B2:65568504:qmstart:102:root@pam:
Nov 16 18:09:25 SRV-PVE-01 pvedaemon[1093]: <root@pam> end task UPID:SRV-PVE-01:000C3E54:01C1E6B2:65568504:qmstart:102:root@pam: OK
Nov 16 18:21:28 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 16 18:35:59 SRV-PVE-01 pvedaemon[1095]: <root@pam> end task UPID:SRV-PVE-01:000C3CE7:01C1B67C:65568488:vncshell::root@pam: OK
Nov 16 18:36:29 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 16 18:51:53 SRV-PVE-01 pvedaemon[1094]: <root@pam> successful auth for user 'root@pam'
Nov 16 18:54:49 SRV-PVE-01 pveproxy[156929]: worker exit
Nov 16 18:54:49 SRV-PVE-01 pveproxy[1101]: worker 156929 finished
Nov 16 18:54:49 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 16 18:54:49 SRV-PVE-01 pveproxy[1101]: worker 809648 started
Nov 16 18:58:22 SRV-PVE-01 pveproxy[1101]: worker 156930 finished
Nov 16 18:58:22 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 16 18:58:22 SRV-PVE-01 pveproxy[1101]: worker 810209 started
Nov 16 18:58:23 SRV-PVE-01 pveproxy[810208]: worker exit
Nov 16 18:59:33 SRV-PVE-01 pveproxy[156931]: worker exit
Nov 16 18:59:33 SRV-PVE-01 pveproxy[1101]: worker 156931 finished
Nov 16 18:59:33 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 16 18:59:33 SRV-PVE-01 pveproxy[1101]: worker 810384 started
Nov 16 19:03:18 SRV-PVE-01 pvedaemon[1094]: worker exit
 
PVE-01
CONTINUE
Code:
Nov 16 19:03:19 SRV-PVE-01 pvedaemon[1092]: worker 810965 started
Nov 16 20:06:03 SRV-PVE-01 pvedaemon[1095]: <root@pam> successful auth for user 'root@pam'
Nov 17 00:00:31 SRV-PVE-01 systemd[1]: Reloading PVE API Proxy Server.
Nov 17 00:00:32 SRV-PVE-01 pveproxy[857860]: send HUP to 1101
Nov 17 00:00:32 SRV-PVE-01 pveproxy[1101]: received signal HUP
Nov 17 00:00:32 SRV-PVE-01 pveproxy[1101]: server closing
Nov 17 00:00:32 SRV-PVE-01 pveproxy[1101]: server shutdown (restart)
Nov 17 00:00:32 SRV-PVE-01 systemd[1]: Reloaded PVE API Proxy Server.
Nov 17 00:00:33 SRV-PVE-01 pveproxy[1101]: restarting server
Nov 17 00:00:33 SRV-PVE-01 pveproxy[1101]: starting 3 worker(s)
Nov 17 00:00:33 SRV-PVE-01 pveproxy[1101]: worker 857897 started
Nov 17 00:00:33 SRV-PVE-01 pveproxy[1101]: worker 857898 started
Nov 17 00:00:33 SRV-PVE-01 pveproxy[1101]: worker 857899 started
Nov 17 00:00:38 SRV-PVE-01 pveproxy[810209]: worker exit
Nov 17 00:00:38 SRV-PVE-01 pveproxy[810384]: worker exit
Nov 17 00:00:39 SRV-PVE-01 pveproxy[809648]: worker exit
Nov 17 00:00:39 SRV-PVE-01 pveproxy[1101]: worker 809648 finished
Nov 17 00:00:39 SRV-PVE-01 pveproxy[1101]: worker 810209 finished
Nov 17 00:00:39 SRV-PVE-01 pveproxy[1101]: worker 810384 finished
Nov 18 09:26:58 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 18 09:27:06 SRV-PVE-01 pvedaemon[1093]: unable to activate storage 'hd-500' - directory is expected to be a mount point but is not mounted: '/m>
Nov 18 09:27:26 SRV-PVE-01 pvedaemon[1184463]: starting termproxy UPID:SRV-PVE-01:001212CF:0299C954:6558ADAE:vncshell::root@pam:
Nov 18 09:27:26 SRV-PVE-01 pvedaemon[1095]: <root@pam> starting task UPID:SRV-PVE-01:001212CF:0299C954:6558ADAE:vncshell::root@pam:
Nov 18 09:27:27 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 18 09:27:27 SRV-PVE-01 login[1184468]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Nov 18 09:41:49 SRV-PVE-01 pvedaemon[1095]: <root@pam> successful auth for user 'root@pam'
Nov 18 09:49:49 SRV-PVE-01 pvedaemon[1095]: <root@pam> end task UPID:SRV-PVE-01:001212CF:0299C954:6558ADAE:vncshell::root@pam: OK
Nov 18 10:05:20 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 18 10:24:03 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 18 10:47:27 SRV-PVE-01 pvedaemon[1095]: <root@pam> successful auth for user 'root@pam'
Nov 18 10:47:50 SRV-PVE-01 pvedaemon[1095]: <root@pam> starting task UPID:SRV-PVE-01:0012421B:02A12570:6558C086:qmstart:102:root@pam:
Nov 18 10:47:50 SRV-PVE-01 pvedaemon[1196571]: start VM 102: UPID:SRV-PVE-01:0012421B:02A12570:6558C086:qmstart:102:root@pam:
Nov 18 10:47:51 SRV-PVE-01 pvedaemon[1095]: <root@pam> end task UPID:SRV-PVE-01:0012421B:02A12570:6558C086:qmstart:102:root@pam: OK
Nov 18 15:39:08 SRV-PVE-01 pvedaemon[1093]: <root@pam> successful auth for user 'root@pam'
Nov 18 15:39:17 SRV-PVE-01 pvedaemon[1095]: <root@pam> starting task UPID:SRV-PVE-01:0012FB7B:02BBD482:655904D5:vncshell::root@pam:
Nov 18 15:39:17 SRV-PVE-01 pvedaemon[1244027]: starting termproxy UPID:SRV-PVE-01:0012FB7B:02BBD482:655904D5:vncshell::root@pam:
Nov 18 15:39:17 SRV-PVE-01 pvedaemon[1095]: <root@pam> successful auth for user 'root@pam'
Nov 18 15:39:17 SRV-PVE-01 login[1244032]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Nov 18 15:39:20 SRV-PVE-01 pvedaemon[1093]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - unable to connect to VM>
Nov 18 15:39:53 SRV-PVE-01 pvedaemon[1095]: <root@pam> end task UPID:SRV-PVE-01:0012FB7B:02BBD482:655904D5:vncshell::root@pam: OK
Nov 18 15:39:56 SRV-PVE-01 pvedaemon[1244161]: start VM 102: UPID:SRV-PVE-01:0012FC01:02BBE3A7:655904FC:qmstart:102:root@pam:
Nov 18 15:39:56 SRV-PVE-01 pvedaemon[1093]: <root@pam> starting task UPID:SRV-PVE-01:0012FC01:02BBE3A7:655904FC:qmstart:102:root@pam:
Nov 18 15:39:57 SRV-PVE-01 pvedaemon[1093]: <root@pam> end task UPID:SRV-PVE-01:0012FC01:02BBE3A7:655904FC:qmstart:102:root@pam: OK
Nov 18 15:41:41 SRV-PVE-01 pvedaemon[1095]: worker exit
Nov 18 15:41:41 SRV-PVE-01 pvedaemon[1092]: worker 1095 finished
Nov 18 15:41:41 SRV-PVE-01 pvedaemon[1092]: starting 1 worker(s)
Nov 18 15:41:41 SRV-PVE-01 pvedaemon[1092]: worker 1244549 started
Nov 18 15:47:08 SRV-PVE-01 pvedaemon[1093]: worker exit
Nov 18 15:47:08 SRV-PVE-01 pvedaemon[1092]: worker 1093 finished
Nov 18 15:47:08 SRV-PVE-01 pvedaemon[1092]: starting 1 worker(s)
Nov 18 15:47:08 SRV-PVE-01 pvedaemon[1092]: worker 1245437 started
Nov 18 15:47:47 SRV-PVE-01 pveproxy[857899]: worker exit
Nov 18 15:47:47 SRV-PVE-01 pveproxy[1101]: worker 857899 finished
Nov 18 15:47:47 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 18 15:47:47 SRV-PVE-01 pveproxy[1101]: worker 1245545 started
Nov 18 15:54:00 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 18 16:00:40 SRV-PVE-01 pveproxy[857898]: worker exit
Nov 18 16:00:40 SRV-PVE-01 pveproxy[1101]: worker 857898 finished
Nov 18 16:00:40 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 18 16:00:40 SRV-PVE-01 pveproxy[1101]: worker 1247561 started
Nov 18 16:05:47 SRV-PVE-01 pveproxy[857897]: worker exit
Nov 18 16:05:47 SRV-PVE-01 pveproxy[1101]: worker 857897 finished
Nov 18 16:05:47 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 18 16:05:47 SRV-PVE-01 pveproxy[1101]: worker 1248419 started
Nov 18 16:09:00 SRV-PVE-01 pvedaemon[1244549]: <root@pam> successful auth for user 'root@pam'
Nov 19 00:00:28 SRV-PVE-01 systemd[1]: Reloading PVE API Proxy Server.
Nov 19 00:00:29 SRV-PVE-01 pveproxy[1326632]: send HUP to 1101
Nov 19 00:00:29 SRV-PVE-01 pveproxy[1101]: received signal HUP
Nov 19 00:00:29 SRV-PVE-01 pveproxy[1101]: server closing
Nov 19 00:00:29 SRV-PVE-01 pveproxy[1101]: server shutdown (restart)
Nov 19 00:00:29 SRV-PVE-01 systemd[1]: Reloaded PVE API Proxy Server.
Nov 19 00:00:30 SRV-PVE-01 pveproxy[1101]: restarting server
Nov 19 00:00:30 SRV-PVE-01 pveproxy[1101]: starting 3 worker(s)
Nov 19 00:00:30 SRV-PVE-01 pveproxy[1101]: worker 1326661 started
Nov 19 00:00:30 SRV-PVE-01 pveproxy[1101]: worker 1326662 started
Nov 19 00:00:30 SRV-PVE-01 pveproxy[1101]: worker 1326663 started
Nov 19 00:00:35 SRV-PVE-01 pveproxy[1247561]: worker exit
Nov 19 00:00:35 SRV-PVE-01 pveproxy[1248419]: worker exit
Nov 19 00:00:35 SRV-PVE-01 pveproxy[1245545]: worker exit
Nov 19 00:00:35 SRV-PVE-01 pveproxy[1101]: worker 1247561 finished
Nov 19 00:00:35 SRV-PVE-01 pveproxy[1101]: worker 1248419 finished
Nov 19 00:00:35 SRV-PVE-01 pveproxy[1101]: worker 1245545 finished
Nov 20 15:34:57 SRV-PVE-01 IPCC.xs[1245437]: pam_unix(proxmox-ve-auth:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=  user>
Nov 20 15:34:59 SRV-PVE-01 pvedaemon[1245437]: authentication failure; rhost=::ffff:10.10.10.10 user=root@pam msg=Authentication failure
Nov 21 00:00:24 SRV-PVE-01 systemd[1]: Reloading PVE API Proxy Server.
Nov 21 00:00:25 SRV-PVE-01 pveproxy[1802891]: send HUP to 1101
Nov 21 00:00:25 SRV-PVE-01 pveproxy[1101]: received signal HUP
Nov 21 00:00:25 SRV-PVE-01 pveproxy[1101]: server closing
Nov 21 00:00:25 SRV-PVE-01 pveproxy[1101]: server shutdown (restart)
Nov 21 00:00:25 SRV-PVE-01 systemd[1]: Reloaded PVE API Proxy Server.
Nov 21 00:00:26 SRV-PVE-01 pveproxy[1101]: restarting server
Nov 21 00:00:26 SRV-PVE-01 pveproxy[1101]: starting 3 worker(s)
Nov 21 00:00:26 SRV-PVE-01 pveproxy[1101]: worker 1802912 started
Nov 21 00:00:26 SRV-PVE-01 pveproxy[1101]: worker 1802913 started
Nov 21 00:00:26 SRV-PVE-01 pveproxy[1101]: worker 1802914 started
Nov 21 00:00:31 SRV-PVE-01 pveproxy[1326662]: worker exit
Nov 21 00:00:31 SRV-PVE-01 pveproxy[1326661]: worker exit
Nov 21 00:00:31 SRV-PVE-01 pveproxy[1326663]: worker exit
Nov 21 00:00:31 SRV-PVE-01 pveproxy[1101]: worker 1326663 finished
Nov 21 00:00:31 SRV-PVE-01 pveproxy[1101]: worker 1326662 finished
Nov 21 00:00:31 SRV-PVE-01 pveproxy[1101]: worker 1326661 finished
Nov 23 14:12:01 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 14:26:55 SRV-PVE-01 pvedaemon[1244549]: <root@pam> successful auth for user 'root@pam'
Nov 23 14:41:56 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 23 15:05:49 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 15:20:50 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 23 15:36:50 SRV-PVE-01 pvedaemon[1244549]: <root@pam> successful auth for user 'root@pam'
Nov 23 15:39:35 SRV-PVE-01 pveproxy[1802913]: worker exit
Nov 23 15:39:35 SRV-PVE-01 pveproxy[1101]: worker 1802913 finished
Nov 23 15:39:35 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 23 15:39:35 SRV-PVE-01 pveproxy[1101]: worker 2439712 started
Nov 23 15:47:55 SRV-PVE-01 pveproxy[1802912]: worker exit
Nov 23 15:47:55 SRV-PVE-01 pveproxy[1101]: worker 1802912 finished
Nov 23 15:47:55 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 23 15:47:55 SRV-PVE-01 pveproxy[1101]: worker 2441080 started
Nov 23 15:52:50 SRV-PVE-01 pvedaemon[1244549]: <root@pam> successful auth for user 'root@pam'
Nov 23 16:07:50 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 16:22:50 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 16:38:50 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 16:50:49 SRV-PVE-01 pveproxy[2439712]: worker exit
Nov 23 16:50:49 SRV-PVE-01 pveproxy[1101]: worker 2439712 finished
Nov 23 16:50:49 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 23 16:50:49 SRV-PVE-01 pveproxy[1101]: worker 2451528 started
Nov 23 16:50:50 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 16:51:43 SRV-PVE-01 pvedaemon[1244549]: <root@pam> starting task UPID:SRV-PVE-01:002568D5:0555A3FD:655FAD4F:srvreload:networking:root@pam:
Nov 23 16:51:44 SRV-PVE-01 pvedaemon[1244549]: <root@pam> end task UPID:SRV-PVE-01:002568D5:0555A3FD:655FAD4F:srvreload:networking:root@pam: OK
Nov 23 17:01:30 SRV-PVE-01 pveproxy[2441080]: worker exit
Nov 23 17:01:30 SRV-PVE-01 pveproxy[1101]: worker 2441080 finished
Nov 23 17:01:30 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 23 17:01:30 SRV-PVE-01 pveproxy[1101]: worker 2453662 started
Nov 23 17:01:59 SRV-PVE-01 pveproxy[1802914]: worker exit
Nov 23 17:01:59 SRV-PVE-01 pveproxy[1101]: worker 1802914 finished
Nov 23 17:01:59 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 23 17:01:59 SRV-PVE-01 pveproxy[1101]: worker 2453741 started
Nov 23 17:03:54 SRV-PVE-01 pvedaemon[1245437]: <root@pam> update VM 102: -description VLAN 255
Nov 23 17:04:15 SRV-PVE-01 pvedaemon[1244549]: <root@pam> update VM 100: -description VLAN 251
Nov 23 17:04:52 SRV-PVE-01 pvedaemon[1244549]: <root@pam> update VM 106: -description VLAN 257
Nov 23 17:05:42 SRV-PVE-01 pvedaemon[1244549]: <root@pam> update VM 107: -description VLAN 254
Nov 23 17:05:50 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 23 17:08:08 SRV-PVE-01 pvedaemon[1245437]: <root@pam> starting task UPID:SRV-PVE-01:002574F2:055724D1:655FB128:srvreload:networking:root@pam:
Nov 23 17:08:09 SRV-PVE-01 pvedaemon[1245437]: <root@pam> end task UPID:SRV-PVE-01:002574F2:055724D1:655FB128:srvreload:networking:root@pam: OK
Nov 23 17:15:39 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 17:30:40 SRV-PVE-01 pvedaemon[810965]: <root@pam> successful auth for user 'root@pam'
Nov 23 17:44:10 SRV-PVE-01 pveproxy[2451528]: worker exit
Nov 23 17:44:10 SRV-PVE-01 pveproxy[1101]: worker 2451528 finished
Nov 23 17:44:10 SRV-PVE-01 pveproxy[1101]: starting 1 worker(s)
Nov 23 17:44:10 SRV-PVE-01 pveproxy[1101]: worker 2461277 started
Nov 23 17:45:41 SRV-PVE-01 pvedaemon[1244549]: <root@pam> successful auth for user 'root@pam'
Nov 23 17:47:16 SRV-PVE-01 pvedaemon[1245437]: <root@pam> starting task UPID:SRV-PVE-01:0025905E:055ABA1D:655FBA54:clustercreate:Cluster-01:root@pa>
Nov 23 17:47:16 SRV-PVE-01 systemd[1]: Condition check resulted in Corosync Cluster Engine being skipped.
Nov 23 17:47:16 SRV-PVE-01 systemd[1]: Stopping The Proxmox VE cluster filesystem...
Nov 23 17:47:16 SRV-PVE-01 pmxcfs[989]: [main] notice: teardown filesystem
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[989]: [main] notice: exit proxmox configuration filesystem (0)
Nov 23 17:47:17 SRV-PVE-01 systemd[1]: pve-cluster.service: Succeeded.
Nov 23 17:47:17 SRV-PVE-01 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Nov 23 17:47:17 SRV-PVE-01 systemd[1]: pve-cluster.service: Consumed 14min 12.002s CPU time.
Nov 23 17:47:17 SRV-PVE-01 systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461795]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 1)
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461795]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 1)
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [quorum] crit: quorum_initialize failed: 2
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [quorum] crit: can't initialize service
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [confdb] crit: cmap_initialize failed: 2
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [confdb] crit: can't initialize service
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [dcdb] crit: can't initialize service
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [status] crit: cpg_initialize failed: 2
Nov 23 17:47:17 SRV-PVE-01 pmxcfs[2461796]: [status] crit: can't initialize service
 
PVE-01
Code:
Nov 23 17:47:18 SRV-PVE-01 systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 23 17:47:18 SRV-PVE-01 pvedaemon[1245437]: <root@pam> end task UPID:SRV-PVE-01:0025905E:055ABA1D:655FBA54:clustercreate:Cluster-01:root@pam: OK
Nov 23 17:47:18 SRV-PVE-01 systemd[1]: Starting Corosync Cluster Engine...
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [MAIN  ] Corosync Cluster Engine 3.1.5 starting up
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [MAIN  ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf vqsim nozzle snmp pie>
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [TOTEM ] Initializing transport (Kronosnet).
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [TOTEM ] totemknet initialized
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [KNET  ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.>
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync configuration map access [0]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QB    ] server name: cmap
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync configuration service [1]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QB    ] server name: cfg
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QB    ] server name: cpg
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync profile loading service [4]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync resource monitoring service [6]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [WD    ] Watchdog not enabled by configuration
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [WD    ] resource load_15min missing a recovery key.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [WD    ] resource memory_used missing a recovery key.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [WD    ] no resources configured.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync watchdog service [7]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QUORUM] Using quorum provider corosync_votequorum
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QUORUM] This node is within the primary component and will provide service.
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QUORUM] Members[0]:
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QB    ] server name: votequorum
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QB    ] server name: quorum
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [TOTEM ] Configuring link 0
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [TOTEM ] Configured link number 0: local addr: 192.168.199.9, port=5405
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QUORUM] Sync members[1]: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QUORUM] Sync joined[1]: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [TOTEM ] A new membership (1.5) was formed. Members joined: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [QUORUM] Members[1]: 1
Nov 23 17:47:18 SRV-PVE-01 corosync[2461801]:   [MAIN  ] Completed service synchronization, ready to provide service.
Nov 23 17:47:18 SRV-PVE-01 systemd[1]: Started Corosync Cluster Engine.
Nov 23 17:47:23 SRV-PVE-01 pmxcfs[2461796]: [status] notice: update cluster info (cluster name  Cluster-01, version = 1)
Nov 23 17:47:23 SRV-PVE-01 pmxcfs[2461796]: [status] notice: node has quorum
Nov 23 17:47:23 SRV-PVE-01 pmxcfs[2461796]: [dcdb] notice: members: 1/2461796
Nov 23 17:47:23 SRV-PVE-01 pmxcfs[2461796]: [dcdb] notice: all data is up to date
Nov 23 17:47:23 SRV-PVE-01 pmxcfs[2461796]: [status] notice: members: 1/2461796
Nov 23 17:47:23 SRV-PVE-01 pmxcfs[2461796]: [status] notice: all data is up to date
Nov 23 17:50:30 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 23 17:54:11 SRV-PVE-01 pvedaemon[1244549]: <root@pam> starting task UPID:SRV-PVE-01:002594C7:055B5C02:655FBBF3:vncshell::root@pam:
Nov 23 17:54:11 SRV-PVE-01 pvedaemon[2462919]: starting termproxy UPID:SRV-PVE-01:002594C7:055B5C02:655FBBF3:vncshell::root@pam:
Nov 23 17:54:11 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 23 17:54:11 SRV-PVE-01 login[2462924]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: Stopping The Proxmox VE cluster filesystem...
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [main] notice: teardown filesystem
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: Stopping Corosync Cluster Engine...
Nov 23 17:54:45 SRV-PVE-01 corosync-cfgtool[2463034]: Shutting down corosync
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [CFG   ] Node 1 was shut down by sysadmin
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Unloading all Corosync service engines.
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [QB    ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync vote quorum service v1.0
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [confdb] crit: cmap_dispatch failed: 2
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [QB    ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync configuration map access
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [QB    ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync configuration service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [QB    ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [QB    ] withdrawing server sockets
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync profile loading service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync resource monitoring service
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [SERV  ] Service engine unloaded: corosync watchdog service
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [quorum] crit: quorum_dispatch failed: 2
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [status] notice: node lost quorum
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [dcdb] crit: cpg_dispatch failed: 2
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [dcdb] crit: cpg_leave failed: 2
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [status] crit: cpg_dispatch failed: 2
Nov 23 17:54:45 SRV-PVE-01 pmxcfs[2461796]: [status] crit: cpg_leave failed: 2
Nov 23 17:54:45 SRV-PVE-01 corosync[2461801]:   [MAIN  ] Corosync Cluster Engine exiting normally
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: corosync.service: Succeeded.
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: Stopped Corosync Cluster Engine.
Nov 23 17:54:45 SRV-PVE-01 systemd[1]: corosync.service: Consumed 3.077s CPU time.
Nov 23 17:54:46 SRV-PVE-01 pmxcfs[2461796]: [quorum] crit: quorum_finalize failed: 9
Nov 23 17:54:46 SRV-PVE-01 pmxcfs[2461796]: [confdb] crit: cmap_track_delete nodelist failed: 9
Nov 23 17:54:46 SRV-PVE-01 pmxcfs[2461796]: [confdb] crit: cmap_track_delete version failed: 9
Nov 23 17:54:46 SRV-PVE-01 pmxcfs[2461796]: [confdb] crit: cmap_finalize failed: 9
Nov 23 17:54:46 SRV-PVE-01 pmxcfs[2461796]: [main] notice: exit proxmox configuration filesystem (0)
Nov 23 17:54:46 SRV-PVE-01 systemd[1]: pve-cluster.service: Succeeded.
Nov 23 17:54:46 SRV-PVE-01 systemd[1]: Stopped The Proxmox VE cluster filesystem.
Nov 23 17:54:57 SRV-PVE-01 pveproxy[2461277]: ipcc_send_rec[1] failed: Connection refused
Nov 23 17:54:57 SRV-PVE-01 pveproxy[2461277]: ipcc_send_rec[2] failed: Connection refused
Nov 23 17:54:57 SRV-PVE-01 pveproxy[2461277]: ipcc_send_rec[3] failed: Connection refused
Nov 23 17:54:58 SRV-PVE-01 pveproxy[2453741]: ipcc_send_rec[1] failed: Connection refused
Nov 23 17:54:58 SRV-PVE-01 pveproxy[2453741]: ipcc_send_rec[2] failed: Connection refused
Nov 23 17:54:58 SRV-PVE-01 pveproxy[2453741]: ipcc_send_rec[3] failed: Connection refused
Nov 23 17:54:59 SRV-PVE-01 pveproxy[2453662]: ipcc_send_rec[1] failed: Connection refused
Nov 23 17:54:59 SRV-PVE-01 pveproxy[2453662]: ipcc_send_rec[2] failed: Connection refused
Nov 23 17:54:59 SRV-PVE-01 pveproxy[2453662]: ipcc_send_rec[3] failed: Connection refused
Nov 23 18:00:40 SRV-PVE-01 pveproxy[2453662]: ipcc_send_rec[1] failed: Connection refused
Nov 23 18:00:40 SRV-PVE-01 pveproxy[2453662]: ipcc_send_rec[2] failed: Connection refused
Nov 23 18:00:40 SRV-PVE-01 pveproxy[2453662]: ipcc_send_rec[3] failed: Connection refused
Nov 23 18:00:41 SRV-PVE-01 pveproxy[2461277]: ipcc_send_rec[1] failed: Connection refused
Nov 23 18:00:41 SRV-PVE-01 pveproxy[2461277]: ipcc_send_rec[2] failed: Connection refused
Nov 23 18:00:41 SRV-PVE-01 pveproxy[2461277]: ipcc_send_rec[3] failed: Connection refused
Nov 23 18:00:49 SRV-PVE-01 systemd[1]: Starting The Proxmox VE cluster filesystem...
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463942]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 1)
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463942]: [dcdb] notice: wrote new corosync config '/etc/corosync/corosync.conf' (version = 1)
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: can't initialize service
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: can't initialize service
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: can't initialize service
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 23 18:00:49 SRV-PVE-01 pmxcfs[2463944]: [status] crit: can't initialize service
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: Started The Proxmox VE cluster filesystem.
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: Starting Corosync Cluster Engine...
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]:   [MAIN  ] Corosync Cluster Engine 3.1.5 starting up
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]:   [MAIN  ] Corosync built-in features: dbus monitoring watchdog systemd xmlconf vqsim nozzle snmp pie>
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]:   [MAIN  ] Could not open /etc/corosync/authkey: No such file or directory
Nov 23 18:00:50 SRV-PVE-01 corosync[2463950]:   [MAIN  ] Corosync Cluster Engine exiting with status 8 at main.c:1417.
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: corosync.service: Main process exited, code=exited, status=8/n/a
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: corosync.service: Failed with result 'exit-code'.
Nov 23 18:00:50 SRV-PVE-01 systemd[1]: Failed to start Corosync Cluster Engine.
Nov 23 18:00:55 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:00:55 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:00:55 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 18:00:55 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 23 18:00:57 SRV-PVE-01 pvedaemon[1244549]: <root@pam> end task UPID:SRV-PVE-01:002594C7:055B5C02:655FBBF3:vncshell::root@pam: OK
Nov 23 18:01:01 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:01:01 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:01:01 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 18:01:01 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 23 18:01:03 SRV-PVE-01 pvedaemon[1245437]: <root@pam> successful auth for user 'root@pam'
Nov 23 18:01:07 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:01:07 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:01:07 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 18:01:07 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 23 18:01:13 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:01:13 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:01:13 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 18:01:13 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 23 18:01:19 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:01:19 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:01:19 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
Nov 23 18:01:19 SRV-PVE-01 pmxcfs[2463944]: [status] crit: cpg_initialize failed: 2
Nov 23 18:01:25 SRV-PVE-01 pmxcfs[2463944]: [quorum] crit: quorum_initialize failed: 2
Nov 23 18:01:25 SRV-PVE-01 pmxcfs[2463944]: [confdb] crit: cmap_initialize failed: 2
Nov 23 18:01:25 SRV-PVE-01 pmxcfs[2463944]: [dcdb] crit: cpg_initialize failed: 2
 
I created a cluster where pve01 had the wrong peer address, so I went into /etc/hosts and changed it to the correct address

Can you please check whether the addresses in /etc/corosync/corosync.conf look like the ones you actually want to have?
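For illustration only (the node name and address below are taken from the logs earlier in this thread, not a prescription), a single-node nodelist section in that file looks roughly like this; compare the ring0_addr against the address you actually want corosync to use:

Code:
cat /etc/corosync/corosync.conf

nodelist {
  node {
    name: SRV-PVE-01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.199.9
  }
}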
 
I think your PVE-02 never joined the cluster. Do you have any VMs/CTs currently running on PVE-02?

Is the web UI inaccessible on PVE-01, PVE-02, or both?
 
I have access to the PVE-02 web UI; PVE-02 is not really in the cluster.
I can't get past the login on PVE-01 even though the password is correct.
 
Is there any way to get the cluster data from PVE-01 via the CLI so I can add the second node to the cluster?
 
Is there any way to get the cluster data from PVE-01 via the CLI so I can add the second node to the cluster?

Can you SSH into PVE-01? Do you have any VMs/CTs you want to preserve on the existing nodes? Or are these two empty nodes we are talking about?
 
I have SSH access; there are some VMs on pve1 that I would like to preserve.
pve2 is empty.
 
I have SSH access; there are some VMs on pve1 that I would like to preserve.
pve2 is empty.

Via SSH on pve1, running pvecm expected 1 and reloading the GUI might be a place to start for you.
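A minimal sketch of that on the CLI (run on pve1 over SSH):

Code:
# lower the expected vote count so this single node regains quorum
pvecm expected 1
# then check membership/quorum state
pvecm status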

You might even undo the clustering (which never got finished on the other node) by following "Separate a node without reinstalling" here:
https://pve.proxmox.com/wiki/Cluster_Manager
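For convenience, the core of that "separate a node without reinstalling" procedure is roughly the following; double-check the wiki page before running anything, since it wipes the cluster configuration on the node:

Code:
# on pve1: stop the cluster stack
systemctl stop pve-cluster corosync
# start pmxcfs in local mode so the config filesystem is usable without quorum
pmxcfs -l
# remove the corosync configuration
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
# stop the local-mode instance and start the service normally again
killall pmxcfs
systemctl start pve-cluster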

As you will read there, it's NOT recommended, but you want to have that node accessible for e.g. taking out the VMs.

You may want to back them up off the cluster, redo the whole thing, then pull them in from backups - the BEST WAY to do this.
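A rough sketch of that backup step, assuming a backup-capable storage is configured (the VM IDs are the ones visible in your logs, and the storage name is a placeholder; adjust as needed):

Code:
# back up the guests you want to keep before rebuilding anything
vzdump 100 102 106 107 --storage <your-backup-storage> --mode snapshot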

Or you may just want to hack it, but a can of worms awaits you then (although it's a learning experience).

If you want to hack it (by creating a cluster again with that "dirty" node pve1), there will be ill side effects, especially if something remained stored somewhere. So you would definitely want to avoid re-using any NAME or IP ADDRESS that you previously used.

After separating your node pve1 from the cluster, it will act as if it were standalone. I would then:

1) Initiate creating the cluster from pve2, i.e. let pve1 join it.
2) Migrate everything I need from pve1 to pve2
3) Completely reinstall (what used to be) pve1 and name it pve3 and put it on a new IP address - it will be standalone after fresh install.
4) Remove the record of the (by then dead) pve1 that remains on pve2, according to:
https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node
5) Join pve3 into the cluster with pve2 (see the command sketch at the end of this post).
6) Go read about why clusters should have an odd number of nodes, or at least a QDevice instead:
https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

Note: If, while working with pve2, you cannot get an operation done (e.g. deleting the record of the dead node), you may need to run pvecm expected 1 to have it do what you need before you regain quorum.
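As a rough command-level sketch of steps 1, 4 and 5 (the cluster name, node name and IP are taken from this thread purely as examples; check pvecm's documentation before running):

Code:
# step 1: on pve2, create the new cluster
pvecm create Cluster-01
# step 4: on pve2, remove the stale entry of the old pve1 once it is dead
pvecm delnode SRV-PVE-01
# step 5: on the freshly installed pve3, join via pve2's address
pvecm add 192.168.199.6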
 
After studying nearly the whole operation of the cluster, I was able to solve it. It was a seemingly simple thing, but it gave me more experience: the corosync service had never generated the authkey. Everything was fixed with corosync-keygen and a restart of the service. I am very grateful to those who gave a little of their time to try to help me; without you I would not have known where to start!
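For anyone hitting the same thing later, the fix described above boils down to something like this on the affected node:

Code:
# generate the missing /etc/corosync/authkey
corosync-keygen
# restart the cluster stack so corosync picks the key up
systemctl restart corosync pve-cluster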
 
