Proxmox rebooted the whole cluster

itvietnam

Renowned Member
Aug 11, 2015
Hi,

I have a Proxmox cluster with 11 nodes.

root@hv117:~# pvecm status
Quorum information
------------------
Date:             Fri Sep 28 17:03:49 2018
Quorum provider:  corosync_votequorum
Nodes:            11
Node ID:          0x00000010
Ring ID:          4/144532
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   11
Highest expected: 11
Total votes:      11
Quorum:           6
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000004          1 10.10.30.159
0x00000009          1 10.10.30.160
0x0000000a          1 10.10.30.161
0x0000000b          1 10.10.30.162
0x0000000c          1 10.10.30.163
0x0000000d          1 10.10.30.164
0x0000000e          1 10.10.30.165
0x0000000f          1 10.10.30.166
0x00000010          1 10.10.30.167 (local)
0x00000001          1 10.10.30.168
0x00000002          1 10.10.30.169
root@hv117:~#

And this is our corosync config:

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster-02
  config_version: 34
  interface {
    bindnetaddr: 10.10.30.167
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.20.30.167
    ringnumber: 1
  }
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
}

root@hv117:~#

When we reboot server hv117 (IP 10.10.30.167), the cluster stays up. But as soon as this server boots back into the OS, the whole cluster reboots.

How can we debug this problem?
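
For reference, these are the kinds of commands we could run on the nodes to gather more data. Just a sketch: omping is not installed by default, and the timestamp is only an example taken from the log below.

# Ring status on each node (both rings should report "no faults"):
root@hv117:~# corosync-cfgtool -s

# Corosync and pmxcfs messages around the time of the incident:
root@hv117:~# journalctl -u corosync -u pve-cluster --since "2018-09-26 12:40"

# Multicast test between two nodes (run the same command on both nodes
# at the same time; omping has to be installed first):
root@hv117:~# omping -c 600 -i 1 -q 10.10.30.166 10.10.30.167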

Thanks,
 
This is the full information. Do you think the problem is caused by bindnetaddr being set to hv117's own IP, so that when this node reboots the whole cluster goes down? (A possible fix is sketched after the config below.)

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: hv109
    nodeid: 4
    quorum_votes: 1
    ring0_addr: 10.10.30.159
    ring1_addr: 10.20.30.159
  }
  node {
    name: hv110
    nodeid: 9
    quorum_votes: 1
    ring0_addr: 10.10.30.160
    ring1_addr: 10.20.30.160
  }
  node {
    name: hv111
    nodeid: 10
    quorum_votes: 1
    ring0_addr: 10.10.30.161
    ring1_addr: 10.20.30.161
  }
  node {
    name: hv112
    nodeid: 11
    quorum_votes: 1
    ring0_addr: 10.10.30.162
    ring1_addr: 10.20.30.162
  }
  node {
    name: hv113
    nodeid: 12
    quorum_votes: 1
    ring0_addr: 10.10.30.163
    ring1_addr: 10.20.30.163
  }
  node {
    name: hv114
    nodeid: 13
    quorum_votes: 1
    ring0_addr: 10.10.30.164
    ring1_addr: 10.20.30.164
  }
  node {
    name: hv115
    nodeid: 14
    quorum_votes: 1
    ring0_addr: 10.10.30.165
    ring1_addr: 10.20.30.165
  }
  node {
    name: hv116
    nodeid: 15
    quorum_votes: 1
    ring0_addr: 10.10.30.166
    ring1_addr: 10.20.30.166
  }
  node {
    name: hv117
    nodeid: 16
    quorum_votes: 1
    ring0_addr: 10.10.30.167
    ring1_addr: 10.20.30.167
  }
  node {
    name: hv118
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.30.168
    ring1_addr: 10.20.30.168
  }
  node {
    name: hv119
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.30.169
    ring1_addr: 10.20.30.169
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: just-a-name
  config_version: 34
  interface {
    bindnetaddr: 10.10.30.167
    ringnumber: 0
  }
  interface {
    bindnetaddr: 10.20.30.167
    ringnumber: 1
  }
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
}
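
If bindnetaddr really is the problem, would the right fix be to set it to the network address instead of hv117's host IP, so that the same corosync.conf is valid on every node? Something like this, just a sketch of the totem section, assuming a /24 netmask on both rings:

totem {
  cluster_name: just-a-name
  config_version: 35            # bumped so all nodes pick up the change
  interface {
    # network address of ring 0, not a single host's IP
    bindnetaddr: 10.10.30.0
    ringnumber: 0
  }
  interface {
    # network address of ring 1
    bindnetaddr: 10.20.30.0
    ringnumber: 1
  }
  ip_version: ipv4
  rrp_mode: passive
  secauth: on
  version: 2
}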

And this is the full log from just before the whole cluster rebooted:

Sep 26 12:41:08 just-a-name-hv116 systemd[1]: Started Proxmox VE replication runner.
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [status] notice: members: 1/7303, 2/3231, 4/3449, 9/3304, 10/3261, 11/3448, 12/3066, 13/3148, 14/3029, 15/3110
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [status] notice: starting data syncronisation
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: members: 1/7303, 2/3231, 4/3449, 9/3304, 10/3261, 11/3448, 12/3066, 13/3148, 14/3029, 15/3110
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: starting data syncronisation
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: received sync request (epoch 1/7303/0000000F)
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: notice [TOTEM ] A new membership (10.10.30.159:144404) was formed. Members left: 16
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [TOTEM ] A new membership (10.10.30.159:144404) was formed. Members left: 16
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: warning [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [CPG ] downlist left_list: 1 received
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: notice [QUORUM] Members[10]: 4 9 10 11 12 13 14 15 1 2
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: notice [MAIN ] Completed service synchronization, ready to provide service.
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [QUORUM] Members[10]: 4 9 10 11 12 13 14 15 1 2
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [MAIN ] Completed service synchronization, ready to provide service.
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [status] notice: received sync request (epoch 1/7303/0000000F)
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: received all states
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: leader is 1/7303
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: synced members: 1/7303
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: waiting for updates from leader
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: dfsm_deliver_queue: queue length 14
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: notice [TOTEM ] Retransmit List: 3c4 3c6
Sep 26 12:41:09 just-a-name-hv116 corosync[10591]: [TOTEM ] Retransmit List: 3c4 3c6
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: update complete - trying to commit (got 1 inode updates)
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [database] crit: new index does not match master index - internal error
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: leaving CPG group
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [status] notice: received all states
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [status] notice: all data is up to date
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: start cluster connection
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: internal error - unknown mode 0
Sep 26 12:41:09 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: leaving CPG group
Sep 26 12:41:10 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:10 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:10 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:10 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:12 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:12 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:15 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: start cluster connection
Sep 26 12:41:15 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: members: 1/7303, 4/3449, 9/3304, 10/3261, 11/3448, 13/3148, 15/3110
Sep 26 12:41:15 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: starting data syncronisation
Sep 26 12:41:15 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: members: 1/7303, 4/3449, 9/3304, 10/3261, 11/3448, 13/3148
Sep 26 12:41:15 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: we (15/3110) left the process group
Sep 26 12:41:15 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: leaving CPG group
Sep 26 12:41:15 just-a-name-hv116 pve-ha-lrm[3396]: lost lock 'ha_agent_hv116_lock - cfs lock update failed - Device or resource busy
Sep 26 12:41:15 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:15 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:15 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:15 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:16 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:16 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:21 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: start cluster connection
Sep 26 12:41:21 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: members: 1/7303, 15/3110
Sep 26 12:41:21 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: starting data syncronisation
Sep 26 12:41:21 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: members: 1/7303
Sep 26 12:41:21 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: we (15/3110) left the process group
Sep 26 12:41:21 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: leaving CPG group
Sep 26 12:41:21 just-a-name-hv116 pve-ha-lrm[3396]: status change active => lost_agent_lock
Sep 26 12:41:21 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:21 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:21 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:21 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:26 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:26 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: start cluster connection
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: members: 1/7303, 15/3110
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: starting data syncronisation
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: received sync request (epoch 1/7303/0000004D)
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: received all states
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: leader is 1/7303
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: synced members: 1/7303
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: waiting for updates from leader
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: dfsm_deliver_queue: queue length 2
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: update complete - trying to commit (got 4 inode updates)
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: all data is up to date
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] notice: dfsm_deliver_queue: queue length 2
Sep 26 12:41:27 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: serious internal error - stop cluster connection
Sep 26 12:41:28 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:28 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:41:33 just-a-name-hv116 pmxcfs[3110]: [dcdb] crit: can't initialize service
Sep 26 12:42:00 just-a-name-hv116 systemd[1]: Starting Proxmox VE replication runner...
Sep 26 12:42:01 just-a-name-hv116 corosync[10591]: error [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:42:01 just-a-name-hv116 corosync[10591]: [CPG ] *** 0x55a140cd8ee0 can't mcast to group state:0, error:12
Sep 26 12:42:03 just-a-name-hv116 watchdog-mux[1299]: client watchdog expired - disable watchdog updates
Sep 26 12:45:51 just-a-name-hv116 systemd-modules-load[575]: Inserted module 'iscsi_tcp'
Sep 26 12:45:51 just-a-name-hv116 systemd-modules-load[575]: Inserted module 'ib_iser'
Sep 26 12:45:51 just-a-name-hv116 systemd-modules-load[575]: Inserted module 'vhost_net'
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started udev Coldplug all Devices.
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Starting udev Wait for Complete Device Initialization...
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Starting Flush Journal to Persistent Storage...
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started Flush Journal to Persistent Storage.
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Found device /dev/pve/swap.
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Activating swap /dev/pve/swap...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Linux version 4.15.18-2-pve (dietmar@evita) (gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)) #1 SMP PVE 4.15.18-20 (Thu, 16 Aug 2018 11:06:35 +0200) ()
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Activated swap /dev/pve/swap.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.18-2-pve root=/dev/mapper/pve-root ro quiet
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] KERNEL supported cpus:
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Intel GenuineIntel
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Reached target Swap.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] AMD AuthenticAMD
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Centaur CentaurHauls
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started udev Wait for Complete Device Initialization.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Starting Activation of LVM2 logical volumes...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 26 12:45:51 just-a-name-hv116 systemd-modules-load[575]: Inserted module 'zfs'
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] e820: BIOS-provided physical RAM map:
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started Load Kernel Modules.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009bfff] usable
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007d2effff] usable
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x000000007d2f0000-0x000000007d31bfff] reserved
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Starting Apply Kernel Variables...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x000000007d31c000-0x000000007d35afff] ACPI data
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Mounting FUSE Control File System...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x000000007d35b000-0x000000007fffffff] reserved
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Mounting Configuration File System...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x00000000fe000000-0x00000000ffffffff] reserved
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000307fffffff] usable
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] NX (Execute Disable) protection: active
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] SMBIOS 2.7 present.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] DMI: Dell Inc. PowerEdge R620/0D2D5F, BIOS 1.4.8 10/25/2012
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started Apply Kernel Variables.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] e820: last_pfn = 0x3080000 max_arch_pfn = 0x400000000
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] MTRR default type: uncachable
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] MTRR fixed ranges enabled:
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Mounted FUSE Control File System.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 00000-9FFFF write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] A0000-BFFFF uncachable
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] C0000-CBFFF write-protect
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Mounted Configuration File System.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] CC000-D3FFF write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] D4000-D7FFF write-protect
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] D8000-EBFFF uncachable
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started Device-mapper event daemon.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] EC000-FFFFF write-protect
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] MTRR variable ranges enabled:
Sep 26 12:45:51 just-a-name-hv116 dmeventd[980]: dmeventd ready for processing.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 0 base 000000000000 mask 3FFF80000000 write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 1 base 000100000000 mask 3FFF00000000 write-back
Sep 26 12:45:51 just-a-name-hv116 lvm[980]: Monitoring thin pool pve-data.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 2 base 000200000000 mask 3FFE00000000 write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 3 base 000400000000 mask 3FFC00000000 write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 4 base 000800000000 mask 3FF800000000 write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 5 base 001000000000 mask 3FF000000000 write-back
Sep 26 12:45:51 just-a-name-hv116 lvm[903]: 3 logical volume(s) in volume group "pve" now active
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 6 base 002000000000 mask 3FF000000000 write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 7 base 003000000000 mask 3FFF80000000 write-back
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 8 disabled
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started Activation of LVM2 logical volumes.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] 9 disabled
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Reached target Encrypted Volumes.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] e820: update [mem 0x80000000-0xffffffff] usable ==> reserved
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] e820: last_pfn = 0x7d2f0 max_arch_pfn = 0x400000000
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Starting Activation of LVM2 logical volumes...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] found SMP MP-table at [mem 0x000fe710-0x000fe71f] mapped at [ (ptrval)]
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Reached target ZFS pool import target.
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Scanning 1 areas for low memory corruption
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Starting Mount ZFS filesystems...
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Base memory trampoline at [ (ptrval)] 96000 size 24576
Sep 26 12:45:51 just-a-name-hv116 kernel: [ 0.000000] Using GB pages for direct mapping
Sep 26 12:45:51 just-a-name-hv116 systemd[1]: Started Mount ZFS filesystems.
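
The line that stands out to us is the watchdog expiry at 12:42:03 ("client watchdog expired - disable watchdog updates"), right before the node reset, which looks like HA fencing. To check the HA and watchdog state before we test again, we could run something like this (a sketch, using standard PVE tools):

# Current HA manager / LRM state for all nodes and resources:
root@hv117:~# ha-manager status

# Is the watchdog multiplexer running and the softdog module loaded?
root@hv117:~# systemctl status watchdog-mux
root@hv117:~# lsmod | grep softdog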
 
