Cluster lost quorum

TechLineX

Code:
root@host1:~# service pve-cluster status
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
   Active: failed (Result: timeout) since Tue 2017-08-08 07:50:11 CEST; 20s ago
  Process: 1798 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=255)
 Main PID: 8204


root@host2:~# service pve-cluster status
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
   Active: failed (Result: signal) since Tue 2017-08-08 07:49:41 CEST; 2min 20s ago
  Process: 17759 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=0/SUCCESS)
 Main PID: 17780 (code=killed, signal=KILL)

Aug 08 07:48:53 host2 pmxcfs[17780]: [status] notice: members: 1/8204, 3/17780
Aug 08 07:49:05 host2 pmxcfs[17780]: [dcdb] notice: members: 1/8204, 2/14166, 3/17780
Aug 08 07:49:05 host2 pmxcfs[17780]: [dcdb] notice: queue not emtpy - resening 4 messages
Aug 08 07:49:05 host2 pmxcfs[17780]: [status] notice: members: 1/8204, 2/14166, 3/17780
Aug 08 07:49:05 host2 pmxcfs[17780]: [status] notice: queue not emtpy - resening 518 messages
Aug 08 07:49:30 host2 systemd[1]: pve-cluster.service start-post operation timed out. Stopping.
Aug 08 07:49:41 host2 systemd[1]: pve-cluster.service stop-sigterm timed out. Killing.
Aug 08 07:49:41 host2 systemd[1]: pve-cluster.service: main process exited, code=killed, status=9/KILL
Aug 08 07:49:41 host2 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Aug 08 07:49:41 host2 systemd[1]: Unit pve-cluster.service entered failed state.
root@host2:~#

I already tried restarting the pve-cluster service.

Code:
root@host1:~# pveversion -v
proxmox-ve: 4.4-92 (running kernel: 4.4.44-1-pve)
pve-manager: 4.4-15 (running version: 4.4-15/7599e35a)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.67-1-pve: 4.4.67-92
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-52
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-95
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-101
pve-firewall: 2.0-33
pve-ha-manager: 1.0-41
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80
root@host11:~#

pvecm status says the cluster configuration is not mounted.

What should I do?

Regards
 
Please tell us more about the general setup. What does syslog print out?
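For completeness, the usual commands for collecting that state on each node (a sketch using standard PVE/corosync tooling; adjust the time range as needed):

```shell
# Per-node cluster state worth including when reporting the problem
pvecm status                            # membership/quorum as PVE sees it
corosync-quorumtool -s                  # quorum state straight from corosync
systemctl status pve-cluster corosync   # current service state
journalctl -u pve-cluster --since today # recent service log
```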
 
Code:
Aug  8 06:25:55 host11 kernel: [12118731.567925]  [<ffffffff811a6ee4>] try_to_free_mem_cgroup_pages+0xc4/0x1a0
Aug  8 06:25:55 host11 kernel: [12118731.581097]  [<ffffffff81258acf>] ? ep_poll+0x20f/0x3f0
Aug  8 06:25:55 host11 rsyslogd0: action 'action 17' resumed (module 'builtin:ompipe') [try http://www.rsyslog.com/e/0 ]
Aug  8 06:25:55 host11 rsyslogd-2359: action 'action 17' resumed (module 'builtin:ompipe') [try http://www.rsyslog.com/e/2359 ]
Aug  8 06:26:51 host11 kernel: [12118787.556626] Hardware name: Supermicro Super Server/X10SRL-F, BIOS 2.0 12/17/2015
Aug  8 06:26:51 host11 kernel: [12118787.571310]  0000000000000000 ffff88207fff9000 ffff880066f1c800 ffff8808407038d0
Aug  8 06:26:51 host11 kernel: [12118787.585831]  [<ffffffff81191460>] filemap_fault+0x360/0x3e0
Aug  8 06:27:19 host11 kernel: [12118815.565482] RBP: ffff880840703678 R08: 0000000000000000 R09: 0000000000000000
Aug  8 06:27:19 host11 kernel: [12118815.579901]  [<ffffffff811fee1f>] ? mem_cgroup_iter+0x1cf/0x380
Aug  8 06:27:19 host11 kernel: [12118815.593982]  [<ffffffff811be060>] __do_fault+0x50/0xe0
Aug  8 06:27:47 host11 kernel: [12118843.567683] RBP: ffff880840703850 R08: ffff8803b235b000 R09: ffff88207fff97c0
Aug  8 06:27:47 host11 kernel: [12118843.581403]  [<ffffffff811a69ed>] do_try_to_free_pages+0x17d/0x430
Aug  8 06:27:47 host11 kernel: [12118843.595000]  [<ffffffff811c2c33>] handle_mm_fault+0x10f3/0x19c0
Aug  8 06:28:15 host11 kernel: [12118871.576489] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug  8 06:28:15 host11 kernel: [12118871.590804]  [<ffffffff811e25c2>] ? alloc_pages_current+0x92/0x120
Aug  8 06:28:30 host11 kernel: [12118887.138075]  [<ffffffff8114067c>] ? acct_account_cputime+0x1c/0x20
Aug  8 06:28:30 host11 kernel: [12118887.149656]  <EOI>  [<ffffffff811a6739>] ? shrink_zone+0x199/0x2d0
Aug  8 06:28:30 host11 kernel: [12118887.159754]  [<ffffffff8119dc47>] __do_page_cache_readahead+0x197/0x230
Aug  8 06:28:55 host11 kernel: [12118911.573043] RAX: ffff8800602b7c00 RBX: ffff8800602b7c00 RCX: ffff8800602b7c60
Aug  8 06:28:55 host11 kernel: [12118911.585884]  [<ffffffff811a69ed>] do_try_to_free_pages+0x17d/0x430
Aug  8 06:28:55 host11 kernel: [12118911.599391]  [<ffffffff811c2c33>] handle_mm_fault+0x10f3/0x19c0
Aug  8 06:29:23 host11 kernel: [12118939.579205] RBP: ffff880840703850 R08: 0000000000000000 R09: 0000000000000000
Aug  8 06:29:23 host11 kernel: [12118939.593179]  [<ffffffff8120266c>] mem_cgroup_try_charge+0x9c/0x1b0
Aug  8 06:29:23 host11 kernel: [12118939.607025]  [<ffffffff81862478>] page_fault+0x28/0x30
Aug  8 06:29:51 host11 kernel: [12118967.583596] R10: 00000000004028c1 R11: 0000000000000333 R12: 0000000000000001

The syslog is full of this. The GUI is not reachable, although pveproxy is running; pve-cluster isn't running. It is a 3-node cluster.
 
Is this happening on all nodes in the cluster or on specific ones? Do you have enough resources (eg. RAM) on those systems available?
 
Please send the complete syslog and journal, so we can see not only the trace but also what led up to it.
 
While the log doesn't show anything about the pve-cluster.service, a process in one of your LXC containers was killed due to OOM on the PVE host. This might have led to the service running into a timeout.
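To see which container is hitting its limit and how it is configured, something like this should work (a sketch; CTID=100 is a placeholder, use the ID that appears as /lxc/&lt;id&gt; in the kernel OOM messages, and the cgroup paths assume the v1 layout these logs show):

```shell
# Compare a container's configured memory limit with what the cgroup reports
CTID=100                                # placeholder container ID
pct config $CTID | grep -i memory       # configured limit (in MB)
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.limit_in_bytes     # enforced limit
cat /sys/fs/cgroup/memory/lxc/$CTID/memory.max_usage_in_bytes # peak usage seen
free -h                                 # overall host memory for comparison
```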
 
If there is enough RAM and the system is responding, then a restart of the service might bring back the cluster filesystem. If that fails, a reboot might be the fastest option. Try one host first and check whether quorum is established.
 
Rebooting the server solved it. About 28 hours later, the same issue occurred.

Code:
Aug  9 19:36:01 host11 kernel: [101237.760755]  [<ffffffff81201238>] mem_cgroup_out_of_memory+0x2a8/0x2f0
Aug  9 19:36:01 host11 kernel: [101238.074809] [ 3309]   110  3309    18180    12332      35       3        0             0 setiathome_8.00
Aug  9 19:36:01 host11 kernel: [101238.080669] [ 6691]   110  6691    30308    10742      40       3        0             0 setiathome_8.00
Aug  9 19:36:01 host11 kernel: [101238.110171]  [<ffffffff8139381a>] ? apparmor_capable+0x1aa/0x1b0
Aug  9 19:36:01 host11 kernel: [101238.130853] [29334]     0 29334   181013     6547      60       4        0             0 accounts-daemon
Aug  9 19:36:02 host11 kernel: [101239.033657]  [<ffffffff81201238>] mem_cgroup_out_of_memory+0x2a8/0x2f0
Aug  9 19:36:03 host11 kernel: [101239.369425] [ 6692]   110  6692    10849     9483      24       3        0             0 setiathome_8.00
Aug  9 19:36:03 host11 kernel: [101239.373992] Memory cgroup out of memory: Kill process 6691 (setiathome_8.00) score 104 or sacrifice child
Aug  9 19:36:03 host11 kernel: [101239.714380]  [<ffffffff811929c5>] oom_kill_process+0x205/0x3c0
Aug  9 19:36:03 host11 kernel: [101240.046993] [29330]   104 29330    64099      107      27       3        0             0 rsyslogd
Aug  9 19:36:03 host11 kernel: [101240.051946] [29484]     0 29484     3211       34      12       3        0             0 agetty
Aug  9 19:36:03 host11 kernel: [101240.061153] [ 6556]   110  6556    10849     9508      24       3        0             0 setiathome_8.00
Aug  9 19:36:05 host11 kernel: [101242.051182]  [<ffffffff811929c5>] oom_kill_process+0x205/0x3c0
Aug  9 19:36:05 host11 kernel: [101242.071047] [29335]     0 29335     6517       67      18       3        0             0 cron
Aug  9 19:36:05 host11 kernel: [101242.079097] [29368]   110 29368    23300    12300      40       3        0             0 setiathome_8.00
Aug  9 19:36:06 host11 dhcpd: DHCPREQUEST for 80.252.107.176 from 00:5f:48:7b:59:be via vmbr0
Aug  9 19:36:06 host11 dhcpd: DHCPACK on 80.252.107.176 to 00:5f:48:7b:59:be via vmbr0
Aug  9 19:36:09 host11 kernel: [101245.798360]  [<ffffffff811930c4>] pagefault_out_of_memory+0x44/0xc0
Aug  9 19:36:09 host11 kernel: [101245.863013] [29493]   110 29493     1127       20       8       3        0             0 sh
Aug  9 19:36:09 host11 kernel: [101245.871177] [ 6692]   110  6692    10849     9507      24       3        0             0 setiathome_8.00
Aug  9 19:36:09 host11 kernel: [101246.064322]  0000000000000000 ffff88116b36cb40 ffff881125510000 ffff880abf823ce8
Aug  9 19:36:09 host11 kernel: [101246.070081]  [<ffffffff811ff26f>] ? mem_cgroup_iter+0x1cf/0x380
Aug  9 19:36:09 host11 kernel: [101246.073411]  [<ffffffff8106b7c2>] do_page_fault+0x22/0x30
Aug  9 19:36:10 host11 dhcpd: DHCPREQUEST for 80.252.107.167 from 00:b9:0e:61:bc:17 via vmbr0
Aug  9 19:36:10 host11 dhcpd: DHCPACK on 80.252.107.167 to 00:b9:0e:61:bc:17 via vmbr0
Aug  9 19:36:11 host11 dhcpd: DHCPDISCOVER from 00:c2:bc:35:31:5e via vmbr0: network irie: no free leases
Aug  9 19:36:11 host11 kernel: [101248.078952]  [<ffffffff81201fd7>] mem_cgroup_oom_synchronize+0x347/0x360
Aug  9 19:36:11 host11 kernel: [101248.138999] [29852]   106 29852    16881      113      25       3        0             0 qmgr
Aug  9 19:36:12 host11 kernel: [101248.546527] Call Trace:
Aug  9 19:36:12 host11 pvedaemon[14186]: <root@pam> successful auth for user 'root@pam'
Aug  9 19:36:12 host11 kernel: [101248.709418] Task in /lxc/293/ns killed as a result of limit of /lxc/293

I can see this a few times:
Aug 9 19:38:50 host11 kernel: [101406.860526] Task in /lxc/293/ns killed as a result of limit of /lxc/293

There was also enough RAM; see the attached screenshot.
 

Attachments

  • usage.JPG (122.1 KB)
But that is not an error; it just says that one of your containers used too much memory and a process in it was OOM-killed (just like the kernel would kill one of the processes running on the host if the whole host were running out of memory).
 
OK. But what causes the whole host to get stuck and stop responding?

I have to hard-reboot the whole host; SSH is not possible either.
 
Without a full log this is hard to say. Your initial post contained
Code:
Aug 08 07:49:30 host2 systemd[1]: pve-cluster.service start-post operation timed out. Stopping.
Aug 08 07:49:41 host2 systemd[1]: pve-cluster.service stop-sigterm timed out. Killing.
Aug 08 07:49:41 host2 systemd[1]: pve-cluster.service: main process exited, code=killed, status=9/KILL
Aug 08 07:49:41 host2 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Aug 08 07:49:41 host2 systemd[1]: Unit pve-cluster.service entered failed state.

which indicates that the 'pvecm updatecerts' command executed after starting the cluster file system already hung, leading to the cluster file system process being forcibly killed. If you get the same results/logs after a reboot, I would suggest starting the cluster file system in debug mode:
Code:
# to make sure it is stopped
systemctl stop pve-cluster
# run in foreground mode with debug logs
pmxcfs -f -d

and see what it says. If everything looks okay, you can try to run "pvecm updatecerts" (in a second shell, since pmxcfs is still running in your first!). If that also works as expected, you can switch back to starting pmxcfs via systemd by pressing ctrl+c in the shell where pmxcfs is running, followed by "systemctl start pve-cluster".

I would also suggest enabling the persistent journal ("mkdir /var/log/journal; systemctl restart systemd-journald") to get better logs in the future (if you have done this already, please provide the logs from the journal!).
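Once the journal is persistent, the interesting part can be read back from before a crash, for example (a sketch; adjust the units and ranges to taste):

```shell
# With a persistent journal, logs survive a hard reboot
journalctl -b -1 -u pve-cluster -u corosync  # service logs from the previous boot
journalctl -b -1 -k | tail -n 200            # last kernel messages before the reset
```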
 
I can see a lot of these errors:

Code:
Aug 16 15:20:21 host11 kernel: [575868.756437] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-9684.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:21 host11 kernel: [575869.167632] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-18552.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:21 host11 kernel: [575869.520919] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-24843.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:21 host11 kernel: [575869.621075] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-26691.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:22 host11 kernel: [575869.882419] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-31510.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:22 host11 kernel: [575870.139884] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-36076.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:22 host11 kernel: [575870.307643] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-38813.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:22 host11 dhcpd: DHCPDISCOVER from 00:c2:bc:35:31:5e via vmbr0: network irie: no free leases
Aug 16 15:20:23 host11 kernel: [575870.799322] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-46017.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:23 host11 dhcpd: DHCPREQUEST for 80.252.107.184 from 00:b2:a5:7d:6a:7c via vmbr0
Aug 16 15:20:23 host11 dhcpd: DHCPACK on 80.252.107.184 to 00:b2:a5:7d:6a:7c via vmbr0
Aug 16 15:20:23 host11 kernel: [575871.072068] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-51190.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:23 host11 pvedaemon[26027]: <root@pam> successful auth for user 'root@pam'
Aug 16 15:20:23 host11 kernel: [575871.308121] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-53881.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:23 host11 kernel: [575871.344406] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-54344.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:23 host11 kernel: [575871.435882] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-67629.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
Aug 16 15:20:23 host11 kernel: [575871.675801] Memory cgroup stats for /lxc/415/ns/user.slice/user-0.slice/session-73447.scope: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB

The same thing again; by 15:30 pvestatd was no longer running. Do you really think updating the certs will solve the problem?
syslog attached
 

Attachments

  • syslog.txt (661.6 KB)
Those are not errors, just a container using more RAM than you allowed it.

The only errors I see in your syslog are a corosync transmission problem (once) and, right before the reset, a CPU soft lockup (which was probably the cause).
 
I thought it could be a hardware issue and migrated a lot of the LXC containers. The same problem occurred on the new host: the host is pingable, but login is not possible; after typing in the password, the SSH login times out.
Rebooting solved the problem. Is running about 80 containers on one host a problem? I can't get to the bottom of this issue.
 
Today I had to power-cycle a server. While the server was stopping, I got this in the syslog:

Code:
Sep  4 11:32:49 host systemd[1]: Starting Synchronise Hardware Clock to System Clock...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping 239.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 239.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 126.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 126.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 277.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 277.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 100.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 100.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 124.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 124.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 111.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 111.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 274.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 274.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 180.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 180.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 118.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 118.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 112.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 112.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 371.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 371.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 367.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 367.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 296.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 296.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 265.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 265.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 193.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 193.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 168.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 168.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 117.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 117.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 298.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 298.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 417.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 417.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 369.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 369.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 191.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 191.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 189.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 189.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 159.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 159.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 119.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 119.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 114.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 114.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 109.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 109.scope.
Sep  4 11:32:49 host systemd[1]: Stopping 107.scope.
Sep  4 11:32:49 host systemd[1]: Stopped 107.scope.
Sep  4 11:32:49 host systemd[1]: Stopping qemu.slice.
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Removed slice qemu.slice.
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 355...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 354...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 349...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 346...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 269...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 255...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 245...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 243...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 240...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 238...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 236...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 235...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 231...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 222...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 219...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 210...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 201...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 137...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 136...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 127...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 123...
Sep  4 11:32:49 host systemd[1]: inotify_init1() failed: Too many open files
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 122...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 116...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 203...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 200...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 187...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 182...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 165...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 143...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 130...
Sep  4 11:32:49 host systemd[1]: Stopping LXC Container: 102...
Sep  4 11:32:49 host systemd[1]: Stopping Mail Transport Agent.
Sep  4 11:32:49 host systemd[1]: Stopped target Mail Transport Agent.
Sep  4 11:32:49 host systemd[1]: Stopping Graphical Interface.
Sep  4 11:32:49 host systemd[1]: Stopped target Graphical Interface.
Sep  4 11:32:49 host systemd[1]: Stopping Multi-User System.
Sep  4 11:32:49 host systemd[1]: Stopped target Multi-User System.
Sep  4 11:32:49 host systemd[1]: Stopping PVE VM Manager...
Sep  4 11:32:49 host systemd[1]: Stopping ZFS startup target.
Sep  4 11:32:49 host systemd[1]: Stopped target ZFS startup target.
Sep  4 11:32:49 host systemd[1]: Stopping ZFS file system shares...
Sep  4 11:32:49 host systemd[1]: Stopped ZFS file system shares.
Sep  4 11:32:49 host systemd[1]: Stopping Deferred execution scheduler...
Sep  4 11:32:49 host systemd[1]: Stopping Regular background program processing daemon...
Sep  4 11:32:49 host systemd[1]: Stopping Kernel Samepage Merging (KSM) Tuning Daemon...
Sep  4 11:32:49 host systemd[1]: Stopping Self Monitoring and Reporting Technology (SMART) Daemon...
Sep  4 11:32:49 host systemd[1]: Stopping OpenBSD Secure Shell server...
Sep  4 11:32:49 host systemd[1]: Stopping Login Prompts.
Sep  4 11:32:49 host systemd[1]: Stopped target Login Prompts.
Sep  4 11:32:49 host systemd[1]: Stopping Getty on tty1...
Sep  4 11:32:49 host systemd[1]: Stopping Login Service...
Sep  4 11:32:49 host systemd[1]: Stopping D-Bus System Message Bus...
Sep  4 11:32:49 host systemd[1]: Stopping LSB: DHCP server...
Sep  4 11:32:49 host systemd[1]: Stopping LSB: Kernel NFS server support...
Sep  4 11:32:49 host systemd[1]: Stopping LSB: Postfix Mail Transport Agent...
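The repeated `inotify_init1() failed: Too many open files` lines above usually mean PID 1 ran into the limit on inotify instances, which is easy to hit on a host running ~80 containers. A sketch of raising the relevant sysctls (the values below are illustrative assumptions, not tuned recommendations):

```shell
# Raise inotify limits for a host with many containers (illustrative values)
cat >/etc/sysctl.d/90-inotify.conf <<'EOF'
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 1048576
EOF
sysctl -p /etc/sysctl.d/90-inotify.conf   # apply without rebooting
```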
 
