Windows terminal server breaks webpages sometimes.

Dec 26, 2018
Hello.
Sorry about the title; I'm not sure where to start troubleshooting.
The symptom the users are reporting is: "sometimes webpages only load halfway",
as if you had manually cancelled the page mid-load.
And sometimes when moving mail in Outlook, everything freezes for the users for a few seconds.

The syslog and the Ceph log don't show anything unusual as far as I can see.

But I don't know what to make of what is going on in dmesg.
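
For anyone who wants to look at the same places, this is roughly how such logs can be pulled on a node; the paths are just the usual Proxmox/Ceph defaults, so adjust them if your setup differs:
Code:
# Ceph cluster log (default location when Ceph runs on the Proxmox nodes)
tail -n 50 /var/log/ceph/ceph.log

# System log of the node for the current boot
journalctl -b --no-pager | tail -n 50

# Kernel ring buffer with human-readable timestamps
dmesg -T | tail -n 50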

2019-08-07 11:09:21.787075 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122960 : cluster [DBG] pgmap v122964: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 809KiB/s rd, 164KiB/s wr, 46op/s
2019-08-07 11:09:23.807310 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122961 : cluster [DBG] pgmap v122965: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 809KiB/s rd, 251KiB/s wr, 54op/s
2019-08-07 11:09:25.827093 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122962 : cluster [DBG] pgmap v122966: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 806KiB/s rd, 201KiB/s wr, 48op/s
2019-08-07 11:09:27.847322 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122963 : cluster [DBG] pgmap v122967: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 806KiB/s rd, 964KiB/s wr, 68op/s
2019-08-07 11:09:29.867171 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122964 : cluster [DBG] pgmap v122968: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 874KiB/s wr, 29op/s
2019-08-07 11:09:31.887080 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122965 : cluster [DBG] pgmap v122969: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 855KiB/s wr, 28op/s
2019-08-07 11:09:33.907398 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122966 : cluster [DBG] pgmap v122970: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 1.95MiB/s wr, 42op/s
2019-08-07 11:09:35.927055 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122967 : cluster [DBG] pgmap v122971: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 1.86MiB/s wr, 35op/s
2019-08-07 11:09:37.947308 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122968 : cluster [DBG] pgmap v122972: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 2.04MiB/s wr, 49op/s
2019-08-07 11:09:39.967196 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122969 : cluster [DBG] pgmap v122973: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.30MiB/s wr, 30op/s
2019-08-07 11:09:41.987192 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122970 : cluster [DBG] pgmap v122974: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.31MiB/s wr, 31op/s
2019-08-07 11:09:44.007323 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122971 : cluster [DBG] pgmap v122975: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.39MiB/s wr, 39op/s
2019-08-07 11:09:46.027107 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122972 : cluster [DBG] pgmap v122976: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 286KiB/s wr, 24op/s
2019-08-07 11:09:48.047393 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122973 : cluster [DBG] pgmap v122977: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 526KiB/s wr, 44op/s
2019-08-07 11:09:50.067148 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122974 : cluster [DBG] pgmap v122978: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 358KiB/s wr, 30op/s
2019-08-07 11:09:52.087221 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122975 : cluster [DBG] pgmap v122979: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 447KiB/s wr, 35op/s
2019-08-07 11:09:54.107396 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122976 : cluster [DBG] pgmap v122980: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 952KiB/s wr, 56op/s
2019-08-07 11:09:56.127107 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122977 : cluster [DBG] pgmap v122981: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 168B/s rd, 873KiB/s wr, 48op/s
2019-08-07 11:09:58.147289 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122978 : cluster [DBG] pgmap v122982: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.90KiB/s rd, 1.12MiB/s wr, 70op/s
2019-08-07 11:10:00.167130 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122979 : cluster [DBG] pgmap v122983: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.73KiB/s rd, 910KiB/s wr, 52op/s
2019-08-07 11:10:02.187223 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122980 : cluster [DBG] pgmap v122984: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 2.06KiB/s rd, 1.38MiB/s wr, 63op/s
2019-08-07 11:10:04.207336 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122981 : cluster [DBG] pgmap v122985: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 3.46KiB/s rd, 1.92MiB/s wr, 71op/s
2019-08-07 11:10:06.227084 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122982 : cluster [DBG] pgmap v122986: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 3.46KiB/s rd, 1.41MiB/s wr, 49op/s
2019-08-07 11:10:08.247389 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122983 : cluster [DBG] pgmap v122987: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 3.46KiB/s rd, 1.62MiB/s wr, 58op/s
2019-08-07 11:10:10.267162 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122984 : cluster [DBG] pgmap v122988: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.73KiB/s rd, 1.36MiB/s wr, 35op/s
2019-08-07 11:10:12.287284 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122985 : cluster [DBG] pgmap v122989: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.73KiB/s rd, 1.39MiB/s wr, 39op/s
2019-08-07 11:10:14.307346 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122986 : cluster [DBG] pgmap v122990: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.40KiB/s rd, 1.06MiB/s wr, 36op/s
2019-08-07 11:10:16.327067 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122987 : cluster [DBG] pgmap v122991: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 445KiB/s wr, 22op/s
2019-08-07 11:10:18.347377 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122988 : cluster [DBG] pgmap v122992: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.58MiB/s wr, 40op/s
2019-08-07 11:10:20.367079 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122989 : cluster [DBG] pgmap v122993: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.37MiB/s wr, 32op/s
2019-08-07 11:10:22.387193 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122990 : cluster [DBG] pgmap v122994: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 1.42MiB/s wr, 37op/s
2019-08-07 11:10:24.407319 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122991 : cluster [DBG] pgmap v122995: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 139KiB/s rd, 1.41MiB/s wr, 40op/s
2019-08-07 11:10:26.427102 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122992 : cluster [DBG] pgmap v122996: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 139KiB/s rd, 1.23MiB/s wr, 31op/s
2019-08-07 11:10:28.447348 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122993 : cluster [DBG] pgmap v122997: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 140KiB/s rd, 1.28MiB/s wr, 38op/s
2019-08-07 11:10:30.467144 mgr.proxmox9 client.3126949 10.10.10.19:0/331402091 122994 : cluster [DBG] pgmap v122998: 128 pgs: 128 active+clean; 425GiB data, 1.24TiB used, 1.38TiB / 2.62TiB avail; 140KiB/s rd, 142KiB/s wr, 21op/s

Aug 07 14:52:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:53:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:53:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:54:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:54:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:55:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:55:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:55:01 proxmox1 CRON[2975219]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 07 14:55:01 proxmox1 CRON[2975220]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 07 14:55:01 proxmox1 CRON[2975219]: pam_unix(cron:session): session closed for user root
Aug 07 14:56:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:56:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:57:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:57:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:58:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:58:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 14:59:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 14:59:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:00:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:00:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:01:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:01:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:02:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:02:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:03:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:03:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:04:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:04:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:05:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:05:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:05:01 proxmox1 CRON[2981890]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 07 15:05:01 proxmox1 CRON[2981891]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 07 15:05:01 proxmox1 CRON[2981890]: pam_unix(cron:session): session closed for user root
Aug 07 15:06:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:06:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:07:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:07:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:08:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:08:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:09:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:09:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:10:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:10:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:11:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:11:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:12:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:12:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:13:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:13:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:13:08 proxmox1 rrdcached[2800]: flushing old values
Aug 07 15:13:08 proxmox1 rrdcached[2800]: rotating journals
Aug 07 15:13:08 proxmox1 rrdcached[2800]: started new journal /var/lib/rrdcached/journal/rrd.journal.1565183588.098144
Aug 07 15:13:08 proxmox1 rrdcached[2800]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1565176388.098088
Aug 07 15:13:08 proxmox1 pmxcfs[2897]: [dcdb] notice: data verification successful
Aug 07 15:14:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:14:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:15:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:15:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:15:01 proxmox1 CRON[2988696]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 07 15:15:01 proxmox1 CRON[2988698]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 07 15:15:01 proxmox1 CRON[2988696]: pam_unix(cron:session): session closed for user root
Aug 07 15:16:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:16:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:17:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:17:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:17:01 proxmox1 CRON[2990053]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 07 15:17:01 proxmox1 CRON[2990054]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 07 15:17:01 proxmox1 CRON[2990053]: pam_unix(cron:session): session closed for user root
Aug 07 15:18:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:18:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:19:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:19:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:20:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:20:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:21:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:21:00 proxmox1 pmxcfs[2897]: [status] notice: received log
Aug 07 15:21:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:22:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:22:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:23:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:23:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:24:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:24:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:25:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:25:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:25:01 proxmox1 CRON[2995484]: pam_unix(cron:session): session opened for user root by (uid=0)
Aug 07 15:25:01 proxmox1 CRON[2995485]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 07 15:25:01 proxmox1 CRON[2995484]: pam_unix(cron:session): session closed for user root
Aug 07 15:26:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:26:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:27:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:27:01 proxmox1 systemd[1]: Started Proxmox VE replication runner.
Aug 07 15:28:00 proxmox1 systemd[1]: Starting Proxmox VE replication runner...
Aug 07 15:28:00 proxmox1 systemd[1]: Started Proxmox VE replication runner.

[ 2463.906615] perf: interrupt took too long (2515 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
[ 3864.693363] perf: interrupt took too long (3152 > 3143), lowering kernel.perf_event_max_sample_rate to 63250
[ 5488.503899] perf: interrupt took too long (4035 > 3940), lowering kernel.perf_event_max_sample_rate to 49500
[ 5569.623675] perf: interrupt took too long (5170 > 5043), lowering kernel.perf_event_max_sample_rate to 38500
[20125.822134] perf: interrupt took too long (6489 > 6462), lowering kernel.perf_event_max_sample_rate to 30750
[29917.990199] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[29918.026496] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[29918.030184] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[29918.030586] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[30038.002214] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[30038.003287] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[66872.312798] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[71060.991896] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[72510.127425] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[72812.739473] libceph: osd5 10.10.10.19:6804 socket closed (con state OPEN)
[73802.414432] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[74250.888731] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[74850.674399] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[76138.875462] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[76138.880348] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[76152.509025] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[79431.121839] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[79999.556831] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[81656.011902] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[82494.361968] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[84503.768570] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[85787.874566] libceph: osd5 10.10.10.19:6804 socket closed (con state OPEN)
[86672.557708] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[88745.957653] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[88745.958726] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[89488.876673] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[116258.968963] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[116258.972195] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[116258.976704] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[116258.997038] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[116259.000234] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[116429.983861] libceph: osd5 10.10.10.19:6804 socket closed (con state OPEN)
[116445.119136] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[116445.120267] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[116445.120839] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[116445.121420] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[116445.121871] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[116445.122460] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[116445.123096] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[121623.004306] libceph: read_partial_message 00000000937808ba data crc 1601366377 != exp. 162078956
[121623.004385] libceph: osd4 10.10.10.19:6800 bad crc/signature
[121623.006252] libceph: read_partial_message 000000005029439c data crc 3995689696 != exp. 2412208732
[121623.006310] libceph: osd2 10.10.10.13:6800 bad crc/signature
[121623.008117] libceph: read_partial_message 000000003af66d6f data crc 678724697 != exp. 1577218120
[121623.008159] libceph: osd3 10.10.10.13:6804 bad crc/signature
[121623.036577] libceph: read_partial_message 000000005be56aa2 data crc 1460631200 != exp. 3948037880
[121623.036580] libceph: read_partial_message 00000000db633e70 data crc 2883371036 != exp. 761823435
[121623.036584] libceph: osd3 10.10.10.13:6804 bad crc/signature
[121623.036661] libceph: osd1 10.10.10.11:6800 bad crc/signature
[152781.260827] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[152920.359260] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[154174.532403] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[154883.421261] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[155312.838364] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[157768.365376] perf: interrupt took too long (8129 > 8111), lowering kernel.perf_event_max_sample_rate to 24500
[160648.386019] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[161267.268454] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[165981.287254] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[166414.789057] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[172220.527213] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[173782.693032] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[174532.450363] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[177385.550077] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[189135.982662] libceph: read_partial_message 000000003bdb61cb data crc 1977706817 != exp. 3144601770
[189135.982734] libceph: osd2 10.10.10.13:6800 bad crc/signature
[202668.995741] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[202668.997149] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[202668.998241] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[202668.999222] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[202669.000983] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[202669.002101] libceph: osd4 10.10.10.19:6800 socket closed (con state OPEN)
[202707.991399] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[202707.993396] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[202708.008754] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[202708.010574] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[209571.285619] libceph: read_partial_message 00000000e8ea77ad data crc 1307028123 != exp. 4123262809
[209571.285656] libceph: read_partial_message 000000002635970e data crc 3866773357 != exp. 2669613396
[209571.285661] libceph: osd3 10.10.10.13:6804 bad crc/signature
[209571.285772] libceph: osd5 10.10.10.19:6804 bad crc/signature
[239560.003925] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[240358.803295] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[246644.410328] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[246937.501319] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[247508.713514] libceph: osd2 10.10.10.13:6800 socket closed (con state OPEN)
[247605.451041] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[247632.785818] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[252707.854263] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[254879.378087] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)
[257668.431602] libceph: osd0 10.10.10.11:6804 socket closed (con state OPEN)
[258839.402494] libceph: osd1 10.10.10.11:6800 socket closed (con state OPEN)
[259190.063400] libceph: read_partial_message 00000000ef258b84 data crc 4143046823 != exp. 792502681
[259190.063402] libceph: read_partial_message 000000002ac55c7b data crc 641085475 != exp. 1667059292
[259190.063412] libceph: osd3 10.10.10.13:6804 bad crc/signature
[259190.063497] libceph: osd2 10.10.10.13:6800 bad crc/signature
[259297.472143] libceph: osd3 10.10.10.13:6804 socket closed (con state OPEN)

Is this normal?
 
Similar messages have been reported on older software versions.

What is the output of the following commands?
Code:
pveversion -v
Code:
ceph status
 
Found this thread:
https://forum.proxmox.com/threads/ceph-bad-crc-signature-and-socket-closed.38681/

I disabled KRBD on the Ceph storage.
That seems to have fixed the error messages in dmesg.
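
In case it helps anyone else: the KRBD flag lives on the RBD storage definition. A minimal sketch of turning it off, assuming the storage is called "ceph-rbd" (substitute your own storage ID); guests typically need to be powered off and started again, or migrated, before they switch from the kernel client to librbd:
Code:
# The krbd flag is stored per RBD storage in /etc/pve/storage.cfg
grep -A6 'rbd:' /etc/pve/storage.cfg

# Disable the kernel RBD client for this storage
# (GUI: Datacenter > Storage > edit the RBD storage > untick KRBD)
pvesm set ceph-rbd --krbd 0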


root@proxmox1:/var/log/ceph# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-18-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-6
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 12.2.12-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-54
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

root@proxmox1:/var/log/ceph# ceph status
  cluster:
    id:     1f6d8776-39b3-44c6-b484-111d3c8b8372
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum proxmox1,proxmox3,proxmox9
    mgr: proxmox9(active), standbys: proxmox1, proxmox3
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 128 pgs
    objects: 109.09k objects, 426GiB
    usage:   1.24TiB used, 1.38TiB / 2.62TiB avail
    pgs:     128 active+clean

  io:
    client: 136KiB/s wr, 0op/s rd, 11op/s wr
 
Hope that fixes your users' problems, as a freezing Outlook can have a variety of causes.
 
