Hello,
on our new 3-node cluster with a fresh Ceph installation, we continuously get these messages on all 3 nodes in the Ceph log.
pveversion: pve-manager/5.4-4/97a96833 (running kernel: 4.15.18-12-pve)
the cluster contains these hosts:
pve-hp-01 (7 OSDs)
pve-hp-02 (7 OSDs)
pve-hp-03 (8 OSDs)
but the message always says ...pve-hp-01... ?
any suggestions?
2019-04-26 15:54:12.295491 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7453 : cluster [DBG] pgmap v7509: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:14.315197 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7454 : cluster [DBG] pgmap v7510: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:16.335828 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7455 : cluster [DBG] pgmap v7511: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:18.355260 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7456 : cluster [DBG] pgmap v7512: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:20.375540 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7457 : cluster [DBG] pgmap v7513: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:22.395502 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7458 : cluster [DBG] pgmap v7514: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
2019-04-26 15:54:24.415556 mgr.pve-hp-01 client.4102 172.15.0.91:0/3296207206 7459 : cluster [DBG] pgmap v7515: 1024 pgs: 1024 active+clean; 0B data, 22.2GiB used, 19.2TiB / 19.2TiB avail
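I assume these pgmap messages come from whichever mgr daemon is currently active, which would explain why it is always pve-hp-01. For reference, the active manager can be checked with the standard Ceph CLI (nothing cluster-specific assumed here):

ceph mgr stat    # prints the name of the currently active mgr
ceph -s          # full cluster status, including the mgr line with active/standby daemons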