Weird messages in syslog

hidalgo

Well-Known Member
Nov 11, 2016
After repairing and expanding my Ceph cluster, I found some weird messages in syslog that I couldn’t find any information about. What do they mean?

Code:
Apr 02 11:34:00 pve1 systemd[1]: Started Proxmox VE replication runner.
Apr 02 11:34:50 pve1 ceph-osd[2485790]: 2018-04-02 11:34:50.907319 7fb084c55700 -1 osd.5 pg_epoch: 17720 pg[2.14e( v 17720'141206 (17648'139703,17720'141206] local-lis/les=17480/17481 n=962 ec=15056/14564 lis/c 17480/17480 les/c/f 17481/17483/15039 17480/17480/17480) [5,6,1] r=0 lpr=17480 crt=17720'141206 lcod 17720'141205 mlcod 17720'141205 active+clean+scrubbing] _scan_snaps no head for 2:72899d32:::rbd_data.202d8774b0dc51.0000000000027800:14c (have MIN)
Apr 02 11:34:50 pve1 ceph-osd[30191]: 2018-04-02 11:34:50.910744 7f583d235700 -1 osd.1 pg_epoch: 17720 pg[2.14e( v 17720'141206 (17648'139703,17720'141206] local-lis/les=17480/17481 n=962 ec=15056/14564 lis/c 17480/17480 les/c/f 17481/17483/15039 17480/17480/17480) [5,6,1] r=2 lpr=17480 luod=0'0 crt=17720'141206 lcod 17720'141205 active] _scan_snaps no head for 2:72899d32:::rbd_data.202d8774b0dc51.0000000000027800:14c (have MIN)
Apr 02 11:34:50 pve1 ceph-osd[124135]: 2018-04-02 11:34:50.910938 7fd9c8006700 -1 osd.6 pg_epoch: 17720 pg[2.14e( v 17720'141206 (17648'139703,17720'141206] local-lis/les=17480/17481 n=962 ec=15056/14564 lis/c 17480/17480 les/c/f 17481/17483/15039 17480/17480/17480) [5,6,1] r=1 lpr=17480 luod=0'0 crt=17720'141206 lcod 17720'141205 active] _scan_snaps no head for 2:72899d32:::rbd_data.202d8774b0dc51.0000000000027800:14c (have MIN)
Apr 02 11:35:00 pve1 systemd[1]: Starting Proxmox VE replication runner...
Apr 02 11:35:00 pve1 systemd[1]: Started Proxmox VE replication runner.
Apr 02 11:36:00 pve1 systemd[1]: Starting Proxmox VE replication runner...
Apr 02 11:36:00 pve1 systemd[1]: Started Proxmox VE replication runner.

Apr 02 11:59:00 pve1 systemd[1]: Started Proxmox VE replication runner.
Apr 02 11:59:31 pve1 ceph-osd[189861]: 2018-04-02 11:59:31.065890 7fd4579cc700 -1 osd.2 pg_epoch: 17720 pg[2.144( v 17720'32882 (17648'31337,17720'32882] local-lis/les=16654/16655 n=980 ec=15056/14564 lis/c 16654/16654 les/c/f 16655/16655/15039 16654/16654/16654) [2,6,4] r=0 lpr=16654 crt=17720'32882 lcod 17720'32881 mlcod 17720'32881 active+clean+scrubbing] _scan_snaps no head for 2:228749cb:::rbd_data.1ff47f74b0dc51.0000000000000651:14a (have MIN)
Apr 02 11:59:31 pve1 ceph-osd[11455]: 2018-04-02 11:59:31.068504 7f3d0468e700 -1 osd.4 pg_epoch: 17720 pg[2.144( v 17720'32882 (17648'31337,17720'32882] local-lis/les=16654/16655 n=980 ec=15056/14564 lis/c 16654/16654 les/c/f 16655/16655/15039 16654/16654/16654) [2,6,4] r=2 lpr=16654 luod=0'0 crt=17720'32882 lcod 17720'32881 active] _scan_snaps no head for 2:228749cb:::rbd_data.1ff47f74b0dc51.0000000000000651:14a (have MIN)
Apr 02 11:59:31 pve1 ceph-osd[124135]: 2018-04-02 11:59:31.069351 7fd9c8006700 -1 osd.6 pg_epoch: 17720 pg[2.144( v 17720'32882 (17648'31337,17720'32882] local-lis/les=16654/16655 n=980 ec=15056/14564 lis/c 16654/16654 les/c/f 16655/16655/15039 16654/16654/16654) [2,6,4] r=1 lpr=16654 luod=0'0 crt=17720'32882 lcod 17720'32881 active] _scan_snaps no head for 2:228749cb:::rbd_data.1ff47f74b0dc51.0000000000000651:14a (have MIN)
Apr 02 12:00:00 pve1 systemd[1]: Starting Proxmox VE replication runner...
Apr 02 12:00:00 pve1 systemd[1]: Started Proxmox VE replication runner.
Apr 02 12:00:39 pve1 ceph-osd[2306831]: 2018-04-02 12:00:39.310391 7fd09aec7700 -1 osd.8 pg_epoch: 17720 pg[2.1fb( v 17720'240330 (17720'238769,17720'240330] local-lis/les=17036/17037 n=1010 ec=15056/14564 lis/c 17036/17036 les/c/f 17037/17037/15039 16839/17036/17036) [8,4,2] r=0 lpr=17036 crt=17720'240330 lcod 17720'240329 mlcod 17720'240329 active+clean+scrubbing] _scan_snaps no head for 2:dfbc10a1:::rbd_data.1ff47f74b0dc51.0000000000000caf:15a (have MIN)
Apr 02 12:00:39 pve1 ceph-osd[11455]: 2018-04-02 12:00:39.313822 7f3d05690700 -1 osd.4 pg_epoch: 17720 pg[2.1fb( v 17720'240330 (17720'238769,17720'240330] local-lis/les=17036/17037 n=1010 ec=15056/14564 lis/c 17036/17036 les/c/f 17037/17037/15039 16839/17036/17036) [8,4,2] r=1 lpr=17036 luod=0'0 crt=17720'240330 lcod 17720'240329 active] _scan_snaps no head for 2:dfbc10a1:::rbd_data.1ff47f74b0dc51.0000000000000caf:15a (have MIN)
Apr 02 12:00:39 pve1 ceph-osd[189861]: 2018-04-02 12:00:39.328167 7fd4589ce700 -1 osd.2 pg_epoch: 17720 pg[2.1fb( v 17720'240330 (17720'238769,17720'240330] local-lis/les=17036/17037 n=1010 ec=15056/14564 lis/c 17036/17036 les/c/f 17037/17037/15039 16839/17036/17036) [8,4,2] r=2 lpr=17036 luod=0'0 crt=17720'240330 lcod 17720'240329 active] _scan_snaps no head for 2:dfbc10a1:::rbd_data.1ff47f74b0dc51.0000000000000caf:15a (have MIN)
Apr 02 12:01:00 pve1 systemd[1]: Starting Proxmox VE replication runner...
Apr 02 12:01:00 pve1 sys
 
Check the Ceph logs ('/var/log/ceph/'); there should be more information there. But it looks like the scrubbing found some inconsistencies.
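As a starting point, something like the following might help narrow down whether scrubbing actually flagged anything (run on a cluster node; the PG ID at the end is just a placeholder for whatever the earlier commands report):
Code:
# Overall cluster state and any health warnings/errors
ceph -s
ceph health detail

# Search this node's OSD logs for scrub-related errors around the timestamps above
grep -i 'scrub' /var/log/ceph/ceph-osd.*.log | grep -iE 'error|inconsistent'

# List placement groups currently flagged as inconsistent (if any)
ceph pg ls inconsistent

# For a flagged PG, show which objects failed the scrub
rados list-inconsistent-obj <pgid> --format=json-pretty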
 
Here is part of the log file:
Code:
2018-04-04 22:06:56.387410 7fd4621e1700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_write.cc:684] reusing log 14798 from recycle list
2018-04-04 22:06:56.387472 7fd4621e1700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_write.cc:725] [default] New memtable created with log file: #14803. Immutable memtables: 0.
2018-04-04 22:06:56.398619 7fd4519c0700  4 rocksdb: (Original Log Time 2018/04/04-22:06:56.387496) [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_compaction_flush.cc:1158] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots allowed 1, compaction slots scheduled 1
2018-04-04 22:06:56.398644 7fd4519c0700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/flush_job.cc:264] [default] [JOB 3339] Flushing memtable with next log file: 14803
2018-04-04 22:06:56.398654 7fd4519c0700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1522872416398649, "job": 3339, "event": "flush_started", "num_memtables": 1, "num_entries": 374209, "num_deletes": 83384, "memory_usage": 260049320}
2018-04-04 22:06:56.398656 7fd4519c0700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/flush_job.cc:293] [default] [JOB 3339] Level-0 flush table #14804: started
2018-04-04 22:06:56.523878 7fd4519c0700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1522872416523868, "cf_name": "default", "job": 3339, "event": "table_file_creation", "file_number": 14804, "file_size": 2767195, "table_properties": {"data_size": 2494621, "index_size": 38439, "filter_size": 233147, "raw_key_size": 4713219, "raw_average_key_size": 51, "raw_value_size": 1096818, "raw_average_value_size": 12, "num_data_blocks": 603, "num_entries": 90966, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "83029", "kMergeOperands": "677"}}
2018-04-04 22:06:56.523888 7fd4519c0700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/flush_job.cc:319] [default] [JOB 3339] Level-0 flush table #14804: 2767195 bytes OK
2018-04-04 22:06:56.534841 7fd4519c0700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_files.cc:242] adding log 14800 to recycle list
2018-04-04 22:06:56.534848 7fd4519c0700  4 rocksdb: (Original Log Time 2018/04/04-22:06:56.523895) [/home/builder/source/ceph-12.2.4/src/rocksdb/db/memtable_list.cc:360] [default] Level-0 commit table #14804 started
2018-04-04 22:06:56.534851 7fd4519c0700  4 rocksdb: (Original Log Time 2018/04/04-22:06:56.534808) [/home/builder/source/ceph-12.2.4/src/rocksdb/db/memtable_list.cc:383] [default] Level-0 commit table #14804: memtable #1 done
2018-04-04 22:06:56.534853 7fd4519c0700  4 rocksdb: (Original Log Time 2018/04/04-22:06:56.534818) EVENT_LOG_v1 {"time_micros": 1522872416534814, "job": 3339, "event": "flush_finished", "lsm_state": [1, 1, 14, 0, 0, 0, 0], "immutable_memtables": 0}
2018-04-04 22:06:56.534854 7fd4519c0700  4 rocksdb: (Original Log Time 2018/04/04-22:06:56.534833) [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: baselevel 1 max bytes base 268435456 files[1 1 14 0 0 0 0] max score 0.89
2018-04-04 22:06:56.534860 7fd4519c0700  4 rocksdb: [/home/builder/source/ceph-12.2.4/src/rocksdb/db/db_impl_files.cc:388] [JOB 3339] Try to delete WAL files size 252713966, prev total WAL file size 252794381, number of live WAL files 2.
2018-04-04 22:07:47.212659 7fd4579cc700  0 log_channel(cluster) log [DBG] : 2.1e9 scrub starts
2018-04-04 22:07:47.524779 7fd4579cc700 -1 osd.2 pg_epoch: 17752 pg[2.1e9( v 17752'34175 (17752'32655,17752'34175] local-lis/les=16654/16655 n=1012 ec=15056/14564 lis/c 16654/16654 les/c/f 16655/16655/15039 16654/16654/16654) [2,1,6] r=0 lpr=16654 crt=17752'34175 lcod 17752'34173 mlcod 17752'34173 active+clean+scrubbing] _scan_snaps no head for 2:97f2799f:::rbd_data.1ff47f74b0dc51.0000000000000589:14a (have MIN)
2018-04-04 22:07:47.588284 7fd4579cc700  0 log_channel(cluster) log [DBG] : 2.1e9 scrub ok
2018-04-04 22:07:48.543511 7fd4579cc700 -1 osd.2 pg_epoch: 17752 pg[2.126( v 17752'124021 (17648'122468,17752'124021] local-lis/les=16654/16655 n=1026 ec=15056/14564 lis/c 16654/16654 les/c/f 16655/16655/15039 16654/16654/15169) [1,4,2] r=2 lpr=16654 luod=0'0 crt=17752'124021 lcod 17752'124020 active] _scan_snaps no head for 2:64964eaf:::rbd_data.1fee9774b0dc51.0000000000001002:1c1 (have MIN)
2018-04-04 22:07:48.543538 7fd4579cc700 -1 osd.2 pg_epoch: 17752 pg[2.126( v 17752'124021 (17648'122468,17752'124021] local-lis/les=16654/16655 n=1026 ec=15056/14564 lis/c 16654/16654 les/c/f 16655/16655/15039 16654/16654/15169) [1,4,2] r=2 lpr=16654 luod=0'0 crt=17752'124021 lcod 17752'124020 active] _scan_snaps no head for 2:64964eaf:::rbd_data.1fee9774b0dc51.0000000000001002:158 (have MIN)
I don’t understand what’s going on. I just see the same lines as before. What should I do?
 
