ceph 16.2 pacific cluster crash

Waschbüsch

Hi all,

after an upgrade (on Friday night) to Proxmox 7.x and Ceph 16.2, everything seemed to work perfectly.
Sometime early this morning (Sunday), the cluster crashed.
17 out of 24 OSDs will no longer start.

Most of them will complete a successful
Code:
ceph-bluestore-tool fsck
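For reference, the invocation I mean looks roughly like the following; the OSD path is just an illustrative example, and the OSD has to be stopped first.

Code:
# read-only consistency check of a stopped OSD (adjust the path to the OSD in question)
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0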

but some hit an assertion (just like when trying to start them):

Code:
./src/os/bluestore/BlueFS.cc: In function 'void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)' thread 7f95ca004240 time 2021-07-11T07:53:55.341113+0000
./src/os/bluestore/BlueFS.cc: 2340: FAILED ceph_assert(r == 0)
2021-07-11T07:53:55.337+0000 7f95ca004240 -1 bluefs _allocate allocation failed, needed 0x400000
 ceph version 16.2.4 (a912ff2c95b1f9a8e2e48509e602ee008d5c9434) pacific (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x124) [0x7f95cac1d1b6]
 2: /usr/lib/ceph/libceph-common.so.2(+0x24f341) [0x7f95cac1d341]
 3: (BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)+0x1a4d) [0x555a1c7e05ed]
 4: (BlueFS::sync_metadata(bool)+0x115) [0x555a1c7e0985]
 5: (BlueFS::umount(bool)+0x1b4) [0x555a1c7e0e34]
 6: (BlueStore::_close_bluefs(bool)+0x14) [0x555a1c7ff284]
 7: (BlueStore::_close_db_and_around(bool)+0xd) [0x555a1c8360dd]
 8: (BlueStore::_fsck(BlueStore::FSCKDepth, bool)+0x258) [0x555a1c88cfb8]
 9: main()
 10: __libc_start_main()
 11: _start()
*** Caught signal (Aborted) **
 in thread 7f95ca004240 thread_name:ceph-bluestore-
2021-07-11T07:53:55.341+0000 7f95ca004240 -1 ./src/os/bluestore/BlueFS.cc: In function 'void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)' thread 7f95ca004240 time 2021-07-11T07:53:55.341113+0000
./src/os/bluestore/BlueFS.cc: 2340: FAILED ceph_assert(r == 0)

Some of the last non-assertion messages were that OSDs were running full, which would make sense if enough of them died to fill the rest (cluster usage was around 65% - 70%, so it is conceivable).
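(As an aside, a quick way to see per-OSD fullness is something like the command below; the exact columns differ a bit between releases.)

Code:
# show utilization per OSD, grouped by the CRUSH tree
ceph osd df tree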

Anyway, I have to go back to backups and will downgrade ceph to octopus again and reformat the osds (some services are critical and cannot wait for extensive recovery attempts).
However, I thought I'd let people know - maybe there still is something off with ceph 16.2.
If any of the logs I have could be useful for analysis - please let me know and I'll send them.
 
Hi,
In function 'void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)' thread 7f95ca004240 time 2021-07-11T07:53:55.341113+0000 ./src/os/bluestore/BlueFS.cc: 2340: FAILED ceph_assert(r == 0)

Seems those really ran out of space, as the value being asserted here is the return value of _allocate, which only returns a non-zero value on ENOSPC.

Since, IIRC, the RocksDB data is stored there directly, the data-space usage of Ceph may not mean a lot here.
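If you want to sanity-check how full BlueFS itself thinks its devices are while an OSD is down, something along these lines should work (the path is just an illustrative example):

Code:
# print BlueFS device sizes and free space for a stopped OSD
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-0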

A few questions:
  1. This is the log of the fsck run after the crash only, do you have any logs from (around) the crash itself?
  2. How is (or was) the layout of OSDs here?
    1. Separate Journal?
    2. Separate DB?
  3. How much data are we talking, lots of snapshots, ...?
 
24 1TB SSDs, 8 per node, everything was on the OSD itself.
No snapshots, usage at roughly 65% / 70%.
One problem seems to have been a backup going to a CephFS share that (for reasons I have yet to understand) was way bigger than on previous days and seems to have caused the out-of-space issue.
The log is too big to attach here, can I send it per mail?
 
After having restored everything from backup, I looked through the fsck attempts and saw loads of these:

2021-07-11T07:07:56.463+0000 7f1db9dfd240 -1 bluestore(/var/lib/ceph/osd/ceph-20) fsck warning: #3:ed18366f:::rbd_header.46a0c2c6d8b10c:head# has omap that is not per-pg or pgmeta

I cannot recall having skipped any steps during this (or previous) upgrade of Ceph, but my guess is that this means the OSD was not working according to the latest and greatest BlueStore feature set?
I had no Health warning, though, so I assume that not all OSDs can have been like this.
I would have expected

BLUESTORE_NO_PER_POOL_OMAP

or

BLUESTORE_NO_PER_PG_OMAP

as a health warning otherwise?
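For anyone wanting to check their own cluster, the quickest way I know of to look for those warnings is something like:

Code:
# show full health detail and filter for bluestore-related warnings
ceph health detail | grep -i bluestore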
 
For what it's worth, I had a similar issue which presented itself with:

Code:
Jul 15 00:28:53 sh-prox02 systemd[1]: ceph-osd@2.service: Scheduled restart job, restart counter is at 6.
Jul 15 00:28:53 sh-prox02 systemd[1]: Stopped Ceph object storage daemon osd.2.
Jul 15 00:28:53 sh-prox02 systemd[1]: ceph-osd@2.service: Consumed 14.647s CPU time.
Jul 15 00:28:53 sh-prox02 systemd[1]: ceph-osd@2.service: Start request repeated too quickly.
Jul 15 00:28:53 sh-prox02 systemd[1]: ceph-osd@2.service: Failed with result 'signal'.
Jul 15 00:28:53 sh-prox02 systemd[1]: Failed to start Ceph object storage daemon osd.2.


Digging through the logs revealed:
Code:
bluefs _allocate allocation failed, needed 0x400000
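(In case it helps anyone: I simply grepped the journal of the failing unit, roughly like below; adjust the OSD id to your own.)

Code:
# inspect the service log of a single OSD for the bluefs allocation failure
journalctl -u ceph-osd@2.service | grep -i 'bluefs _allocate'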

Looking at this bug report, there were some comments stating that changing bluestore_allocator from hybrid over to bitmap seemed to work around the issue.
For my part, I've set bluestore_allocator to bitmap on my pure NVMe OSDs, which allowed the OSDs to start.

https://tracker.ceph.com/issues/50656
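One way to apply that, assuming the cluster uses the centralized config database (it can equally go into ceph.conf under the [osd] section); the OSD id is just an example:

Code:
# set the allocator for all OSDs via the monitor config store, then restart the affected OSD
ceph config set osd bluestore_allocator bitmap
systemctl restart ceph-osd@2.service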

EDIT: I was presented with this issue some time after upgrading from Ceph Octopus to Pacific, following a recent upgrade to PVE 7 from 6.4.
Weirdly enough, it seemed to affect only the NVMe drives. All other drives (spinners (with DB/WAL on NVMe LVM) and ghetto SSDs) have started fine and seem to be operational.
 
Interesting. Thank you for sharing this!
Since I reformatted after downgrading to octopus, I guess I will have to keep the information about the allocator handy for when / if I upgrade to pacific again.
My OSDs were all enterprise / DC class SSDs, albeit SATA ones (Samsung SM863, Samsung PM883, Intel S3520). And it hit me two days after the upgrade.
It is hard to pinpoint now, obviously, but from what I saw, my suspicion is that the OSDs that ended up failing were ones I had originally added when the cluster was running Nautilus.
Sometime after upgrading to Octopus, I had added 6 OSDs and might have reformatted one (or it never got to the point where it was corrupted), which would fit perfectly with the 7 out of 24 still running and everything else not coming back up...
 
Oh and, since that might have come across the wrong way: I do not suggest that it has to do with upgrading from an older ceph version per se, but rather that the OSDs affected had been the longest-running and seen the most reorganizing, reshuffling, etc.
 
In my specific case it didn't seem to matter if the OSD was previously created on Nautilus or Octopus.

It was only my pure NVMe (Intel P3500 U.2/PCIe) drives which failed. These were created while on 15.2.4. I could try to delete and recreate one. Considering it seems to be a corner-case bug, it might be worthwhile trying out. I have no idea what bluestore_allocator = bitmap actually does, but considering it's not the default setting, I'm not too eager to use it.
 
bitmap allocator was the default allocator for octopus (or nautilus, I don't remember exactly).
so it's fine to use it.

hybrid allocator was introduced for better performance on hdd disks. (bitmap allocator is fast on ssd, but a little bit slower on hdd.)

basically, the allocator is deciding "where to write the object without having too much fragmentation"
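to check what a given OSD is actually running with, something like this should do (osd.0 is just an example; the "ceph daemon" calls need to run on the OSD's host while the OSD is up, and the fragmentation score command is only there on reasonably recent releases):

Code:
# what the config database hands out to OSDs
ceph config get osd bluestore_allocator
# what a specific running OSD is actually using
ceph daemon osd.0 config get bluestore_allocator
# rough fragmentation score of the main device (closer to 1 = more fragmented)
ceph daemon osd.0 bluestore allocator score block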
 
Do you have any crash dump to get more details?
(#ceph crash ls, ceph crash info <id>)

Sure, you can check out an example here. This is from one of the NVMe drives which kept failing until I changed the bluestore allocator (thanks a lot for explaining that part).

Code:
root@sh-prox04:~# ceph crash info 2021-07-14T23:35:41.251654Z_7f7bd234-3dbe-4b33-a769-49a8d0c1928d
{
    "assert_condition": "r == 0",
    "assert_file": "./src/os/bluestore/BlueFS.cc",
    "assert_func": "void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)",
    "assert_line": 2340,
    "assert_msg": "./src/os/bluestore/BlueFS.cc: In function 'void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)' thread 7f797dcd1f00 time 2021-07-15T01:35:41.196562+0200\n./src/os/bluestore/BlueFS.cc: 2340: FAILED ceph_assert(r == 0)\n",
    "assert_thread_name": "ceph-osd",
    "backtrace": [
        "/lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f797e329140]",
        "gsignal()",
        "abort()",
        "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x55aea579e65c]",
        "/usr/bin/ceph-osd(+0xab879d) [0x55aea579e79d]",
        "(BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)+0x1a4d) [0x55aea5e9801d]",
        "(BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x67) [0x55aea5e98287]",
        "(BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x100) [0x55aea5ead0c0]",
        "(rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x48) [0x55aea636c1ae]",
        "(rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x338) [0x55aea6546f08]",
        "(rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x5d7) [0x55aea654548b]",
        "(rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0x11d) [0x55aea670f4c7]",
        "(rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x7d0) [0x55aea670f2ae]",
        "(rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x48) [0x55aea670eaca]",
        "(rocksdb::BlockBasedTableBuilder::Flush()+0x9a) [0x55aea670ea7a]",
        "(rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x197) [0x55aea670e5af]",
        "(rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, rocksdb::InternalIteratorBase<rocksdb::Slice>*, std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >, std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned long, unsigned long, rocksdb::Env::WriteLifeTimeHint, unsigned long)+0x782) [0x55aea6691922]",
        "(rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x5ea) [0x55aea640a186]",
        "(rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1ad1) [0x55aea6408dfd]",
        "(rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0x159e) [0x55aea6406334]",
        "(rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x677) [0x55aea640b62d]",
        "(rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x52) [0x55aea640aa04]",
        "(RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10a6) [0x55aea631c986]",
        "(BlueStore::_open_db(bool, bool, bool)+0x9c9) [0x55aea5d97cf9]",
        "(BlueStore::_open_db_and_around(bool, bool)+0x332) [0x55aea5de70d2]",
        "(BlueStore::_mount()+0x191) [0x55aea5de9a81]",
        "(OSD::init()+0x58d) [0x55aea589466d]",
        "main()",
        "__libc_start_main()",
        "_start()"
    ],
    "ceph_version": "16.2.4",
    "crash_id": "2021-07-14T23:35:41.251654Z_7f7bd234-3dbe-4b33-a769-49a8d0c1928d",
    "entity_name": "osd.0",
    "os_id": "11",
    "os_name": "Debian GNU/Linux 11 (bullseye)",
    "os_version": "11 (bullseye)",
    "os_version_id": "11",
    "process_name": "ceph-osd",
    "stack_sig": "17cb7e455b1474befda7ccac5034140984020e3b86150b18f13b505d18ab80f9",
    "timestamp": "2021-07-14T23:35:41.251654Z",
    "utsname_hostname": "sh-prox03",
    "utsname_machine": "x86_64",
    "utsname_release": "5.11.22-2-pve",
    "utsname_sysname": "Linux",
    "utsname_version": "#1 SMP PVE 5.11.22-3 (Sun, 11 Jul 2021 13:45:15 +0200)"
 
Looks like I have to wait for 16.2.5, because I don't have a great idea how to step back to Octopus. I made the stupid mistake of telling Ceph to allow only Pacific, and I think it's not possible to downgrade.... :rolleyes:

This is my situation:
Code:
root@pve2:/dev# ceph crash info 2021-07-19T03:12:11.100540Z_6c0eb1fa-c928-4f3e-9925-e23f459d0139
{
    "assert_condition": "r == 0",
    "assert_file": "./src/os/bluestore/BlueFS.cc",
    "assert_func": "void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)",
    "assert_line": 2340,
    "assert_msg": "./src/os/bluestore/BlueFS.cc: In function 'void BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)' thread 7f8cdebe4f00 time 2021-07-19T06:12:11.065154+0300\n./src/os/bluestore/BlueFS.cc: 2340: FAILED ceph_assert(r == 0)\n",
    "assert_thread_name": "ceph-osd",
    "backtrace": [
        "/lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f8cdf23c140]",
        "gsignal()",
        "abort()",
        "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x16e) [0x559aecefe65c]",
        "/usr/bin/ceph-osd(+0xab879d) [0x559aecefe79d]",
        "(BlueFS::_compact_log_async(std::unique_lock<std::mutex>&)+0x1a4d) [0x559aed5f801d]",
        "(BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x67) [0x559aed5f8287]",
        "(BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x100) [0x559aed60d0c0]",
        "(rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x48) [0x559aedacc1ae]",
        "(rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x338) [0x559aedca6f08]",
        "(rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x5d7) [0x559aedca548b]",
        "(rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0x11d) [0x559aede6f4c7]",
        "(rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x7d0) [0x559aede6f2ae]",
        "(rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x48) [0x559aede6eaca]",
        "(rocksdb::BlockBasedTableBuilder::Flush()+0x9a) [0x559aede6ea7a]",
        "(rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x197) [0x559aede6e5af]",
        "(rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, rocksdb::InternalIteratorBase<rocksdb::Slice>*, std::vector<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> >, std::allocator<std::unique_ptr<rocksdb::FragmentedRangeTombstoneIterator, std::default_delete<rocksdb::FragmentedRangeTombstoneIterator> > > >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::SnapshotChecker*, rocksdb::CompressionType, unsigned long, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int, unsigned long, unsigned long, rocksdb::Env::WriteLifeTimeHint, unsigned long)+0x782) [0x559aeddf1922]",
        "(rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x5ea) [0x559aedb6a186]",
        "(rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1ad1) [0x559aedb68dfd]",
        "(rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0x159e) [0x559aedb66334]",
        "(rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x677) [0x559aedb6b62d]",
        "(rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x52) [0x559aedb6aa04]",
        "(RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10a6) [0x559aeda7c986]",
        "(BlueStore::_open_db(bool, bool, bool)+0x9c9) [0x559aed4f7cf9]",
        "(BlueStore::_open_db_and_around(bool, bool)+0x332) [0x559aed5470d2]",
        "(BlueStore::_mount()+0x191) [0x559aed549a81]",
        "(OSD::init()+0x58d) [0x559aecff466d]",
        "main()",
        "__libc_start_main()",
        "_start()"
    ],
    "ceph_version": "16.2.4",
    "crash_id": "2021-07-19T03:12:11.100540Z_6c0eb1fa-c928-4f3e-9925-e23f459d0139",
    "entity_name": "osd.1",
    "os_id": "11",
    "os_name": "Debian GNU/Linux 11 (bullseye)",
    "os_version": "11 (bullseye)",
    "os_version_id": "11",
    "process_name": "ceph-osd",
    "stack_sig": "17cb7e455b1474befda7ccac5034140984020e3b86150b18f13b505d18ab80f9",
    "timestamp": "2021-07-19T03:12:11.100540Z",
    "utsname_hostname": "pve2",
    "utsname_machine": "x86_64",
    "utsname_release": "5.11.22-2-pve",
    "utsname_sysname": "Linux",
    "utsname_version": "#1 SMP PVE 5.11.22-3 (Sun, 11 Jul 2021 13:45:15 +0200)"
}
 
Pardon me... Where do I need to put this "bluestore_allocator = bitmap" so that it works like it should? :oops::)
 
