Ceph 16.2.7 Pacific cluster Crash

cyp

Hi all,

A few days after an Octopus to Pacific upgrade, I have a crashed Ceph cluster.
Most of the OSDs are down (6 of 8) and they crash on start.
This looks a lot like https://forum.proxmox.com/threads/ceph-16-2-pacific-cluster-crash.92367/, but switching bluestore_allocator and bluefs_allocator to bitmap mode did not help.
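
For reference, the allocator switch I tried is just the following (a minimal sketch of the change; the same options can also be set with "ceph config set osd ..." instead of ceph.conf), followed by restarting the OSDs:

Code:
[osd]
bluestore_allocator = bitmap
bluefs_allocator = bitmap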

Truncated log output from a crashed OSD below (the full log is too long to post).

Any advice?


Code:
janv. 06 14:53:08 pve11 systemd[1]: Starting Ceph object storage daemon osd.0...
janv. 06 14:53:08 pve11 systemd[1]: Started Ceph object storage daemon osd.0.
janv. 06 14:53:19 pve11 ceph-osd[24802]: 2022-01-06T14:53:19.214+0100 7f2d01c05f00 -1 bluefs _allocate allocation failed, needed 0x8025e
janv. 06 14:53:19 pve11 ceph-osd[24802]: 2022-01-06T14:53:19.214+0100 7f2d01c05f00 -1 bluefs _flush_range allocated: 0x0 offset: 0x0 length: 0x8025e
janv. 06 14:53:19 pve11 ceph-osd[24802]: ./src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f2d01c05f00 time 2022-01-06T14:53:19.219438+0100
janv. 06 14:53:19 pve11 ceph-osd[24802]: ./src/os/bluestore/BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
janv. 06 14:53:19 pve11 ceph-osd[24802]:  ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)
janv. 06 14:53:19 pve11 ceph-osd[24802]:  1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xd3) [0x564c8796f0df]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  2: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0x9bd) [0x564c88057bbd]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  3: (BlueFS::_flush(BlueFS::FileWriter*, bool, bool*)+0x9a) [0x564c880581ca]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  4: (BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x2f) [0x564c8806945f]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  5: (BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x100) [0x564c880817d0]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  6: (rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x48) [0x564c8854824e]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  7: (rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x338) [0x564c88722d18]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  8: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x5d7) [0x564c8872129b]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  9: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0x11d) [0x564c888eb2d7]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  10: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x7d0) [0x564c888eb0be]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  11: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x48) [0x564c888ea8da]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  12: (rocksdb::BlockBasedTableBuilder::Flush()+0x9a) [0x564c888ea88a]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  13: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x197) [0x564c888ea3bf]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  14: (rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, ro>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x5ea) [0x564c885e6226]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  16: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1ad1) [0x564c885e4e9d]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  17: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0x159e) [0x564c885e23d4]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  18: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnF>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  19: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamil>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  20: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10a6) [0x564c884f78b6]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  21: (BlueStore::_open_db(bool, bool, bool)+0xa19) [0x564c87f75b19]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  22: (BlueStore::_open_db_and_around(bool, bool)+0x332) [0x564c87fbab92]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  23: (BlueStore::_mount()+0x191) [0x564c87fbd531]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  24: (OSD::init()+0x58d) [0x564c87a645ed]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  25: main()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  26: __libc_start_main()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  27: _start()
janv. 06 14:53:19 pve11 ceph-osd[24802]: *** Caught signal (Aborted) **
janv. 06 14:53:19 pve11 ceph-osd[24802]:  in thread 7f2d01c05f00 thread_name:ceph-osd
janv. 06 14:53:19 pve11 ceph-osd[24802]: 2022-01-06T14:53:19.234+0100 7f2d01c05f00 -1 ./src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f2d01c05f00 time 2022-01-06T14:53:19.219438+0100
janv. 06 14:53:19 pve11 ceph-osd[24802]: ./src/os/bluestore/BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
janv. 06 14:53:19 pve11 ceph-osd[24802]:  ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)
janv. 06 14:53:19 pve11 ceph-osd[24802]:  1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xd3) [0x564c8796f0df]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  2: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0x9bd) [0x564c88057bbd]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  3: (BlueFS::_flush(BlueFS::FileWriter*, bool, bool*)+0x9a) [0x564c880581ca]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  4: (BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x2f) [0x564c8806945f]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  5: (BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x100) [0x564c880817d0]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  6: (rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x48) [0x564c8854824e]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  7: (rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x338) [0x564c88722d18]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  8: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x5d7) [0x564c8872129b]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  9: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0x11d) [0x564c888eb2d7]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  10: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x7d0) [0x564c888eb0be]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  11: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x48) [0x564c888ea8da]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  12: (rocksdb::BlockBasedTableBuilder::Flush()+0x9a) [0x564c888ea88a]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  13: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x197) [0x564c888ea3bf]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  14: (rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, ro>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  15: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x5ea) [0x564c885e6226]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  16: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool, bool*)+0x1ad1) [0x564c885e4e9d]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  17: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool, unsigned long*)+0x159e) [0x564c885e23d4]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  18: (rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnF>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  19: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamil>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  20: (RocksDBStore::do_open(std::ostream&, bool, bool, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x10a6) [0x564c884f78b6]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  21: (BlueStore::_open_db(bool, bool, bool)+0xa19) [0x564c87f75b19]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  22: (BlueStore::_open_db_and_around(bool, bool)+0x332) [0x564c87fbab92]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  23: (BlueStore::_mount()+0x191) [0x564c87fbd531]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  24: (OSD::init()+0x58d) [0x564c87a645ed]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  25: main()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  26: __libc_start_main()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  27: _start()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)
janv. 06 14:53:19 pve11 ceph-osd[24802]:  1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140) [0x7f2d0225d140]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  2: gsignal()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  3: abort()
janv. 06 14:53:19 pve11 ceph-osd[24802]:  4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x18a) [0x564c8796f196]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  5: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0x9bd) [0x564c88057bbd]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  6: (BlueFS::_flush(BlueFS::FileWriter*, bool, bool*)+0x9a) [0x564c880581ca]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  7: (BlueFS::_flush(BlueFS::FileWriter*, bool, std::unique_lock<std::mutex>&)+0x2f) [0x564c8806945f]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  8: (BlueRocksWritableFile::Append(rocksdb::Slice const&)+0x100) [0x564c880817d0]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  9: (rocksdb::LegacyWritableFileWrapper::Append(rocksdb::Slice const&, rocksdb::IOOptions const&, rocksdb::IODebugContext*)+0x48) [0x564c8854824e]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  10: (rocksdb::WritableFileWriter::WriteBuffered(char const*, unsigned long)+0x338) [0x564c88722d18]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  11: (rocksdb::WritableFileWriter::Append(rocksdb::Slice const&)+0x5d7) [0x564c8872129b]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  12: (rocksdb::BlockBasedTableBuilder::WriteRawBlock(rocksdb::Slice const&, rocksdb::CompressionType, rocksdb::BlockHandle*, bool)+0x11d) [0x564c888eb2d7]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  13: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::Slice const&, rocksdb::BlockHandle*, bool)+0x7d0) [0x564c888eb0be]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  14: (rocksdb::BlockBasedTableBuilder::WriteBlock(rocksdb::BlockBuilder*, rocksdb::BlockHandle*, bool)+0x48) [0x564c888ea8da]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  15: (rocksdb::BlockBasedTableBuilder::Flush()+0x9a) [0x564c888ea88a]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  16: (rocksdb::BlockBasedTableBuilder::Add(rocksdb::Slice const&, rocksdb::Slice const&)+0x197) [0x564c888ea3bf]
janv. 06 14:53:19 pve11 ceph-osd[24802]:  17: (rocksdb::BuildTable(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb::Env*, rocksdb::FileSystem*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::FileOptions const&, rocksdb::TableCache*, ro>
janv. 06 14:53:19 pve11 ceph-osd[24802]:  18: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x5ea) [0x564c885e6226]
...
 
Hi,

Sounds like we're in the same boat...

Code:
Jan 16 02:30:17 pve3 ceph-osd[33049]:  27: (OSD::init()+0x58d) [0x558dfdc8e5ed]
Jan 16 02:30:17 pve3 ceph-osd[33049]:  28: main()
Jan 16 02:30:17 pve3 ceph-osd[33049]:  29: __libc_start_main()
Jan 16 02:30:17 pve3 ceph-osd[33049]:  30: _start()
Jan 16 02:30:17 pve3 ceph-osd[33049]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Jan 16 02:30:17 pve3 ceph-osd[33049]:     -3> 2022-01-16T02:30:17.266+0100 7fb820318f00 -1 bluefs _allocate allocation failed, needed 0x8025e
Jan 16 02:30:17 pve3 ceph-osd[33049]:     -2> 2022-01-16T02:30:17.266+0100 7fb820318f00 -1 bluefs _flush_range allocated: 0x0 offset: 0x0 length: 0x8025e
Jan 16 02:30:17 pve3 ceph-osd[33049]:     -1> 2022-01-16T02:30:17.278+0100 7fb820318f00 -1 ./src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::
FileWriter*, uint64_t, uint64_t)' thread 7fb820318f00 time 2022-01-16T02:30:17.272586+0100
Jan 16 02:30:17 pve3 ceph-osd[33049]: ./src/os/bluestore/BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")
Jan 16 02:30:17 pve3 ceph-osd[33049]:  ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)

Trigger:
- Usage was at about 93% on a CephFS pool and 87% on an RBD pool.
- A Proxmox "snapshot" backup job was triggered at 1am. (edit: check the bug report below, the problem was already present before the snapshot)
- One of the nodes ended up with a load of 100. After waiting 1 hour, I had to pull the power to bring the Proxmox server back.
(The backup target is not CephFS. The only thing related to Ceph might be the snapshot taken for the backup, on an almost unused CT anyway. So that's really unexpected behavior.)

=> A 3-node cluster down for 2 hours, with the same behavior on all 3 nodes and no clue how to recover the situation.

How to get back online?
 
Additional info:

Code:
root@pve1:~# ceph status
  cluster:
    id:     e7628d51-32b5-4f5c-8eec-1cafb41ead74
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs report slow metadata IOs
            mon pve3 is low on available space
            2 osds down
            3 hosts (3 osds) down
            1 root (3 osds) down
            Reduced data availability: 82 pgs inactive
            12 daemons have recently crashed
 
  services:
    mon: 3 daemons, quorum pve1,pve3,pve2 (age 8h)
    mgr: pve1(active, since 9h), standbys: pve3, pve2
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 0 up (since 8h), 2 in (since 10h)
 
  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   4 pools, 82 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             82 unknown

The interesting line here is:
osd: 3 osds: 0 up (since 8h), 2 in (since 10h)

At that time:

Code:
Jan 16 00:48:33 pve1 ceph-osd[2157]: 2022-01-16T00:48:33.904+0100 7f3b78139700 -1 bluefs _allocate allocation failed, needed 0x6ff1
Jan 16 00:48:33 pve1 ceph-osd[2157]: 2022-01-16T00:48:33.904+0100 7f3b78139700 -1 bluefs _flush_range allocated: 0x8720000 offset: 0x871f8f4 length: 0x76fd
Jan 16 00:48:33 pve1 ceph-osd[2157]: ./src/os/bluestore/BlueFS.cc: In function 'int BlueFS::_flush_range(BlueFS::FileWriter*, uint64_t, uint64_t)' thread 7f3b78139700
time 2022-01-16T00:48:33.909493+0100
Jan 16 00:48:33 pve1 ceph-osd[2157]: ./src/os/bluestore/BlueFS.cc: 2768: ceph_abort_msg("bluefs enospc")

- So it seems that one of the OSDs (osd.1) became full.
- Ceph decided to kill the process.
- After some time, Ceph decided to rebalance PGs, which filled the little space remaining on the 2 other OSDs, leading to the same "bluefs enospc" on all nodes.

I'm still looking for a solution, but I think I'll try to add an external OSD with free disk space to the cluster and see what happens. Any better advice?
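
If nothing better comes up, adding a spare disk as an extra OSD on one of the nodes would look something like this (just a sketch of what I have in mind; /dev/sdX is a placeholder for an unused device, and it only helps if the new OSD actually manages to start):

Code:
# on the node with the spare disk (device name is an example)
pveceph osd create /dev/sdX
# or with plain Ceph tooling:
# ceph-volume lvm create --data /dev/sdX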
 
More details in the bug report : https://tracker.ceph.com/issues/53899

At the end of the bug report, you'll see that I did find a solution to get out of this problem by extending the underlying LV. (I run bluestore on LVM).
But really no clue (yet) on how to recover from this situation on physical devices that cannot be extended.
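
For the record, the LV extension route looks roughly like this (a sketch; the VG/LV names, the size and the OSD id are placeholders that have to match your setup):

Code:
# grow the LV backing the failing OSD (names and size are examples)
lvextend -L +20G /dev/ceph-vg/osd-block-...
# let BlueFS/BlueStore pick up the new device size
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-1
# then start the OSD again
systemctl start ceph-osd@1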

My 2 cents, as long as this bug is not fixed: keep some margin, and never ever go over 90% of the capacity of n-1 hosts if you have min replicas set to 2 (the default), even temporarily. So, basically, max 60% usage on a 3-node cluster... Because once you are in this situation where all OSDs are failing to start, it's too late to just add an extra OSD with extra capacity...
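
As a rough back-of-the-envelope version of that rule (my own reading of it, not an official formula): the data of one failed host has to fit on the remaining hosts with some margin, so:

Code:
# safe usage ceiling, roughly: 0.90 * (n_hosts - 1) / n_hosts
# 3 hosts: 0.90 * 2/3 ~= 0.60 -> stay below ~60%
# 4 hosts: 0.90 * 3/4 ~= 0.68 -> stay below ~68%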

I look forward to getting more insight from this bug, but from my research it seems it has occurred to other people on NVMe. Since I only found a few posts with this behaviour, and since NVMe is typically the boot drive, this might also be related to BlueStore on LVM.

I feel lucky.
 
