Adding SSDs to improve Ceph performance

breakaway9000

Hi all. We've got a 3-node Ceph cluster with 4 x 6 TB SATA drives that is experiencing very poor I/O write speeds. I do have the WAL and block.db on fast SSDs, but they are too small for our workload.

So I've installed one enterprise SSD (1.2 TB) in each host. First off, I want to create a brand new pool (new OSDs) with JUST these SSDs and run some performance testing on it, just to see what performance an exclusively SSD-backed pool will give us.

I've already got data on the existing pool (see attached); our data resides on a Ceph pool on this cluster.

I want to set up a new pool (just one SSD OSD per system) for testing. I've found this thread and this guide:

https://forum.proxmox.com/threads/multiple-ceph-pools-possible.37905/
https://ceph.com/community/new-luminous-crush-device-classes/

Am I correct in assuming doing this will not cause any issues with the data that's on the pool right now?

 
I want to set up a new pool (just one SSD OSD per system) for testing. I've found this thread and this guide
Have you checked out our documentation?
https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_device_classes

Am I correct in assuming doing this will not cause any issues with the data that's on the pool right now?
You will need to create a separate rule for HDDs first. Otherwise the SSD OSDs will be used by the default rule as well.
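For example, something along these lines (with replicated-hdd being whatever name you prefer for the rule):

Code:
ceph osd crush rule create-replicated replicated-hdd default host hdd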
 
Hi @Alwin, yes, that looks like exactly what I need. I just went over that doc. The doc says:

If the pool already contains objects, all of these have to be moved accordingly. Depending on your setup this may introduce a big performance hit on your cluster. As an alternative, you can create a new pool and move disks separately.

Perfect - a new SSD-only pool is exactly what I'm after. So to make this work I need to create a new rule, like so:

Code:
ceph osd crush rule create-replicated replicated-ssd default-ssd host ssd

That will give me a "default-ssd" root node in my Ceph configuration. Now I need to add the new SSDs in. I will then do a "ceph osd set noout" to prevent the cluster from rebalancing. Then I can add the new SSD OSDs. Finally, I need to set the device class to ssd using "ceph osd crush rm-device-class osd.N" followed by "ceph osd crush set-device-class ssd osd.N". At this point I should see the SSDs under "default-ssd", and should then be able to go to Pools > Create in the web GUI, set up a new pool with a new name, and pick "replicated-ssd" in the "Crush Rule" dropdown.
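Roughly, the commands I have in mind look like this (osd.N stands in for whatever ID each new SSD OSD ends up with):

Code:
ceph osd set noout
ceph osd crush rm-device-class osd.N
ceph osd crush set-device-class ssd osd.N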

Does that sound sane? Thanks.

Edit: Am I correct in assuming the above steps are all that are needed, and no manual editing of the CRUSH map is required?
 
That will give me a "default-ssd" root node in my ceph configuration.
Use default as the root. Otherwise you will need a new CRUSH tree as well.
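I.e., roughly the following, keeping the existing default root:

Code:
ceph osd crush rule create-replicated replicated-ssd default host ssd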

Edit: Am I correct in assuming the above steps are all that are needed, and no manual editing of the CRUSH map is required?
The steps in the documentation will create the device-class-based CRUSH rules. Existing pools then need to be told to use the new rule.
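For example (with <pool> being the existing pool and <rule> the new rule):

Code:
ceph osd pool set <pool> crush_rule <rule>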
 
OK - I think I've got it. I just tested it in my lab. @Alwin, if this looks OK, could you mark it as the solution?

  1. First, create a new Ceph replication rule (the default one is simply called "replicated_rule") and restrict it to HDDs only:
    Code:
    ceph osd crush rule create-replicated replicated-hdd default host hdd
  2. Next, tell your existing pool to use this rule. Ceph will move PGs around automatically (so be careful if you've got a lot of data).
    Note: replace ceph_lab with the name of your existing pool
    Code:
    ceph osd pool set ceph_lab crush_rule replicated-hdd
  3. Set noout (you can do this from the GUI if you wish). The reason for this is to prevent Ceph from rebalancing onto & eating up the new SSD space I'm going to add in the next few steps:
    Code:
    ceph osd set noout
  4. Create a new rule that uses SSDs only
    Code:
    ceph osd crush rule create-replicated replicated-ssd default host ssd
  5. Now add the OSDs. This can be done via the GUI or via the CLI. This needs to be done on each of your hosts.
    Code:
    pveceph osd create /dev/<devicename>
    After adding the first SSD, check that it is correctly classified as ssd. If not, you will have to classify it manually according to the documentation. In my case it isn't picked up as ssd because it's a FusionIO device (a PCIe MLC SSD). Info on how to do this is available here: https://ceph.com/community/new-luminous-crush-device-classes/
  6. Now set up your new device-class rule (this is the same command as in step 4 - skip it if you already created the rule there):
    Code:
    ceph osd crush rule create-replicated replicated-ssd default host ssd
  7. Use the ceph osd crush tree --show-shadow command to verify that all the newly added devices are classified correctly, and that there is a separate root node showing your ssd/nvme devices (see the command sketch after this list).
  8. Finally, go back to Proxmox, go to Ceph > Pools > Create, and in there select the crush rule "replicated-ssd" (or whatever you named your rule). Set the requirements depending on your needs, and tick the 'Add Storage' box so it's ready to roll.
  9. Unset noout.
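The verification from step 7 and the final cleanup from step 9 look roughly like this:

Code:
ceph osd crush tree --show-shadow
ceph osd unset noout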
 
Set noout (you can do this from the gui if you wish). The reason for this is to prevent ceph from rebalancing & eating up the new SSD space I'm going to add in the next few steps:
This should not be needed if all pools are already on the new rule and therefore located on the HDDs only. Otherwise there might either not be enough space on the HDDs (spill-over), or there is still a pool using the old default rule.
 
Hi, so I finally got around to doing this. Unfortunately, performance is not as good as I'd expect it to be, especially in synthetic benchmarks:

This is the new SSD pool (it consists of 4 x 1.2 TiB FusionIO devices that in my testing can easily deliver tens of thousands of IOPS and 500+ MB/s sustained write speeds).
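Something like the following fio run is one way to measure raw sequential writes directly on the device (the path /dev/fioa is just an example, the parameters are illustrative, and this overwrites the device - only run it on a disk with no data on it):

Code:
fio --name=seqwrite --filename=/dev/fioa --rw=write --bs=4M --iodepth=16 --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting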

I've already tested my network as well using iperf - I'm getting 9.5-10 Gbps on it.

Any ideas why this could be?

Code:
$ rados bench -p ceph_ssd 10 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_host1_74617
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        49        33   131.987       132    0.283183    0.391605
    2      16        74        58   115.986       100    0.773622    0.415928
    3      16        97        81   107.987        92     0.41316    0.521707
    4      16       123       107   106.987       104    0.139273    0.535483
    5      16       151       135   107.988       112    0.492034    0.498026
    6      16       175       159   105.988        96    0.161966     0.53469
    7      16       199       183   104.559        96    0.224022    0.531794
    8      16       222       206   102.988        92      1.4028    0.558618
    9      16       239       223   99.0995        68   0.0961683    0.592725
   10      16       257       241   96.3886        72    0.486528    0.611957
Total time run:         10.710376
Total writes made:      258
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     96.3552
Stddev Bandwidth:       18.3255
Max bandwidth (MB/sec): 132
Min bandwidth (MB/sec): 68
Average IOPS:           24
Stddev IOPS:            4
Max IOPS:               33
Min IOPS:               17
Average Latency(s):     0.652833
Stddev Latency(s):      0.629074
Max latency(s):         3.72853
Min latency(s):         0.0817332
Cleaning up (deleting benchmark objects)
Removed 258 objects
Clean up completed and total clean up time :0.297036

And my old HDD pool:

Code:
$ rados bench -p ceph_hdd 10 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_host1_74834
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        38        22   87.9931        88    0.691052    0.431021
    2      16        68        52   103.987       120    0.211935    0.551318
    3      16       100        84   111.987       128    0.180198    0.516553
    4      16       124       108   107.988        96     1.30741    0.528048
    5      16       151       135   107.987       108    0.206332    0.549671
    6      16       179       163   108.654       112    0.426261    0.565325
    7      16       200       184    105.13        84      1.1389    0.559251
    8      16       233       217   108.487       132    0.771014    0.565431
    9      16       265       249   110.653       128    0.415316    0.566697
   10      16       291       275   109.987       104    0.211652    0.561007
Total time run:         10.952879
Total writes made:      292
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     106.639
Stddev Bandwidth:       17.101
Max bandwidth (MB/sec): 132
Min bandwidth (MB/sec): 84
Average IOPS:           26
Stddev IOPS:            4
Max IOPS:               33
Min IOPS:               21
Average Latency(s):     0.584411
Stddev Latency(s):      0.35224
Max latency(s):         1.72016
Min latency(s):         0.0709918
Cleaning up (deleting benchmark objects)
Removed 292 objects
Clean up completed and total clean up time :0.629426
 
And iperf tests on the cluster

Code:
Server listening on TCP port 5001
Binding to local address 10.10.10.5
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 10.10.10.5 port 5001 connected with 10.10.10.1 port 40912
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  10.9 GBytes  9.40 Gbits/sec
[  5] local 10.10.10.5 port 5001 connected with 10.10.10.2 port 35614
[  5]  0.0-10.0 sec  10.9 GBytes  9.35 Gbits/sec
[  4] local 10.10.10.5 port 5001 connected with 10.10.10.3 port 39204
[  4]  0.0-10.0 sec  10.9 GBytes  9.32 Gbits/sec
[  5] local 10.10.10.5 port 5001 connected with 10.10.10.4 port 47558
[  5]  0.0-10.0 sec  10.9 GBytes  9.35 Gbits/sec
 
And my ceph topology:

Code:
# ceph osd dump
epoch 3675
fsid 5524ca13-287b-46aa-a302-9b1853a5fb25
created 2018-03-17 17:03:08.615625
modified 2020-06-08 15:53:13.325398
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 145
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release luminous
pool 9 'ceph_hdd' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512 last_change 3099 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3,9~4,f~15,25~4,2a~4f]
pool 14 'ceph_ssd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 3674 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3]
max_osd 13
osd.0 up   in  weight 1 up_from 2584 up_thru 3667 down_at 2574 last_clean_interval [2319,2573) 172.17.1.50:6801/3484 10.10.10.1:6801/3484 10.10.10.1:6803/3484 172.17.1.50:6803/3484 exists,up 94be3a75-ffdf-42ca-86de-1e0c16f8bf76
osd.1 up   in  weight 1 up_from 2629 up_thru 3667 down_at 2627 last_clean_interval [2578,2626) 172.17.1.50:6804/667125 10.10.10.1:6804/667125 10.10.10.1:6805/667125 172.17.1.50:6805/667125 exists,up 7e0dcd82-a602-43bc-af8e-e3158ae7332a
osd.2 up   in  weight 1 up_from 2580 up_thru 3667 down_at 2574 last_clean_interval [2323,2573) 172.17.1.50:6800/3638 10.10.10.1:6800/3638 10.10.10.1:6802/3638 172.17.1.50:6802/3638 exists,up b93a6236-5e9c-4ed5-a3ad-1e267a4b7bef
osd.3 up   in  weight 1 up_from 3037 up_thru 3667 down_at 3034 last_clean_interval [2675,3028) 172.17.1.52:6801/4111704 10.10.10.3:6804/4111704 10.10.10.3:6805/4111704 172.17.1.52:6805/4111704 exists,up b9003899-49cf-41bf-bfb0-f32a2c3fa1ea
osd.4 up   in  weight 1 up_from 3043 up_thru 3667 down_at 3031 last_clean_interval [2688,3042) 172.17.1.52:6802/4025823 10.10.10.3:6806/5025823 10.10.10.3:6807/5025823 172.17.1.52:6806/5025823 exists,up 8729d950-d123-4a2f-aa30-2c2674a495cc
osd.5 up   in  weight 1 up_from 3043 up_thru 3667 down_at 3029 last_clean_interval [2690,3042) 172.17.1.52:6800/4028847 10.10.10.3:6802/5028847 10.10.10.3:6803/5028847 172.17.1.52:6803/5028847 exists,up e83ca331-3404-4501-b70c-89fe0f62c6d9
osd.6 up   in  weight 1 up_from 2525 up_thru 3667 down_at 2522 last_clean_interval [2473,2521) 172.17.1.54:6800/3668 10.10.10.5:6800/3668 10.10.10.5:6801/3668 172.17.1.54:6801/3668 exists,up 4138a4f0-f450-4f32-8f3c-3d31b068de6b
osd.7 up   in  weight 1 up_from 2605 up_thru 3667 down_at 2603 last_clean_interval [2529,2602) 172.17.1.54:6802/3592433 10.10.10.5:6802/3592433 10.10.10.5:6803/3592433 172.17.1.54:6803/3592433 exists,up eed5435a-a696-4e37-bb4f-31b6e871b486
osd.8 up   in  weight 1 up_from 2527 up_thru 3667 down_at 2522 last_clean_interval [2474,2521) 172.17.1.54:6804/3787 10.10.10.5:6804/3787 10.10.10.5:6805/3787 172.17.1.54:6805/3787 exists,up bc84595f-b8e6-4d9c-8e10-570c9550d041
osd.9 up   in  weight 1 up_from 3637 up_thru 3663 down_at 0 last_clean_interval [0,0) 172.17.1.50:6806/4178391 10.10.10.1:6806/4178391 10.10.10.1:6807/4178391 172.17.1.50:6807/4178391 exists,up 4b22a075-26e9-4eca-96f3-5e46ebcd61ed
osd.10 up   in  weight 1 up_from 3640 up_thru 3667 down_at 0 last_clean_interval [0,0) 172.17.1.51:6800/614354 10.10.10.2:6800/614354 10.10.10.2:6801/614354 172.17.1.51:6801/614354 exists,up 51fcc403-7e5f-4e53-a412-94c44d53e294
osd.11 up   in  weight 1 up_from 3643 up_thru 3657 down_at 0 last_clean_interval [0,0) 172.17.1.53:6800/3404422 10.10.10.4:6800/3404422 10.10.10.4:6801/3404422 172.17.1.53:6801/3404422 exists,up c645e67f-4613-4361-b33b-8fafef3d2f7a
osd.12 up   in  weight 1 up_from 3646 up_thru 3667 down_at 0 last_clean_interval [0,0) 172.17.1.54:6807/1396991 10.10.10.5:6806/1396991 10.10.10.5:6807/1396991 172.17.1.54:6808/1396991 exists,up dd554288-a855-453f-b146-1cabe0b79b9c

Code:
# ceph osd crush tree --show-shadow
ID  CLASS WEIGHT   TYPE NAME
-12   ssd  4.38354 root default~ssd
-11   ssd  1.09589     host host1~ssd
  9   ssd  1.09589         osd.9
-15   ssd  1.09589     host host2~ssd
10   ssd  1.09589         osd.10
-9   ssd        0     host host3~ssd
-18   ssd  1.09589     host host4~ssd
11   ssd  1.09589         osd.11
-10   ssd  1.09589     host host5~ssd
12   ssd  1.09589         osd.12
-8   hdd 49.12193 root default~hdd
-7   hdd 16.37398     host host1~hdd
  0   hdd  5.45799         osd.0
  1   hdd  5.45799         osd.1
  2   hdd  5.45799         osd.2
-14   hdd        0     host host2~hdd
-2   hdd 16.37398     host host3~hdd
  3   hdd  5.45799         osd.3
  4   hdd  5.45799         osd.4
  5   hdd  5.45799         osd.5
-17   hdd        0     host host4~hdd
-4   hdd 16.37398     host host5~hdd
  6   hdd  5.45799         osd.6
  7   hdd  5.45799         osd.7
  8   hdd  5.45799         osd.8
-1       53.50548 root default
-6       17.46986     host host1
  0   hdd  5.45799         osd.0
  1   hdd  5.45799         osd.1
  2   hdd  5.45799         osd.2
  9   ssd  1.09589         osd.9
-13        1.09589     host host2
10   ssd  1.09589         osd.10
-3       16.37398     host host3
  3   hdd  5.45799         osd.3
  4   hdd  5.45799         osd.4
  5   hdd  5.45799         osd.5
-16        1.09589     host host4
11   ssd  1.09589         osd.11
-5       17.46986     host host5
  6   hdd  5.45799         osd.6
  7   hdd  5.45799         osd.7
  8   hdd  5.45799         osd.8
12   ssd  1.09589         osd.12

Crush Map:

Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class ssd
device 10 osd.10 class ssd
device 11 osd.11 class ssd
device 12 osd.12 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host host3 {
        id -3           # do not change unnecessarily
        id -2 class hdd         # do not change unnecessarily
        id -9 class ssd         # do not change unnecessarily
        # weight 16.374
        alg straw2
        hash 0  # rjenkins1
        item osd.3 weight 5.458
        item osd.4 weight 5.458
        item osd.5 weight 5.458
}

host host5 {
        id -5           # do not change unnecessarily
        id -4 class hdd         # do not change unnecessarily
        id -10 class ssd                # do not change unnecessarily
        # weight 17.470
        alg straw2
        hash 0  # rjenkins1
        item osd.6 weight 5.458
        item osd.7 weight 5.458
        item osd.8 weight 5.458
        item osd.12 weight 1.096
}

host host1 {
        id -6           # do not change unnecessarily
        id -7 class hdd         # do not change unnecessarily
        id -11 class ssd                # do not change unnecessarily
        # weight 17.470
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 5.458
        item osd.1 weight 5.458
        item osd.2 weight 5.458
        item osd.9 weight 1.096
}

host host2 {
        id -13          # do not change unnecessarily
        id -14 class hdd                # do not change unnecessarily
        id -15 class ssd                # do not change unnecessarily
        # weight 1.096
        alg straw2
        hash 0  # rjenkins1
        item osd.10 weight 1.096
}

host host4 {
        id -16          # do not change unnecessarily
        id -17 class hdd                # do not change unnecessarily
        id -18 class ssd                # do not change unnecessarily
        # weight 1.096
        alg straw2
        hash 0  # rjenkins1
        item osd.11 weight 1.096
}

root default {
        id -1           # do not change unnecessarily
        id -8 class hdd         # do not change unnecessarily
        id -12 class ssd                # do not change unnecessarily
        # weight 53.505
        alg straw2
        hash 0  # rjenkins1
        item host3 weight 16.374
        item host5 weight 17.470
        item host1 weight 17.470
        item host2 weight 1.096
        item host4 weight 1.096
}

# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule replicated-hdd {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}
rule replicated-ssd {
        id 2
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map
 
Your ceph_ssd pool still uses the default rule, not the replicated-ssd one.
 
Hi Alwin,

Could you please point out how you arrived at that conclusion?

As per post #5, I created the ceph pool with this command

Code:
ceph osd crush rule create-replicated replicated-ssd default host ssd

and then added the SSDs:

Code:
pveceph osd create /dev/<devicename>

Then created a pool:

Code:
ceph osd crush rule create-replicated replicated-ssd default host ssd

And then finally, in the Proxmox interface, I went to Ceph > Pools > Create and added a new pool, selecting replicated-ssd - did I miss any steps?
 
@Alwin - I just ran

Code:
ceph pg ls-by-pool <my pool name>

And I checked the up, primary and acting columns - these all reference the OSDs that host the pool, and they're all our OSDs.

Am I missing something? Looks correct to me
 
Hi Alwin - I see what you mean now. I updated the pool's crush rule by running

Code:
 ceph osd pool set ceph_ssd crush_rule replicated-ssd
as per the documentation.

The output of ceph osd dump now shows that the ssd pool is using rule id 2 - I was expecting this to trigger a rebalance of some sort, but it doesn't look like anything has been "moved" - do I need to do anything else to move my data to this pool? I have just one test VM on this, so I don't mind destroying it and recreating it if I have to.

Code:
epoch 3855
fsid 5526ca13-287b-46ff-a302-9b1853a5fb25
created 2018-03-17 17:03:08.615625
modified 2020-07-02 16:10:04.052387
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 145
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release luminous
pool 9 'ceph_hdd' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512 last_change 3838 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3,9~4,f~15,25~4,2a~6d]
pool 15 'ceph_ssd' replicated size 2 min_size 1 crush_rule 2 object_hash rjenkins pg_num 12 pgp_num 12 last_change 3855 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3]
max_osd 13
osd.0 up   in  weight 1 up_from 2584 up_thru 3853 down_at 2574 last_clean_interval [2319,2573) 172.17.1.50:6801/3484 10.10.10.1:6801/3484 10.10.10.1:6803/3484 172.17.1.50:6803/3484 exists,up 94be3a75-ffdf-42ca-86de-1e0c16f8bf76
osd.1 up   in  weight 1 up_from 3765 up_thru 3853 down_at 3763 last_clean_interval [2629,3762) 172.17.1.50:6804/756211 10.10.10.1:6804/756211 10.10.10.1:6805/756211 172.17.1.50:6805/756211 exists,up 7e0dcd82-a602-43bc-af8e-e3158ae7332a
osd.2 up   in  weight 1 up_from 2580 up_thru 3853 down_at 2574 last_clean_interval [2323,2573) 172.17.1.50:6800/3638 10.10.10.1:6800/3638 10.10.10.1:6802/3638 172.17.1.50:6802/3638 exists,up b93a6236-5e9c-4ed5-a3ad-1e267a4b7bef
osd.3 up   in  weight 1 up_from 3853 up_thru 3853 down_at 3851 last_clean_interval [3849,3850) 172.17.1.52:6803/1396176 10.10.10.3:6802/1396176 10.10.10.3:6803/1396176 172.17.1.52:6804/1396176 exists,up b9003899-49cf-41bf-bfb0-f32a2c3fa1ea
osd.4 up   in  weight 1 up_from 3841 up_thru 3841 down_at 3839 last_clean_interval [3823,3838) 172.17.1.52:6801/2381162 10.10.10.3:6804/2381162 10.10.10.3:6805/2381162 172.17.1.52:6805/2381162 exists,up 8729d950-d123-4a2f-aa30-2c2674a495cc
osd.5 up   in  weight 1 up_from 3845 up_thru 3845 down_at 3843 last_clean_interval [3831,3842) 172.17.1.52:6800/1404023 10.10.10.3:6800/1404023 10.10.10.3:6801/1404023 172.17.1.52:6802/1404023 exists,up e83ca331-3404-4501-b70c-89fe0f62c6d9
osd.6 up   in  weight 1 up_from 2525 up_thru 3853 down_at 2522 last_clean_interval [2473,2521) 172.17.1.54:6800/3668 10.10.10.5:6800/3668 10.10.10.5:6801/3668 172.17.1.54:6801/3668 exists,up 4138a4f0-f450-4f32-8f3c-3d31b068de6b
osd.7 up   in  weight 1 up_from 2605 up_thru 3853 down_at 2603 last_clean_interval [2529,2602) 172.17.1.54:6802/3592433 10.10.10.5:6802/3592433 10.10.10.5:6803/3592433 172.17.1.54:6803/3592433 exists,up eed5435a-a696-4e37-bb4f-31b6e871b486
osd.8 up   in  weight 1 up_from 2527 up_thru 3853 down_at 2522 last_clean_interval [2474,2521) 172.17.1.54:6804/3787 10.10.10.5:6804/3787 10.10.10.5:6805/3787 172.17.1.54:6805/3787 exists,up bc84595f-b8e6-4d9c-8e10-570c9550d041
osd.9 up   in  weight 1 up_from 3637 up_thru 3690 down_at 0 last_clean_interval [0,0) 172.17.1.50:6806/4178391 10.10.10.1:6806/4178391 10.10.10.1:6807/4178391 172.17.1.50:6807/4178391 exists,up 4b22a075-26e9-4eca-96f3-5e46ebcd61ed
osd.10 up   in  weight 1 up_from 3640 up_thru 3690 down_at 0 last_clean_interval [0,0) 172.17.1.51:6800/614354 10.10.10.2:6800/614354 10.10.10.2:6801/614354 172.17.1.51:6801/614354 exists,up 51fcc403-7e5f-4e53-a412-94c44d53e294
osd.11 up   in  weight 1 up_from 3643 up_thru 3690 down_at 0 last_clean_interval [0,0) 172.17.1.53:6800/3404422 10.10.10.4:6800/3404422 10.10.10.4:6801/3404422 172.17.1.53:6801/3404422 exists,up c645e67f-4613-4361-b33b-8fafef3d2f7a
osd.12 up   in  weight 1 up_from 3646 up_thru 3690 down_at 0 last_clean_interval [0,0) 172.17.1.54:6807/1396991 10.10.10.5:6806/1396991 10.10.10.5:6807/1396991 172.17.1.54:6808/1396991 exists,up dd554288-a855-453f-b146-1cabe0b79b9c
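For completeness, here's what I'm using to double-check the pool's rule and watch for any data movement (ceph -s will show remapped/backfilling PGs if objects are being moved):

Code:
ceph osd pool get ceph_ssd crush_rule
ceph pg ls-by-pool ceph_ssd
ceph -s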
 
