Ceph: unable to create VM - TASK ERROR: unable to create VM 100 - rbd error: 'storage-CEPH-Pool-1'-locked command timed out - aborting

unsichtbarre
Oct 1, 2024
Noob here, any help appreciated.

I can't seem to get past this:

Code:
root@pve101:/etc/apt/sources.list.d# ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS
 0    ssd  1.09160   1.00000  1.1 TiB   27 MiB  784 KiB   1 KiB   26 MiB  1.1 TiB  0.00  2.92    3      up
 1    ssd  1.09160   1.00000  1.1 TiB   27 MiB  784 KiB   1 KiB   26 MiB  1.1 TiB  0.00  2.92    4      up
 2    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88   10      up
 3    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88   11      up
 4    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88    9      up
 5    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88   10      up
 6    ssd  3.63869   1.00000  3.6 TiB   28 MiB  1.6 MiB   1 KiB   26 MiB  3.6 TiB     0  0.90   17      up
 7    ssd  3.63869   1.00000  3.6 TiB   28 MiB  1.6 MiB   1 KiB   26 MiB  3.6 TiB     0  0.90   16      up
 8    ssd  3.63869   1.00000  3.6 TiB   28 MiB  1.6 MiB   1 KiB   26 MiB  3.6 TiB     0  0.90   19      up
 9    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88    8      up
10    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88   14      up
11    ssd  3.63869   1.00000  3.6 TiB   27 MiB  784 KiB   1 KiB   26 MiB  3.6 TiB     0  0.88   10      up
                       TOTAL   39 TiB  328 MiB   12 MiB  19 KiB  316 MiB   39 TiB     0
MIN/MAX VAR: 0.88/2.92  STDDEV: 0
root@pve101:/etc/apt/sources.list.d# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         38.57007  root default
-3         38.57007      host pve101
 0    ssd   1.09160          osd.0        up   1.00000  1.00000
 1    ssd   1.09160          osd.1        up   1.00000  1.00000
 2    ssd   3.63869          osd.2        up   1.00000  1.00000
 3    ssd   3.63869          osd.3        up   1.00000  1.00000
 4    ssd   3.63869          osd.4        up   1.00000  1.00000
 5    ssd   3.63869          osd.5        up   1.00000  1.00000
 6    ssd   3.63869          osd.6        up   1.00000  1.00000
 7    ssd   3.63869          osd.7        up   1.00000  1.00000
 8    ssd   3.63869          osd.8        up   1.00000  1.00000
 9    ssd   3.63869          osd.9        up   1.00000  1.00000
10    ssd   3.63869          osd.10       up   1.00000  1.00000
11    ssd   3.63869          osd.11       up   1.00000  1.00000
root@pve101:/etc/apt/sources.list.d# ceph -s
  cluster:
    id:     1101c540-2741-48d9-b64d-189700d0b84f
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive
            Degraded data redundancy: 256 pgs undersized

  services:
    mon: 3 daemons, quorum pve101,pve102,pve103 (age 38m)
    mgr: pve101(active, since 58m), standbys: pve102, pve103
    osd: 12 osds: 12 up (since 31m), 12 in (since 31m); 1 remapped pgs

  data:
    pools:   2 pools, 257 pgs
    objects: 2 objects, 833 KiB
    usage:   329 MiB used, 39 TiB / 39 TiB avail
    pgs:     99.611% pgs not active
             4/6 objects misplaced (66.667%)
             256 undersized+peered
             1   active+clean+remapped

  progress:
    Global Recovery Event (0s)
      [............................]

root@pve101:/etc/apt/sources.list.d#

Code:
HEALTH_WARN: Reduced data availability: 256 pgs inactive
pg 2.cd is stuck inactive for 33m, current state undersized+peered, last acting [2]
pg 2.ce is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.cf is stuck inactive for 33m, current state undersized+peered, last acting [1]
pg 2.d0 is stuck inactive for 33m, current state undersized+peered, last acting [8]
pg 2.d1 is stuck inactive for 33m, current state undersized+peered, last acting [5]
pg 2.d2 is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.d3 is stuck inactive for 33m, current state undersized+peered, last acting [9]
pg 2.d4 is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.d5 is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.d6 is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.d7 is stuck inactive for 33m, current state undersized+peered, last acting [4]
pg 2.d8 is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.d9 is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.da is stuck inactive for 33m, current state undersized+peered, last acting [11]
pg 2.db is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.dc is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.dd is stuck inactive for 33m, current state undersized+peered, last acting [9]
pg 2.de is stuck inactive for 33m, current state undersized+peered, last acting [2]
pg 2.df is stuck inactive for 33m, current state undersized+peered, last acting [2]
pg 2.e0 is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.e1 is stuck inactive for 33m, current state undersized+peered, last acting [8]
pg 2.e2 is stuck inactive for 33m, current state undersized+peered, last acting [4]
pg 2.e3 is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.e4 is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.e5 is stuck inactive for 33m, current state undersized+peered, last acting [8]
pg 2.e6 is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.e7 is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.e8 is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.e9 is stuck inactive for 33m, current state undersized+peered, last acting [11]
pg 2.ea is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.eb is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.ec is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.ed is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.ee is stuck inactive for 33m, current state undersized+peered, last acting [2]
pg 2.ef is stuck inactive for 33m, current state undersized+peered, last acting [6]
pg 2.f0 is stuck inactive for 33m, current state undersized+peered, last acting [8]
pg 2.f1 is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.f2 is stuck inactive for 33m, current state undersized+peered, last acting [3]
pg 2.f3 is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.f4 is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.f5 is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.f6 is stuck inactive for 33m, current state undersized+peered, last acting [2]
pg 2.f7 is stuck inactive for 33m, current state undersized+peered, last acting [2]
pg 2.f8 is stuck inactive for 33m, current state undersized+peered, last acting [10]
pg 2.f9 is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.fa is stuck inactive for 33m, current state undersized+peered, last acting [5]
pg 2.fb is stuck inactive for 33m, current state undersized+peered, last acting [7]
pg 2.fc is stuck inactive for 33m, current state undersized+peered, last acting [11]
pg 2.fd is stuck inactive for 33m, current state undersized+peered, last acting [0]
pg 2.fe is stuck inactive for 33m, current state undersized+peered, last acting [0]
pg 2.ff is stuck inactive for 33m, current state undersized+peered, last acting [2]

Code:
HEALTH_WARN: Degraded data redundancy: 256 pgs undersized
pg 2.cd is stuck undersized for 9m, current state undersized+peered, last acting [2]
pg 2.ce is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.cf is stuck undersized for 9m, current state undersized+peered, last acting [1]
pg 2.d0 is stuck undersized for 9m, current state undersized+peered, last acting [8]
pg 2.d1 is stuck undersized for 9m, current state undersized+peered, last acting [5]
pg 2.d2 is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.d3 is stuck undersized for 9m, current state undersized+peered, last acting [9]
pg 2.d4 is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.d5 is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.d6 is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.d7 is stuck undersized for 9m, current state undersized+peered, last acting [4]
pg 2.d8 is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.d9 is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.da is stuck undersized for 9m, current state undersized+peered, last acting [11]
pg 2.db is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.dc is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.dd is stuck undersized for 9m, current state undersized+peered, last acting [9]
pg 2.de is stuck undersized for 9m, current state undersized+peered, last acting [2]
pg 2.df is stuck undersized for 9m, current state undersized+peered, last acting [2]
pg 2.e0 is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.e1 is stuck undersized for 9m, current state undersized+peered, last acting [8]
pg 2.e2 is stuck undersized for 9m, current state undersized+peered, last acting [4]
pg 2.e3 is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.e4 is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.e5 is stuck undersized for 9m, current state undersized+peered, last acting [8]
pg 2.e6 is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.e7 is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.e8 is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.e9 is stuck undersized for 9m, current state undersized+peered, last acting [11]
pg 2.ea is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.eb is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.ec is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.ed is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.ee is stuck undersized for 9m, current state undersized+peered, last acting [2]
pg 2.ef is stuck undersized for 9m, current state undersized+peered, last acting [6]
pg 2.f0 is stuck undersized for 9m, current state undersized+peered, last acting [8]
pg 2.f1 is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.f2 is stuck undersized for 9m, current state undersized+peered, last acting [3]
pg 2.f3 is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.f4 is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.f5 is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.f6 is stuck undersized for 9m, current state undersized+peered, last acting [2]
pg 2.f7 is stuck undersized for 9m, current state undersized+peered, last acting [2]
pg 2.f8 is stuck undersized for 9m, current state undersized+peered, last acting [10]
pg 2.f9 is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.fa is stuck undersized for 9m, current state undersized+peered, last acting [5]
pg 2.fb is stuck undersized for 9m, current state undersized+peered, last acting [7]
pg 2.fc is stuck undersized for 9m, current state undersized+peered, last acting [11]
pg 2.fd is stuck undersized for 9m, current state undersized+peered, last acting [0]
pg 2.fe is stuck undersized for 9m, current state undersized+peered, last acting [0]
pg 2.ff is stuck undersized for 9m, current state undersized+peered, last acting [2]
 
Your placement groups are not active, meaning no data transfer (read or write) can take place.

You seem to have only one host (pve101) with OSDs. With the default replication size of 3 and the default failure domain "host", Ceph is unable to place the second and third copy. You need to add at least two more hosts to the Ceph cluster with an appropriate amount of OSDs.
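For example (a quick check, assuming the pool from the error message is named storage-CEPH-Pool-1 and it uses the default CRUSH rule replicated_rule), you can confirm the replica count and the failure domain like this:

Code:
ceph osd pool get storage-CEPH-Pool-1 size        # configured number of replicas (default 3)
ceph osd pool get storage-CEPH-Pool-1 min_size    # minimum replicas required before I/O is allowed
ceph osd crush rule dump replicated_rule          # the chooseleaf step with "type": "host" means one copy per host

With size 3, min_size 2 and only one host holding OSDs, at most one copy of each PG can be placed, so the PGs stay undersized+peered and never go active.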
 
Thank you for your reply, @gurubert! Can you tell me which part of the data clued you in to that fact? (Trying to learn/understand.)

-JB
 
Self-answered: the earlier `ceph osd tree` output only showed a single host bucket (pve101) under the root. After adding OSDs on the other two nodes, all three hosts show up:
Code:
root@pve101:~# ceph osd tree
ID  CLASS  WEIGHT     TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         115.71021  root default
-3          38.57007      host pve101
 0    ssd    1.09160          osd.0        up   1.00000  1.00000
 1    ssd    1.09160          osd.1        up   1.00000  1.00000
 2    ssd    3.63869          osd.2        up   1.00000  1.00000
 3    ssd    3.63869          osd.3        up   1.00000  1.00000
 4    ssd    3.63869          osd.4        up   1.00000  1.00000
 5    ssd    3.63869          osd.5        up   1.00000  1.00000
 6    ssd    3.63869          osd.6        up   1.00000  1.00000
 7    ssd    3.63869          osd.7        up   1.00000  1.00000
 8    ssd    3.63869          osd.8        up   1.00000  1.00000
 9    ssd    3.63869          osd.9        up   1.00000  1.00000
10    ssd    3.63869          osd.10       up   1.00000  1.00000
11    ssd    3.63869          osd.11       up   1.00000  1.00000
-5          38.57007      host pve102
12    ssd    1.09160          osd.12       up   1.00000  1.00000
13    ssd    1.09160          osd.13       up   1.00000  1.00000
14    ssd    3.63869          osd.14       up   1.00000  1.00000
15    ssd    3.63869          osd.15       up   1.00000  1.00000
16    ssd    3.63869          osd.16       up   1.00000  1.00000
17    ssd    3.63869          osd.17       up   1.00000  1.00000
18    ssd    3.63869          osd.18       up   1.00000  1.00000
19    ssd    3.63869          osd.19       up   1.00000  1.00000
20    ssd    3.63869          osd.20       up   1.00000  1.00000
21    ssd    3.63869          osd.21       up   1.00000  1.00000
22    ssd    3.63869          osd.22       up   1.00000  1.00000
23    ssd    3.63869          osd.23       up   1.00000  1.00000
-7          38.57007      host pve103
24    ssd    1.09160          osd.24       up   1.00000  1.00000
25    ssd    1.09160          osd.25       up   1.00000  1.00000
26    ssd    3.63869          osd.26       up   1.00000  1.00000
27    ssd    3.63869          osd.27       up   1.00000  1.00000
28    ssd    3.63869          osd.28       up   1.00000  1.00000
29    ssd    3.63869          osd.29       up   1.00000  1.00000
30    ssd    3.63869          osd.30       up   1.00000  1.00000
31    ssd    3.63869          osd.31       up   1.00000  1.00000
32    ssd    3.63869          osd.32       up   1.00000  1.00000
33    ssd    3.63869          osd.33       up   1.00000  1.00000
34    ssd    3.63869          osd.34       up   1.00000  1.00000
35    ssd    3.63869          osd.35       up   1.00000  1.00000
root@pve101:~#
 
Yes, I added OSDs on the other hosts, just like I should have in the beginning! I misinterpreted the results and thought I was adding all of the /dev/sda (etc.) devices across the cluster, when OSDs actually have to be created per host.
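For anyone following along, the per-node workflow looks roughly like this (a sketch, assuming the unused data disks show up as /dev/sdb, /dev/sdc, ... on each node):

Code:
# run on each node (pve101, pve102, pve103), once per unused data disk
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
# afterwards every node should appear as its own host bucket
ceph osd tree

With all 36 OSDs up and in, the cluster is healthy again: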
Code:
root@pve101:~# ceph -s
  cluster:
    id:     1101c540-2741-48d9-b64d-189700d0b84f
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve101,pve102,pve103 (age 38s)
    mgr: pve103(active, since 14h), standbys: pve102, pve101
    osd: 36 osds: 36 up (since 36m), 36 in (since 36m)

  data:
    pools:   2 pools, 33 pgs
    objects: 2.56k objects, 9.9 GiB
    usage:   33 GiB used, 116 TiB / 116 TiB avail
    pgs:     33 active+clean

  io:
    client:   11 KiB/s wr, 0 op/s rd, 1 op/s wr

root@pve101:~#
THX,
-JB
 