Ceph OSD Formatting help

lastb0isct

I'm having a difficult time using a disk in my machine. It is not being used for anything; I've successfully done an sgdisk -Z on the drive and it shows that nothing is on it.

Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

#pveceph createosd /dev/sda
command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 429.

device '/dev/sda' is in use
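For reference, pveceph refuses a disk it believes still has something on it (leftover partitions, LVM, or a mounted filesystem). A minimal way to check, assuming /dev/sda really is the spare disk, might look like this:

Code:
# show partitions, filesystem signatures and mountpoints on the disk
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT /dev/sda
# check whether the disk (or a partition on it) is an LVM physical volume
pvs
# list any remaining filesystem/RAID signatures without erasing anything
wipefs -n /dev/sda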
 
I am not a pveceph expert, however something looks wrong, as pveceph is calling a ZFS command [/sbin/zpool].

Could you post pveversion -v?

and

fdisk -l /dev/sda
 
root@c:~# pveversion -v
proxmox-ve: 5.1-35 (running kernel: 4.13.13-4-pve)
pve-manager: 5.1-42 (running version: 5.1-42/724a6cb3)
pve-kernel-4.13.13-4-pve: 4.13.13-35
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-19
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: not correctly installed
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
ceph: 12.2.2-pve1

root@c:~# fdisk -l /dev/sda
Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
Hi,
I think you should change your BIOS disk ordering to get your boot disk as the first disk?!

what is the output of
Code:
zpool status

parted -l
Udo
 

Looks like upon trying to change the partitions and such I blew out my installs =/ whoops. I'll post here when I get them rebuilt.

Essentially what I'm trying to do is use Mac Minis that are in a case which has PCIe expanders and USB extensions. I have a 10GbE card in each machine and have them directly connected to each other via a broadcast network (mesh).
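For context, a broadcast-mode full mesh like that is typically built by bonding the two 10GbE ports on each node in /etc/network/interfaces; the interface names and address below are only placeholders, assuming a dual-port card:

Code:
auto bond0
iface bond0 inet static
        address 10.15.15.51
        netmask 255.255.255.0
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode broadcast
# ens1f0/ens1f1 stand in for the two 10GbE ports, each cabled directly to one of the other nodes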

The drives I'm using in each are:
120 GB SSD via SATA --> USB for OS/boot
240 GB SSD via the onboard connector for data

The reason I'm using SATA --> USB is that I plan to put a 1 TB SSD, in addition to the above, in each machine via the 2nd onboard connector for data.

I know this is an extremely underpowered and non-optimal setup, but I want to test it since this is just a lab environment. I would like to move these to production if it works well...but who knows.

I'll update this thread when I get them rebuilt! Thanks!
 
That 'c:' part of the prompt is what is odd; it looks like DOS drive naming.

Sorry if this is already answered: what is PVE installed on?
the "c" is the hostname of the node. I have 3 nodes:
a.mini
b.mini
c.mini

Only the first letter shows at the command prompt.

Installed on Mac Minis.
 
Here are the commands that I ran so far. I've gotten one re-installed. I will attempt the others now and then install Ceph. I believe that the zpool command can't be found yet because I haven't installed Ceph.

Code:
root@b:~# zpool status
bash: zpool: command not found
root@b:~# parted -l
Model: ATA APPLE SSD TS256C (scsi)
Disk /dev/sda: 251GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  200GB  200GB               primary


Model: SABRENT  (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name  Flags
 1      33.6MB  537MB  503MB  fat32              boot, esp
 2      537MB   805MB  268MB  ext2
 3      805MB   120GB  119GB                     lvm


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/b--vg-root: 111GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  111GB  111GB  ext4


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/b--vg-swap_1: 8502MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  8502MB  8502MB  linux-swap(v1)
 
So, on my A & C machines I was able to createosd just fine:

Code:
root@a:~# fdisk -l
Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3A8CB437-6F3E-4E4E-A834-2A66840AA0D6

Device       Start       End   Sectors   Size Type
/dev/sdb1     2048   1050623   1048576   512M EFI System
/dev/sdb2  1050624   1550335    499712   244M Linux filesystem
/dev/sdb3  1550336 234440703 232890368 111.1G Linux LVM


Disk /dev/mapper/a--vg-swap_1: 15.9 GiB, 17083400192 bytes, 33366016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/a--vg-root: 95.1 GiB, 102152273920 bytes, 199516160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@a:~# pveceph createosd /dev/sda
command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 429.

create OSD on /dev/sda (bluestore)
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sda1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.
root@a:~#
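
As a quick sanity check (not part of the original output), the OSDs created on A and C should now be visible cluster-wide, for example with:

Code:
# list the OSD tree and overall cluster health
ceph osd tree
ceph -s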

On the B machine I'm still getting that error that it's in use.

Code:
root@b:~# parted -l
Model: ATA APPLE SSD TS256C (scsi)
Disk /dev/sda: 251GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      1049kB  200GB  200GB               primary


Model: SABRENT  (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name  Flags
 1      33.6MB  537MB  503MB  fat32              boot, esp
 2      537MB   805MB  268MB  ext2
 3      805MB   120GB  119GB                     lvm


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/b--vg-root: 111GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  111GB  111GB  ext4


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/b--vg-swap_1: 8502MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  8502MB  8502MB  linux-swap(v1)


root@b:~# pveceph createosd /dev/sda
command '/sbin/zpool list -HPLv' failed: open3: exec of /sbin/zpool list -HPLv failed: No such file or directory at /usr/share/perl5/PVE/Tools.pm line 429.

device '/dev/sda' is in use
root@b:~# fdisk -l
Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 70BDE588-B106-467F-B9A7-1AF153E6E753

Device     Start       End   Sectors   Size Type
/dev/sda1   2048 390625279 390623232 186.3G Linux filesystem


Disk /dev/sdb: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: 8CF28891-18B8-41E7-B700-EA79BAACA51F

Device       Start       End   Sectors  Size Type
/dev/sdb1    65535   1048559    983025  480M EFI System
/dev/sdb2  1048560   1572839    524280  256M Linux filesystem
/dev/sdb3  1572840 234418694 232845855  111G Linux LVM

Partition 1 does not start on physical sector boundary.


Disk /dev/mapper/b--vg-swap_1: 7.9 GiB, 8501854208 bytes, 16605184 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/b--vg-root: 103.1 GiB, 110679293952 bytes, 216170496 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Any ideas?!
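
One detail in the fdisk output above is that /dev/sda on B still carries a 186.3G "Linux filesystem" partition (/dev/sda1), which is most likely why pveceph reports the device as in use. A possible way to clear it, sketched here and destructive to everything on /dev/sda, would be:

Code:
# make sure nothing from the old partition is still mounted or mapped
lsblk /dev/sda
# zap the partition table and signatures (Luminous still ships ceph-disk)
ceph-disk zap /dev/sda
# or, roughly equivalent:
sgdisk --zap-all /dev/sda
wipefs -a /dev/sda
partprobe /dev/sda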
 
I got it working completely...I just need help tweaking the write performance. Hopefully it's just my settings.



I'm currently getting these performance numbers:

Code:
root@b:~# rados -p ceph bench 60 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
Object prefix: benchmark_data_b_26978
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        32        16   63.9961        64    0.742284    0.741389
    2      16        48        32   63.9913        64    0.744329    0.738303
    3      16        80        64   85.3206       128    0.732874    0.732742
    4      16        96        80   79.9876        64    0.739055     0.73361
    5      16       112        96   76.7869        64    0.710905    0.730787
    6      16       144       128   85.3186       128    0.709124     0.72821
    7      16       160       144   82.2713        64    0.715961    0.726585
    8      16       176       160   79.9857        64    0.736764    0.727324
    9      16       208       192   85.3183       128    0.710421    0.727327
   10      16       224       208   83.1852        64    0.721571    0.726753
   11      16       240       224   81.4399        64      1.1033    0.753151
   12      16       256       240   79.9856        64    0.766299    0.753224
   13      16       288       272   83.6774       128    0.746685    0.750316
   14      16       304       288    82.271        64    0.708056    0.748651
   15      16       336       320   85.3182       128    0.738596    0.746393
   16      16       352       336   83.9851        64    0.729581    0.745565
   17      16       368       352   82.8086        64     0.72453    0.744487
   18      16       400       384   85.3177       128    0.867381    0.749343
   19      16       416       400   84.1952        64    0.749326    0.749293
2018-01-19 15:53:20.708550 min lat: 0.703258 max lat: 1.1049 avg lat: 0.749212
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
   20      16       432       416   83.1849        64    0.746651    0.749212
   21      16       464       448   85.3178       128     0.71229    0.747226
   22      16       480       464   84.3484        64    0.739092    0.746627
   23      16       496       480   83.4632        64    0.748121     0.74651
   24      16       528       512   85.3178       128    0.722808    0.744759
   25      16       544       528   84.4647        64    0.734928    0.744589
   26      16       560       544   83.6773        64    0.747629    0.744323
   27      16       592       576   85.3183       128    0.715299     0.74294
   28      16       608       592   84.5565        64    0.745908    0.742835
   29      16       640       624   86.0537       128    0.722302    0.741677
   30      16       656       640   85.3185        64    0.854054    0.744586
   31      16       672       656   84.6305        64    0.720943     0.74382
   32      16       704       688   85.9851       128    0.718778    0.742854
   33      16       720       704   85.3186        64    0.711601    0.742019
   34      16       736       720   84.6911        64    0.713723    0.741602
   35      16       768       752   85.9279       128     0.70926    0.740532
   36      16       784       768   85.3186        64    0.703707    0.739824
   37      16       800       784   84.7421        64    0.740217    0.739762
   38      16       832       816   85.8799       128    0.735073    0.742288
   39      16       848       832   85.3183        64    0.729904    0.741968
2018-01-19 15:53:40.711955 min lat: 0.696736 max lat: 1.1049 avg lat: 0.741717
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
   40      16       864       848   84.7851        64    0.732268    0.741717
   41      16       896       880   85.8385       128    0.880577    0.743501
   42      16       912       896   85.3183        64     0.72953    0.742986
   43      16       928       912   84.8222        64    0.753257    0.743072
   44      16       960       944    85.803       128    0.742051    0.742632
   45      16       976       960   85.3183        64    0.740055    0.742515
   46      16       992       976   84.8546        64    0.779094    0.743074
   47      16      1024      1008   85.7721       128    0.712664    0.741781
   48      16      1040      1024   85.3183        64    0.745199    0.741766
   49      16      1072      1056   86.1889       128    0.744478    0.741593
   50      16      1088      1072   85.7446        64    0.727967    0.741474
   51      16      1104      1088   85.3181        64    0.724968    0.741211
   52      16      1120      1104   84.9079        64     0.90392    0.743401
   53      16      1152      1136   85.7206       128    0.751372    0.743221
   54      16      1168      1152   85.3182        64     0.71858    0.742909
   55      16      1200      1184   86.0939       128    0.721085    0.742282
   56      16      1216      1200   85.6992        64    0.738273    0.742133
   57      16      1232      1216   85.3183        64    0.739722    0.742057
   58      16      1264      1248   86.0538       128    0.734007    0.741703
   59      16      1280      1264   85.6798        64    0.721673    0.741487
2018-01-19 15:54:00.715493 min lat: 0.687348 max lat: 1.1049 avg lat: 0.741079
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
   60      16      1297      1281   85.3849        68    0.727328    0.741079
Total time run:         60.246708
Total writes made:      1297
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     86.1126
Stddev Bandwidth:       30.3813
Max bandwidth (MB/sec): 128
Min bandwidth (MB/sec): 64
Average IOPS:           21
Stddev IOPS:            7
Max IOPS:               32
Min IOPS:               16
Average Latency(s):     0.742483
Stddev Latency(s):      0.0600846
Max latency(s):         1.1049
Min latency(s):         0.236145
root@b:~# rados -p ceph bench 60 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       179       163   651.862       652    0.093421   0.0818938
    2      16       358       342   683.876       716    0.066012   0.0911231
    3      16       521       505   673.226       652   0.0274761    0.092056
    4      16       681       665   664.902       640   0.0229427   0.0923282
    5      16       842       826   660.708       644   0.0248527    0.094545
    6      16       973       957   637.916       524    0.231265     0.09789
    7      16      1109      1093   624.492       544   0.0225874    0.100473
    8      16      1268      1252   625.923       636    0.210718    0.100501
Total time run:       8.407530
Total reads made:     1297
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   617.066
Average IOPS:         154
Stddev IOPS:          15
Max IOPS:             179
Min IOPS:             131
Average Latency(s):   0.102411
Max latency(s):       0.81993
Min latency(s):       0.0209651

Does anything look suspicious in my settings? Would those writes be manageable for ~15 LXCs, only a few of which are under heavy utilization?
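
One common follow-up with consumer SSDs under BlueStore is to check the drives' single-job sync-write latency, since that tends to dominate Ceph write performance. A rough fio sketch (the file path is just a placeholder; run it against a scratch file or a spare/comparable disk, not the live OSD) might be:

Code:
# single-threaded 4k sync writes to gauge how the SSD handles O_DIRECT+sync I/O
fio --name=synctest --filename=/root/fio-synctest.bin --size=1G \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based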
 


Just wondering, how are those Sabrent disks working out for you? Are they PCIe or SATA? Are they in a Mac? I have a MacBook Air with a 512 GB drive which I am thinking of upgrading to the 1 TB model, and the Sabrent prices and features look pretty good.
According to their rated TBW, they might also be a good server drive to use inside an Intel NUC since they're 2280.
I'd appreciate any feedback or experience you may have.
Thanks.
 
