Recent content by ravib123

  1. ceph pool creation missing input validation, how do I fix it?

    Awesome, that worked despite the special characters and cleaned up the GUI. Thanks. Definitely worth putting on your bug list to add some validation to those input boxes for forgetful folks like me :)
  2. ceph pool creation missing input validation, how do I fix it?

    I was making some new ceph pools in the web gui and it doesn't perform validation on the inputs. As it turns out I used some characters that blew the scripts up. The entries seem to only exist in the web gui, but I can't figure out how to remove them because the web gui won't remove them. It...
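    A minimal sketch of cleaning up leftover entries like these from the CLI (the storage id and pool name below are placeholders, not the actual ones from the post):

    ```shell
    # List the pools Ceph itself knows about -- if the broken name is absent
    # here, the leftover entry only lives in the Proxmox storage config.
    ceph osd pool ls

    # Remove the leftover Proxmox storage definition (hypothetical storage id);
    # this edits /etc/pve/storage.cfg for you.
    pvesm remove broken-pool-storage

    # If the pool does exist in Ceph, delete it there too; the name must be
    # given twice as a safety check.
    ceph osd pool delete 'bad$pool' 'bad$pool' --yes-i-really-really-mean-it
    ```

    Quoting the pool name keeps the shell from expanding any special characters that caused the breakage in the first place.
    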
  3. re: crushmap oops

    Little bit of a ceph beginner here. I followed the directions from Sébastien Han and built out a ceph crushmap with HDD and SSD in the same box. There are 8 nodes, each contributing an SSD and an HDD. I only noticed after putting some data on there that I goofed and put a single HDD in the SSD group...
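    The usual decompile/edit/recompile round trip for fixing a misplaced device in the crushmap looks roughly like this (filenames are arbitrary):

    ```shell
    # Export the compiled crushmap and decompile it to editable text.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # Edit crushmap.txt: move the misplaced HDD's "item osd.N ..." line out of
    # the ssd bucket and into the matching hdd bucket.

    # Recompile and inject the corrected map; Ceph rebalances the data.
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin
    ```
    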
  4. TASK ERROR: unable to parse directory volume name proxmox

    So I set up a new cluster and I am moving image files off via USB drive. One reason for this is I am using Ceph, and it appears the best import method is actually to add the disk to a new vm conf and move the disk from local to ceph. I used a USB hard drive for the task, mounted it, and added a...
  5. pveceph createosd makes "partitions" instead of "osd.x"

    So my next steps: remove node7, reformat, add back to the cluster as node7a, dd the first 1G of the drives, pveceph zap the drives, init the drives, and make the OSDs from the CLI or from the GUI.... It took many full removals of everything and repeated attempts. Realistically I didn't do anything different, just a...
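    The remove-and-rejoin part of those steps can be sketched as follows (the IP of the existing cluster member is a placeholder):

    ```shell
    # On a remaining cluster member: drop the dead node from the Proxmox cluster.
    pvecm delnode node7

    # On the freshly reinstalled node (now node7a): join it back to the cluster
    # by pointing at any existing member's address.
    pvecm add 192.0.2.10
    ```
    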
  6. pveceph createosd makes "partitions" instead of "osd.x"

    So I created: /var/lib/ceph/osd/ceph-14 /var/lib/ceph/osd/ceph-15 Then re-ran the steps as described above. It failed to produce different results. Manually mounting those did change the "partitions" markings under Disks to osd.14 and osd.15, but apparently the script fails fully at that...
  7. pveceph createosd makes "partitions" instead of "osd.x"

    I saw your previous post about that; these were fresh disks, but I did the following: dd if=/dev/zero of=/dev/sda bs=1000000000 count=1, dd if=/dev/zero of=/dev/sdb bs=1000000000 count=1, then ceph-disk zap /dev/sda, ceph-disk zap /dev/sdb, then pveceph createosd /dev/sda, pveceph createosd...
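    Laid out as a script, the wipe-and-recreate sequence described there is roughly (device names as in the post; a smaller block size with a higher count writes the same first ~1 GB more reliably):

    ```shell
    # Zero the start of each disk to clear old partition tables and signatures.
    dd if=/dev/zero of=/dev/sda bs=1M count=1024
    dd if=/dev/zero of=/dev/sdb bs=1M count=1024

    # Zap remaining Ceph/GPT metadata (ceph-disk also clears the GPT backup
    # at the end of the disk, which dd on the first 1G does not touch).
    ceph-disk zap /dev/sda
    ceph-disk zap /dev/sdb

    # Create the OSDs (pveceph syntax of that era).
    pveceph createosd /dev/sda
    pveceph createosd /dev/sdb
    ```
    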
  8. pveceph createosd makes "partitions" instead of "osd.x"

    Ok, well what would cause pveceph createosd to make something listed in the GUI as "partition" and not actually create an OSD past osd.14?
  9. Is there a reason to limit the number of monitors? ( unable to find usable monitor id )

    Easy enough, removed half the mons. Now if I can just get more than 14 OSDs to show up....
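    Trimming monitors down to an odd-sized quorum can be sketched as (the mon id is a placeholder; it normally matches the node name):

    ```shell
    # Check current monitor membership and quorum.
    ceph mon stat

    # Remove extra monitors one at a time until an odd count (e.g. 3 or 5)
    # remains (pveceph syntax of that era).
    pveceph destroymon node8
    ```
    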
  10. Is there a reason to limit the number of monitors? ( unable to find usable monitor id )

    The only reason I added more is because there seemed to be a semi-documented issue with adding OSDs to non-mon nodes. In such a case how would you have an even number of nodes?
  11. pveceph createosd makes "partitions" instead of "osd.x"

    Just setting up a new 8 node cluster. Each node offers two OSDs. Looking at this, what I am experiencing is that I seem to be capped at 14 OSDs for the whole cluster. I was curious if this is just a change to Ceph.pm, because I found this line: pg_bits => { description =>...