gurubert's latest activity

  • gurubert
    gurubert replied to the thread Migrate to a new cluster.
    Yes, this is sufficient.
  • gurubert
    gurubert replied to the thread Migrate to a new cluster.
    AFAIK this is safe. Best would be to remove the VM/CT config file from /etc/pve of the old cluster. You may encounter some issues with the virtual hardware version on the new cluster. VMs (especially Windows) may be picky.
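    For illustration, a minimal sketch of removing a guest from the old cluster once its disks have been copied over (assuming a QEMU VM with the hypothetical VMID 100 whose config lives on the node you are logged into; otherwise look under /etc/pve/nodes/<node>/qemu-server/, and container configs live under /etc/pve/lxc/ instead):
      # on the old cluster, after the disk images have been migrated away:
      mv /etc/pve/qemu-server/100.conf /root/100.conf.bak   # takes the VM out of the old cluster, keeps a backup copy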
  • gurubert
    gurubert replied to the thread Ceph Warnung nach Update.
    As a first step, I would restart the MDS mds.pox.
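    A minimal sketch of that restart, assuming the daemon runs as the usual systemd instance named after the MDS:
      # on the node that hosts the MDS daemon
      systemctl restart ceph-mds@pox.service
      # then watch the filesystem become active again
      ceph mds stat
      ceph -s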
  • gurubert
    gurubert reacted to alexskysilk's post in the thread Ceph - power outage and recovery with Like Like.
    at this point it may be worthwhile to see how your network is set up. Do you want to post the content of your /etc/network/interfaces for your nodes, and describe how they are physically interconnected?
  • gurubert
    gurubert reacted to alexskysilk's post in the thread Ceph - power outage and recovery with Like Like.
    looking at your layout... you are BRAVE. I wouldn't go to production with such a lopsided deployment, and without any room to self-heal. Brave is a... diplomatic word.
  • gurubert
    gurubert reacted to alexskysilk's post in the thread Ceph - power outage and recovery with Like Like.
    Fix that problem first. Why are you running out of memory?
  • gurubert
    This is not unusual in such a small cluster with such a low number of PGs. The CRUSH algorithm just does not have enough pieces to distribute the data evenly. You should increase the number of PGs so that you have at least 100 per OSD.
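    As a rule of thumb, the target PG count per pool is roughly (number of OSDs × 100) / replica size, rounded up to a power of two. A hedged sketch, assuming a hypothetical pool named rbd on 6 OSDs with size 3 ((6 × 100) / 3 = 200, rounded up to 256):
      ceph osd pool set rbd pg_num 256
      # recent Ceph releases adjust pgp_num automatically; on older releases also run:
      ceph osd pool set rbd pgp_num 256
      # alternatively, let the autoscaler pick the value:
      ceph osd pool set rbd pg_autoscale_mode on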
  • gurubert
    gurubert replied to the thread osd crashed.
    I would replace the disk now.
  • gurubert
    gurubert replied to the thread osd crashed.
    Remove this OSD and redeploy it. There may just be a bit error on the disk.
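    A minimal sketch of such a redeploy on Proxmox, assuming the failed daemon is the hypothetical osd.5 on /dev/sdc:
      ceph osd out osd.5
      systemctl stop ceph-osd@5.service
      pveceph osd destroy 5 --cleanup    # remove the OSD and wipe its partitions
      pveceph osd create /dev/sdc        # redeploy on the same (or a replacement) disk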
  • gurubert
    gurubert reacted to wassupluke's post in the thread OSD struggles with Like Like.
    Ditched the drive, subbed in a couple smaller HDDs to scrape by until I get a replacement drive, and everything eventually balanced back out beautifully. Pools now use only one type of drive instead of a mix. Thank you.
  • gurubert
    gurubert reacted to Ernst T.'s post in the thread IP Adresse with Like Like.
    SSH access for root is disabled by default in most distributions! But of course it can be enabled. Which template are you using?
  • gurubert
    gurubert reacted to news's post in the thread IP Adresse with Like Like.
    Well, now we can all see your machine in front of us through our crystal ball, look at the network configurations and get a picture of your dynamic setups. There is no better way to document a Proxmox VE setup. :cool:
  • gurubert
    gurubert reacted to ThoSo's post in the thread IP Adresse with Like Like.
    Hello, where can I get a coffee?
  • gurubert
    gurubert reacted to Straightman's post in the thread Ceph placement group remapping with Like Like.
    Thanks for taking the time to review my questions and provide the additional clarity. I will go back to the drawing board, learn some more, and rethink the approach.
  • gurubert
    gurubert replied to the thread Ceph placement group remapping.
    Erasure coding is not usable in such small clusters. You need at least 10 nodes with enough OSDs to do anything meaningful with erasure coding.
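    For context, an erasure-coded pool needs at least k+m failure domains just to keep its PGs active, plus spare hosts to recover onto. A hedged example of a profile (hypothetical name ec42) that already requires 6 hosts:
      ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
      # with exactly 6 hosts this can run, but a single host failure leaves nowhere to rebuild the missing shards,
      # which is why noticeably more nodes (around 10) are recommended before using EC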
  • gurubert
    gurubert reacted to alexskysilk's post in the thread Ceph placement group remapping with Like Like.
    your config is unworkable. While you didn't provide your actual CRUSH rules, I can already see they can never be satisfied. Consider: you have 3 nodes: node pve2 15.25 TB HDD, 1.83 TB SSD; node pve3 7.27 TB HDD, 0.7 TB SSD; node pve4 0.5 TB HDD, 0.9 TB SSD...
  • gurubert
    gurubert replied to the thread OSD struggles.
    Yes, do not mix two different device classes in one pool. You will only get HDD performance.
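    A minimal sketch of pinning a pool to a single device class via a CRUSH rule (hypothetical rule and pool names):
      ceph osd crush rule create-replicated replicated_ssd default host ssd
      ceph osd pool set rbd crush_rule replicated_ssd    # the pool now places data on SSD OSDs only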
  • gurubert
    gurubert replied to the thread OSD struggles.
    You need to replace the sdb drive.
  • gurubert
    gurubert replied to the thread OSD struggles.
    Are there any signs in the kernel log about a failure on the device of this OSD?
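    For illustration, typical checks, assuming the OSD sits on the hypothetical device /dev/sdb:
      dmesg -T | grep -iE 'sdb|medium error'
      journalctl -k --since "-2 days" | grep -i sdb
      smartctl -a /dev/sdb    # look at reallocated and pending sector counts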
  • gurubert
    gurubert replied to the thread Ceph 2 OSD's down and out.
    Is data affected? Are there any PGs not active+clean?
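    A quick sketch of how to answer that:
      ceph -s              # overall health, degraded/undersized object counts
      ceph health detail
      ceph pg dump_stuck   # lists PGs that are stuck inactive, unclean or stale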