Search results

  1.

    Updating from 6 to 7 possible issue

    Ok - thx for your reply - will ignore it then...
  2.

    Updating from 6 to 7 possible issue

    Hi Fabian, no pveupgrade entry in the current history or syslog - but the original cluster setup was 5.2 or something and it has been updated since then. Should this be cleaned up? Are there any recommendations?
  3.

    Updating from 6 to 7 possible issue

    Hi Fabian, I used the standard procedure, basically following the wiki document: apt update; apt dist-upgrade. Cleaning up my update documentation, I'm quite sure that it was the same in all 3 cases, not just once as I wrote before.
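
    For context, a minimal sketch of the wiki upgrade procedure referenced above, assuming the no-subscription repository; the exact repository files and lines depend on the local setup (and a hyper-converged node also needs its Ceph repository switched), so treat this as an illustration rather than the poster's exact steps.

      # run the checker script shipped with PVE 6.4 before upgrading
      pve6to7 --full

      # switch the Debian and Proxmox repositories from buster to bullseye
      # (no-subscription repository shown as an example)
      sed -i 's/buster/bullseye/g' /etc/apt/sources.list
      echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
          > /etc/apt/sources.list.d/pve-install-repo.list

      # the upgrade itself, as quoted in the post
      apt update
      apt dist-upgrade
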
  4.

    Updating from 6 to 7 possible issue

    Yes, currently updating 3 nodes, I faced exactly the same scenario once - but as you said, this did not seem to cause any issues.
  5.

    pve update6to7 issue: changing MACs on bond and slaves causes ceph-cluster to degrade

    Hello Forum, below I describe the severe degradation of our 3-node hyper-converged meshed ceph-cluster (based on 15.2.13) that happened after the update6to7, and how we were able to resolve it. After updating and rebooting, the slave NICs (ens2 and ens3) of our meshed bond0 showed newly assigned...
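
    Not quoted from the thread, but a common way to avoid MAC churn on a bond after such an upgrade is to pin the bond's MAC address in /etc/network/interfaces. The interface names and the .11 address come from this thread; the bond mode, netmask and the MAC value below are placeholders, and whether this matches the poster's eventual fix is an assumption.

      auto bond0
      iface bond0 inet static
          address 192.168.228.11/24
          bond-slaves ens2 ens3
          bond-mode broadcast
          # pin the MAC so it stays stable across reboots/upgrades
          # (placeholder value - use the primary NIC's real address)
          hwaddress aa:bb:cc:dd:ee:ff
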
  6.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Ok, I'll double-check before I continue... Thank you very much for your substantial support - very helpful :)
  7.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Ok - after a reboot, which did not change anything, I checked the network connections and now it is obvious: the updated node has lost connection to the other nodes in the meshed ceph network (public network), because the bond is down (while the NICs are up): 9: bond0...
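
    Two standard diagnostics (not from the post) that make this state easy to spot:

      # one-line state overview of the bond and its slave NICs
      ip -br link show | grep -E 'bond0|ens2|ens3'

      # bonding driver view: mode, MII status and which NICs are enslaved
      cat /proc/net/bonding/bond0
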
  8.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Thanks for the fast reply! No, I have to reboot since we are using the classical network/interfaces config. Will do so now and report...
  9.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Ok, it makes sense, because the ceph manager on the updated node does not show an IP - since it is a client in the meshed ceph network it should have 192.168.228.11 like the other nodes (.12 and .13) - and the network tab reports the following pending changes: --- /etc/network/interfaces...
  10.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Nice, wasn't aware of the code tag - thx! root@amcvh12:~# systemctl status ceph-mgr.target ● ceph-mgr.target - ceph target allowing to start/stop all ceph-mgr@.ser Loaded: loaded (/lib/systemd/system/ceph-mgr.target; enabled; vendor Active: active since Thu 2021-07-22 16:02:21 CEST; 5...
  11.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Done - but it did not change anything - ceph on this updated node is somehow stuck - the ceph -s command on this node does not finish, the cursor is just blinking. The output from the second node does not show any difference after executing the restart. root@amcvh12:~# ceph -s cluster: id...
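
    As a side note (not from the post): a ceph -s that never returns usually means no monitor is reachable from that node, so bounding the call with a connect timeout and checking the local mon/mgr units fails fast instead of hanging; the unit names below use the hostname visible in the thread and are otherwise an assumption.

      # fail after 10 seconds instead of blocking when no monitor answers
      ceph -s --connect-timeout 10

      # check the monitor and manager daemons on the stuck node
      systemctl status ceph-mon@amcvh12.service ceph-mgr@amcvh12.service
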
  12.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Yes : root@amcvh11:~# systemctl start ceph-osd@14.service root@amcvh11:~# systemctl status ceph-osd@14.service ● ceph-osd@14.service - Ceph object storage daemon osd.14 Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtim> Drop-In...
  13.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Hello ph0x - thanks for your support! Output is: # systemctl status ceph-osd@14.service ● ceph-osd@14.service - Ceph object storage daemon osd.14 Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtim> Drop-In: /lib/systemd/system/ceph-osd@.service.d...
  14.

    pve6to7 update repository confusion caused ceph to go down on updated node

    Hi Forum, we run a hyper-converged 3-node pve-cluster, still in eval mode (latest 6.4 pve and ceph 15.2.13). I started a pve6to7 update on the first node of a healthy ceph-cluster with a faulty repository configuration, which I can't exactly recall anymore. In addition I set the noout flag for...
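
    For reference, the noout flag mentioned here is set and cleared with the standard Ceph commands; this is the generic way to do it, not necessarily the exact invocation used in the thread.

      # keep OSDs from being marked out while a node is upgraded/rebooted
      ceph osd set noout

      # clear the flag once the node and its OSDs are back up
      ceph osd unset noout
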
  15.

    New to pve, trying to delete ceph pool

    This depends on your configuration. Please read the following document: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster To better understand the concepts, you may refer to Mastering Proxmox - Third Edition: Build virtualized environments using the Proxmox VE hypervisor, 2017...
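
    To make the answer above concrete, a minimal sketch of the two usual ways to remove a pool on a recent hyper-converged PVE node; "mypool" is a placeholder, and deleting a pool irreversibly destroys its data.

      # via the Proxmox VE tooling
      pveceph pool destroy mypool

      # or directly with Ceph (the monitors must allow pool deletion,
      # i.e. mon_allow_pool_delete must be set to true)
      ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
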
  16.

    various ceph-octopus issues

    Hi Alvin, I do, because it caused a lot of trouble (in total about 100 GByte of syslog entries on all 3 nodes and a little less in ceph.log, daemon.log and some osd.logs within a few days) and the sources were OSDs flagged down & out (see debug attachment) which - correct me if I'm wrong - should be...
  17.

    various ceph-octopus issues

    If you think that the logging problem is relevant
  18.

    various ceph-octopus issues

    You asked: Are these messages observed on other nodes as well? These messages were only connected to one OSD (please check the attachment) and logged in the syslog of the hosting node only. You asked: Are all the VM/CT running? Nothing was active - as mentioned, I tried to get rid of all VMs (no...
  19.

    New to pve, trying to delete ceph pool

    Reference: https://www.mankier.com/8/rbd - ls [-l | --long] [pool-name]: will list all rbd images listed in the rbd_directory object. With -l, also show snapshots, and use longer-format output including size, parent (if clone), format, etc...
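
    A quick usage example for the command described in that man page excerpt ("mypool" is a placeholder pool name):

      # list all RBD images in a pool; -l adds size, parent and format details
      rbd ls -l mypool
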