Search results

  1. Ceph OSDs marked out, but still rebalance when I remove them.

    Hey aarcane, iirc you'd better set the CRUSH weight of the "leaving" OSD to zero, so that the weight of the host is altered as well. Otherwise the second rebalance occurs because the host weight changes after destroying the OSD. Setting the OSD out on its own does not alter the host weight (iirc). (See the sketch after this list.)
  2. Ceph is not configured to be really HA

    But Ceph is not so simple that you can just count the number of hosts or the like, and I doubt that any source says "odd number of servers" for Ceph. But that's all iirc.
  3. Proxmox not listening on default port

    Maybe a mismatch between IPv4 and IPv6 use. Configured IPv6 and now trying via IPv4? I found: https://forum.proxmox.com/threads/web-interface-ipv6-only.44101/
  4. [SOLVED] going for max speed with proxmox 7; how to do it?

    You can add the path. root@pve13:~# man pveperf PVEPERF(1) Proxmox VE Documentation PVEPERF(1) NAME...
  5. VM 10GBit NIC working slow like 1GBit

    Hey Patrick, maybe misconfigured offloading features hurt the performance. I have no details here, but I often read about TSO / LRO / checksum offloading. Maybe it's better to turn it off in the VM, iirc with ethtool -K or something. (See the sketch after this list.)
  6. PBS backing up to RBD ceph volume- Alternatives?

    Hey, I don't think the Ceph people will miss this, but given that it's sometimes good and sometimes bad: have you checked that no scrubbing is active during the bad times? (See the sketch after this list.)
  7. Can not re-add OSD after destroy it

    Hey David, I think you have to "zap" your disk, but that's a guess only. Maybe this is the solution for you, but please double-check the device, because the command does what it says: it destroys! ceph-volume lvm zap --destroy /dev/sdb
  8. Replacing all OSDs

    Hello Thoe, if you have enough time and spare capacity, that is a stress-free approach, roughly as you described. What we do: 1. Set the OSD to out, not to stop. Then the Ceph cluster reorganizes itself (rebalance). (Only works with enough free capacity.) If you... (See the sketch after this list.)
  9. Problem with volblocksize

    Hi, in the GUI at cluster level, in the storage configuration, you can set the block size of your ZFS pool. This size is chosen for new "disks" as in your example. If the backup does not fit, it is often because the block size is too big. Try 4k, but be warned that this may lead to bad performance. (iirc... (See the sketch after this list.)
  10. [SOLVED] Host OOM killing containers and KVM, but plenty of RAM available

    Hi, I can't give you technical details, but maybe it has to do with fragmentation. As I understand it, not only the total amount of free RAM matters, but also free RAM of the right category and in the right chunk sizes. Jan 20 04:13:50 proxmox-1 kernel: [86074.425514] Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB... (See the sketch after this list.)
  11. No previous backup found, cannot do incremental backup

    I have not checked all details of this thread, so take this as a silly question/hint only: do the source and the target have the same timezone configured? I saw a 1 hour difference between the timestamps. And yesterday I had a weird problem with ceph/osd (wholly different topic, I know) due to different times. (reason... (See the sketch after this list.)
  12. [SOLVED] Startup problems after upgrading to 6.3

    To be honest, it's "rtfm" that works. Please mark the thread as solved.
  13. [SOLVED] Startup problems after upgrading to 6.3

    Recovery If you have major problems with your Proxmox VE host, e.g. hardware issues, it could be helpful to just copy the pmxcfs database file /var/lib/pve-cluster/config.db and move it to a new Proxmox VE host. On the new host (with nothing running), you need to stop the pve-cluster service...
  14. zvol volblocksize doubts

    Hello Héctor, if you have enough storage, you can change the volblocksize parameter in the storage configuration of the cluster and move the disk away and back. Then the volblocksize of the new zvol reflects the changed value. As for the question of the best values, I think you have to benchmark... (See the sketch after this list.)
  15. HEALTH_WARN 1 daemons have recently crashed

    Today I used on the command line: ceph crash ls. That gives a list of archived and new crashes. Then you can archive your crash: ceph crash archive 2020-10-29_03:47:12.641232Z_843e3d9d-bc56-46dc-8175-9026fa7f44a4 (you must replace the ID with your values, of course). (sorry, did not see that you... (See the sketch after this list.)
  16. how to identify OSD / physical drive

    ceph-volume lvm list may also help. It depends on the version of Ceph and your concrete usage. (See the sketch after this list.)
  17. [SOLVED] Yet another bridge issue

    As I understand it, you want the VMs to use the bridge, so you must ping from the VMs. Proxmox itself doesn't "use" the bridge, in my opinion. Regarding sysctl.conf: you must look yourself, it was only a hint. I do not know the right values for your situation.
  18. [SOLVED] Yet another bridge issue

    If it comes to ARP "problems" with multiple NICs on Linux, please google: net.ipv4.conf.all.arp_ignore, net.ipv4.conf.all.arp_announce, net.ipv4.conf.all.arp_filter. I don't know whether it is still an issue with modern kernels, but I think so. (See the sketch after this list.)
  19. [SOLVED] Yet another bridge issue

    I think you don't want/need an IP on the second bridge, vmbr1. And if you want an IP, then probably not in the same subnet.
  20. Better OSD balancing

    Gut feeling: I would increase the number of PGs. 1024 does match the recommendations regarding the number of OSDs, but I believe that exactly the phenomenon you are observing can happen when pg_num is too small. (See the sketch after this list.)
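
Command sketches

For result 1, a minimal sketch of the suggested approach, assuming the leaving OSD is osd.7 (placeholder ID):

    # set the CRUSH weight of the leaving OSD to zero so the host weight shrinks immediately
    ceph osd crush reweight osd.7 0
    # wait until the rebalance is finished, then mark it out and remove it
    ceph osd out osd.7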
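
For result 5, a sketch of checking and disabling offloading inside the VM with ethtool; the interface name ens18 is an assumption:

    # show the current offload settings
    ethtool -k ens18
    # turn off tso / gso / lro and checksum offloading (test one feature at a time)
    ethtool -K ens18 tso off gso off lro off rx off tx off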
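
For result 6, one way to check whether scrubbing is active during the bad times:

    # scrubbing PGs show up in the cluster status and in the PG states
    ceph -s
    ceph pg dump pgs_brief 2>/dev/null | grep -i scrub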
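
For result 8, step 1 from that reply as commands, with osd.3 as a placeholder ID:

    # mark the OSD out (do not stop it); Ceph rebalances its data onto the remaining OSDs
    ceph osd out osd.3
    # watch the recovery/rebalance progress
    ceph -s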
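
For result 9, a CLI alternative to the GUI setting; local-zfs is a placeholder storage name, and using --blocksize with pvesm set is an assumption about the matching storage option:

    # set the block size used for newly created disks on this ZFS storage
    pvesm set local-zfs --blocksize 4k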
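
For result 10, a quick way to see the fragmentation the kernel log hints at:

    # free memory chunks per zone and order; plenty of small chunks but no large ones indicates fragmentation
    cat /proc/buddyinfo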
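
For result 11, checking time and timezone on both source and target:

    # compare local time, universal time and the configured timezone on each side
    timedatectl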
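
For result 14, verifying the result after moving the disk away and back; the dataset name is a placeholder:

    # the newly created zvol should reflect the block size configured on the storage
    zfs get volblocksize rpool/data/vm-100-disk-0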
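
For result 15, the crash handling in one place, reusing the example ID from that reply:

    # list new and archived crash reports
    ceph crash ls
    # inspect a crash before archiving it
    ceph crash info 2020-10-29_03:47:12.641232Z_843e3d9d-bc56-46dc-8175-9026fa7f44a4
    # archive it so the HEALTH_WARN clears
    ceph crash archive 2020-10-29_03:47:12.641232Z_843e3d9d-bc56-46dc-8175-9026fa7f44a4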
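
For result 16, two ways to map an OSD to its physical device; the OSD ID 5 is a placeholder:

    # list the logical volumes used by local OSDs and the devices behind them
    ceph-volume lvm list
    # or ask the cluster for the OSD's metadata, which includes its device names
    ceph osd metadata 5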
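
For result 18, the named sysctls as a sketch; the values 1/2/1 are a common example, not a recommendation for your setup:

    # answer/announce ARP only for addresses that belong on the receiving interface
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2
    sysctl -w net.ipv4.conf.all.arp_filter=1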
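
For result 20, raising pg_num from the CLI; the pool name and the target value are placeholders:

    # on Nautilus and later, check what the autoscaler suggests first
    ceph osd pool autoscale-status
    # then raise pg_num (recent releases adjust pgp_num automatically, older ones need it set as well)
    ceph osd pool set mypool pg_num 2048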