Search results

  1. [SOLVED] going for max speed with proxmox 7; how to do it?

    You can add the path. root@pve13:~# man pveperf PVEPERF(1) Proxmox VE Documentation PVEPERF(1) NAME...
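    For reference, a minimal sketch of pointing pveperf at a storage path (the mount point below is a made-up example; substitute your own):

      # Benchmark CPU, fsync rate and seek time against a specific path
      # /mnt/pve/mystorage is hypothetical; pveperf defaults to / if omitted
      pveperf /mnt/pve/mystorage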
  2. VM 10GBit NIC working slow like 1GBit

    Hey Patrick, maybe misconfigured offloading features hurt the performance. I have no details here, but I often read about TSO / LRO / checksum offloading. Maybe better to turn it off in the VM. IIRC ethtool -K or something.
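    A sketch of what that could look like inside the guest, assuming the interface is named ens18 (a common virtio name; check yours with ip link):

      # Show current offload settings (lowercase -k only displays them)
      ethtool -k ens18
      # Disable segmentation and receive offloads (uppercase -K changes them)
      ethtool -K ens18 tso off gso off lro off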
  3. PBS backing up to RBD ceph volume- Alternatives?

    Hey, I don't think the Ceph people will miss this, but given that it's sometimes good and sometimes bad: have you checked that no scrubbing is active during the bad times?
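    A quick sketch for checking that during a slow window (output details vary by Ceph release):

      # Scrubbing PGs show up in the state column, e.g. active+clean+scrubbing
      ceph status
      # List only placement groups whose state includes "scrubbing"
      ceph pg dump pgs_brief 2>/dev/null | grep scrubbing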
  4. Cannot re-add OSD after destroying it

    Hey David, I think you have to "zap" your disk. But that's a guess only. Maybe this is the solution for you, but please double-check the device, because the command does what it says: it destroys!! ceph-volume lvm zap --destroy /dev/sdb
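    A cautious sketch around that command (/dev/sdb is just the example from the post; verify it is really the old OSD disk first):

      # Confirm the target disk's identity before wiping anything
      lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdb
      ceph-volume lvm list
      # Irreversible: removes partition table, LVM metadata and data
      ceph-volume lvm zap --destroy /dev/sdb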
  5. Replacing all OSDs

    Hello Thoe, if you have enough time and spare capacity, that is a stress-free approach, roughly as you described. We do: 1. Set the OSD to out, not to stop. Then the Ceph cluster reorganizes itself (rebalances). (Only works with enough free capacity.) If you...
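    Step 1 as commands, as a sketch (OSD id 12 is a made-up example):

      # Mark the OSD out so its data rebalances elsewhere; do not stop the daemon yet
      ceph osd out 12
      # Watch the cluster until the rebalance finishes before touching the next OSD
      ceph -w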
  6. Problem with volblocksize

    Hi, in the GUI at cluster level, in the storage configuration, you can set the block size of your ZFS pool. This size is chosen for new "disks", as in your example. If the backup does not fit, it is often because the block size is too big. Try 4k. But be warned, that may lead to bad performance. (IIRC...
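    The same setting is reachable on the CLI; a sketch assuming a zfspool storage named local-zfs (the name is an assumption):

      # Set the volblocksize used for newly created zvols on this storage
      pvesm set local-zfs --blocksize 4k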
  7. [SOLVED] Host OOM killing containers and KVM, but plenty of RAM available

    Hi, I can't give you technical details, but maybe it has to do with fragmentation. As I understand it, not only the total amount of free RAM matters, but also free RAM in the right zone and in the right block sizes. Jan 20 04:13:50 proxmox-1 kernel: [86074.425514] Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB...
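    That kernel line is the buddy allocator's per-order free list; the same data can be inspected at any time, as a sketch:

      # Columns are free blocks per order (4kB, 8kB, 16kB, ...) per memory zone;
      # plenty of free 4kB blocks but no high-order blocks means fragmentation
      cat /proc/buddyinfo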
  8. No previous backup found, cannot do incremental backup

    I have not checked all details of this thread, so take this as a silly question/hint only: do the source and the target have the same timezone configured? I saw a 1-hour difference between the timestamps. And yesterday I had a weird problem with Ceph/OSD (a wholly different topic, I know) due to differing times. (reason...
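    A quick sketch for comparing the two clocks (hostnames are placeholders; assumes systemd on both ends):

      # Compare timezone and NTP synchronization state on both machines
      ssh root@source timedatectl
      ssh root@target timedatectl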
  9. [SOLVED] Startup problems after upgrading to 6.3

    To be honest, it's "RTFM" that works here. Please mark the thread as solved.
  10. [SOLVED] Startup problems after upgrading to 6.3

    Recovery: If you have major problems with your Proxmox VE host, e.g. hardware issues, it could be helpful to just copy the pmxcfs database file /var/lib/pve-cluster/config.db and move it to a new Proxmox VE host. On the new host (with nothing running), you need to stop the pve-cluster service...
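    A sketch of how those steps might look on the new host (the backup file path is an assumption):

      # With no guests running on the new host:
      systemctl stop pve-cluster
      # Replace the pmxcfs database with the copy taken from the broken host
      cp /root/config.db /var/lib/pve-cluster/config.db
      systemctl start pve-cluster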
  11. zvol volblocksize doubts

    Hello Héctor, if you have enough storage, you can change the volblocksize parameter in the storage configuration of the cluster and move the disk away and back. Then the volblocksize of the new zvol reflects the changed value. As for the question of the best values, I think you have to benchmark...
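    A sketch of the move-away-and-back approach (VM id 100, disk scsi0 and the storage names are all assumptions):

      # After changing the blocksize in the storage configuration, move the disk off...
      qm move_disk 100 scsi0 other-store --delete 1
      # ...and back again; the recreated zvol picks up the new volblocksize
      qm move_disk 100 scsi0 local-zfs --delete 1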
  12. HEALTH_WARN 1 daemons have recently crashed

    Today I used on the command line: ceph crash ls, which gives a list of archived and new crashes. Then you can archive your crash: ceph crash archive 2020-10-29_03:47:12.641232Z_843e3d9d-bc56-46dc-8175-9026fa7f44a4 (you must replace the ID with your own value, of course). (sorry, did not see that you...
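    For completeness, the same workflow as a runnable sketch (the crash ID is the example from the post; use your own):

      ceph crash ls
      # Inspect a crash before archiving it
      ceph crash info 2020-10-29_03:47:12.641232Z_843e3d9d-bc56-46dc-8175-9026fa7f44a4
      ceph crash archive 2020-10-29_03:47:12.641232Z_843e3d9d-bc56-46dc-8175-9026fa7f44a4
      # Or archive everything at once to clear the health warning
      ceph crash archive-all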
  13. how to identify OSD / physical drive

    ceph-volume lvm list may also help. It depends on the Ceph version and your concrete setup.
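    Two ways to map an OSD to its physical drive, as a sketch (OSD id 3 is an example):

      # Shows the LVM volumes backing each OSD on the local host
      ceph-volume lvm list
      # Or ask the cluster for a specific OSD's device metadata
      ceph osd metadata 3 | grep -E '"devices"|"hostname"'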
  14. [SOLVED] Yet another bridge issue

    As I understand it, you want the VMs to use the bridge, so you must ping from the VMs. Proxmox itself doesn't "use" the bridge, in my opinion. Regarding sysctl.conf: you must look for yourself; it was only a hint. I do not know the right values for your situation.
  15. [SOLVED] Yet another bridge issue

    When it comes to ARP "problems" with multiple NICs on Linux, please search for: net.ipv4.conf.all.arp_ignore, net.ipv4.conf.all.arp_announce, net.ipv4.conf.all.arp_filter. I don't know whether this is still an issue with modern kernels, but I think so.
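    A sketch of commonly suggested values for those exact sysctls (whether they fit depends on the setup):

      # /etc/sysctl.d/99-arp.conf -- apply with: sysctl --system
      # Reply to ARP only if the target IP is configured on the receiving interface
      net.ipv4.conf.all.arp_ignore = 1
      # Prefer a source address on the outgoing interface for ARP announcements
      net.ipv4.conf.all.arp_announce = 2
      # Answer ARP only if the kernel would route the reply out of that interface
      net.ipv4.conf.all.arp_filter = 1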
  16. [SOLVED] Yet another bridge issue

    I think you don't want/need an IP on the second bridge, vmbr1. And if you want an IP, then probably not in the same subnet.
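    A sketch of an IP-less second bridge in /etc/network/interfaces (the port name eno2 is an assumption):

      # vmbr1 carries VM traffic only; the host itself has no address on it
      auto vmbr1
      iface vmbr1 inet manual
              bridge-ports eno2
              bridge-stp off
              bridge-fd 0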
  17. Better OSD balancing

    Gut feeling: I would increase the number of PGs. 1024 does match the recommendations for that number of OSDs, but I believe that exactly the phenomenon you are observing can happen when pg_num is too small.
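    As commands, a sketch (the pool name rbd and the target value are assumptions; raise pg_num gradually and watch the rebalance):

      # Check the current value first
      ceph osd pool get rbd pg_num
      # Increase placement groups; recent Ceph adjusts pgp_num along with it
      ceph osd pool set rbd pg_num 2048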
  18. Proxmox-ZFS-VM-Disks corrupt

    Hello Wolfgang, without further words: Linux <hostname> 2.2.14 #1 Mon May 15 11:35:14 MEST 2000 i?86 unknown. I am glad that I have this running with net0: pcnet=<....>. Regards
  19. Proxmox-ZFS-VM-Disks corrupt

    We have this (too) with Ceph. Unfortunately, the system in the VM is so old that I can only use IDE.
  20. Ceph Question: Replace OSDs

    I think it depends. If you go host by host, you get an imbalance between hosts and data migrates between them. If you add all 3 disks at the same time, then at least the host weights stay equally distributed.
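    To see the host-level weights this refers to, a quick sketch:

      # CRUSH tree with per-host and per-OSD weights; uneven host weights
      # during a host-by-host swap are what drives the extra data movement
      ceph osd tree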