Search results

  1. Proxmox VE 7.1 released!

    Me :-) But due to the many messages here about this problem, we have only migrated a few VMs to the new 7.1 so far; all of them without any errors.
  2. Ceph acting up after host reinstall

    It should be ceph.conf -> /etc/pve/ceph.conf, not ceph.conf -> ../ceph.conf. I think I've fallen into that trap myself before.
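
    For reference, on a PVE node /etc/ceph/ceph.conf is supposed to be a symlink into the cluster filesystem; a quick check-and-fix sketch, assuming the default Proxmox layout:

      ls -l /etc/ceph/ceph.conf                        # should point at /etc/pve/ceph.conf
      ln -sfn /etc/pve/ceph.conf /etc/ceph/ceph.conf   # recreate the link if it points elsewhere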
  3. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    Hello, could it be a RAM bottleneck? I read somewhere that the default per-OSD buffer was increased. Just a guess.
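
    If the setting being half-remembered here is osd_memory_target (an assumption; its default is 4 GiB on recent releases), it can be checked like this:

      ceph config get osd osd_memory_target   # per-OSD memory target, in bytes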
  4. CEPH Health warnings - how to resolve?

    I agree with itNGO and tom and don't understand your needs. If it is only a test cluster, you can try playing with a single-host rule (sketch below). That may fit your needs if you want to learn Ceph.
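
    A sketch of such a rule, lowering the failure domain from host to osd so all replicas may land on one box (rule and pool names are placeholders; test clusters only):

      ceph osd crush rule create-replicated replicated-osd default osd
      ceph osd pool set testpool crush_rule replicated-osd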
  5. ZFS pool lost after power outage

    Hey, have you checked whether PVE has already mounted the pool?
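
    Quick checks, assuming the pool is called tank (placeholder name):

      zpool status                      # is the pool imported at all?
      zfs get mounted,mountpoint tank   # is the dataset actually mounted?
      zpool import tank                 # import by hand if it is missing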
  6. Ceph min_size for large clusters

    Hi! Then you have only one copy of your data left in case of problems. min_size is not about monitors, but about data copies, i.e. the nodes that serve data via their OSDs. You must distinguish between the hypervisor cluster and the Ceph cluster / OSD nodes. If you are planning to lose more than one node at a time, then...
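
    Per pool, this can be inspected and changed like so (pool name is a placeholder):

      ceph osd pool get mypool min_size
      ceph osd pool set mypool min_size 2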
  7. Diagnosing slow ceph performance

    Maybe a silly question, but did you enable jumbo frames on your switch?
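
    An end-to-end test that jumbo frames really pass every hop (target address is a placeholder; 8972 = 9000 minus 28 bytes of IP/ICMP header):

      ping -M do -s 8972 10.10.10.2   # -M do sets don't-fragment, so oversized frames fail loudly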
  8. Ceph OSDs marked out, but still rebalance when I remove them.

    Hey aarcane, IIRC you had better set the CRUSH weight of the "leaving" OSD to zero, so that the weight of the host is altered as well. Otherwise a second rebalance occurs because the host weight changes again when the OSD is destroyed. Setting the OSD out on its own does not alter the host weight (IIRC).
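
    In command form, the drain-first approach might look like this (OSD id is a placeholder):

      ceph osd crush reweight osd.7 0   # host weight shrinks too; the only rebalance happens now
      # wait until all PGs are active+clean, then out/stop/destroy the OSD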
  9. Ceph is not configured to be really HA

    But Ceph is not so simple that you can just count the number of hosts or the like, and I doubt that any source says "odd number of servers" for Ceph. But that's all IIRC.
  10. Proxmox not listening on default port

    Maybe a mismatch between IPv4 and IPv6 use: configured for IPv6 and now trying via IPv4? I found: https://forum.proxmox.com/threads/web-interface-ipv6-only.44101/
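
    A quick way to see which addresses pveproxy is actually bound to (8006 is the default web UI port):

      ss -tlnp | grep 8006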
  11. [SOLVED] going for max speed with proxmox 7; how to do it?

    You can add the path. root@pve13:~# man pveperf PVEPERF(1) Proxmox VE Documentation PVEPERF(1) NAME...
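
    For example, benchmarking a mounted storage instead of the root filesystem (the path is a placeholder):

      pveperf /mnt/pve/mystorage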
  12. VM 10GBit NIC working slow like 1GBit

    Hey Patrick, maybe misconfigured offloading features are hurting the performance. I have no details here, but I have often read about TSO / LRO / checksum offloading; maybe better to turn it off in the VM. IIRC ethtool -K or something.
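
    Inside the VM that could look like this (the interface name is a placeholder; which offloads matter depends on NIC and driver):

      ethtool -k ens18                           # show current offload settings
      ethtool -K ens18 tso off gso off lro off   # turn segmentation offloads off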
  13. PBS backing up to RBD ceph volume - Alternatives?

    Hey, I don't think the Ceph people will miss this, but given that it is sometimes good and sometimes bad: have you checked that no scrubbing is active during the bad times?
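
    An easy way to correlate is to look at the cluster during a bad phase:

      ceph -s                                              # reports running scrubs/deep-scrubs
      ceph pg dump pgs_brief 2>/dev/null | grep -i scrub   # any PGs in a scrubbing state?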
  14. Cannot re-add OSD after destroying it

    Hey David, I think you have to "zap" your disk, but that's only a guess. Maybe this is the solution for you. But please double-check the device, because the command does what it says: it destroys!! ceph-volume lvm zap --destroy /dev/sdb
  15. Replacing all OSDs

    Hello Thoe, if you have enough time and spare capacity, that is a stress-free approach, roughly as you described. We do: 1. Set the OSD to out, not to stop. Then the Ceph cluster reorganizes itself (rebalance). (Only works with enough free capacity.) If you...
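
    Step 1 in command form (OSD id is a placeholder):

      ceph osd out osd.3   # mark out only; the daemon keeps running and helps migrate data
      ceph -w              # watch until all PGs are active+clean again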
  16. Problem with volblocksize

    Hi, in the GUI, at cluster level in the storage configuration, you can set the block size of your ZFS pool. This size is chosen for new "disks", as in your example. If the backup does not fit, it is often because the block size is too big. Try 4k, but be warned that this may lead to bad performance. (IIRC...
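
    The same setting can be put into /etc/pve/storage.cfg directly; a sketch with placeholder storage and pool names:

      zfspool: local-zfs
              pool rpool/data
              blocksize 4k
              content images,rootdir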
  17. [SOLVED] Host OOM killing containers and KVM, but plenty of RAM available

    Hi, I can't give you technical details, but maybe it has to do with fragmentation. As I understand it, not only the total amount of free RAM matters, but also whether free RAM exists in the right category and in the right block sizes. Jan 20 04:13:50 proxmox-1 kernel: [86074.425514] Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB...
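
    That per-size breakdown can be checked at any time, without waiting for an OOM report; each column counts free blocks of the next power-of-two order:

      cat /proc/buddyinfo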
  18. No previous backup found, cannot do incremental backup

    I have not checked all the details of this thread, so take this as a silly question/hint only: do the source and the target have the same timezone configured? I saw a 1-hour difference between the timestamps. And yesterday I had a weird problem with ceph/osd (a wholly different topic, I know) due to different times. (reason...
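
    Quick comparison, run on both machines:

      timedatectl   # local time, UTC, timezone, and NTP sync state side by side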
  19. [SOLVED] Startup problems after upgrade to 6.3

    To be honest, it's "RTFM" that works. Please mark the thread as solved.
  20. [SOLVED] Startup problems after upgrade to 6.3

    Recovery: If you have major problems with your Proxmox VE host, e.g. hardware issues, it could be helpful to just copy the pmxcfs database file /var/lib/pve-cluster/config.db and move it to a new Proxmox VE host. On the new host (with nothing running), you need to stop the pve-cluster service...
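
    A sketch of those steps on the new host (the source path is a placeholder; the pmxcfs chapter of the admin guide has the authoritative procedure):

      systemctl stop pve-cluster
      cp /path/to/rescued/config.db /var/lib/pve-cluster/config.db
      systemctl start pve-cluster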
