me :-)
but due to the many messages here about this problem we have only migrated a few VMs to the new 7.1 so far; all of them without any errors.
I agree with itNGO and tom and don't quite understand your needs.
If it is only a test cluster, you can try playing with a single-host rule. That may fit your needs if you want to learn Ceph.
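If you want to try that, a minimal sketch could look like this (rule and pool names are just examples):
# create a replicated rule with "osd" as failure domain, so all copies may land on one host
ceph osd crush rule create-replicated single-host default osd
# point a test pool at that rule
ceph osd pool set testpool crush_rule single-host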
Hi!
then you have only one copy of your data left in case of problems.
min_size does not refer to monitors, but to data copies, i.e. the nodes that serve data via OSDs. You have to distinguish between the hypervisor cluster and the Ceph cluster / OSD nodes.
If you are planning to lose more than one node at a time, then...
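If you are unsure, you can check both values on the pool itself; a quick sketch (pool name is just an example):
ceph osd pool get mypool size       # number of data copies
ceph osd pool get mypool min_size   # copies needed so I/O keeps running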
Hey aarcane,
IIRC you had better set the CRUSH weight of the "leaving" OSD to zero, so that the weight of the host is also altered.
Otherwise the second rebalancing occurred due to the altered host weight after destroying the OSD. Setting the OSD out on its own does not alter the host weight (IIRC).
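Roughly like this, if I remember the commands right (the OSD id is just an example):
# lowers the host weight in the CRUSH map as well, so only one rebalance happens
ceph osd crush reweight osd.7 0
# wait for the rebalance to finish, then stop/destroy the OSD as usual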
but Ceph is not so simple that you can just count the number of hosts or so.
And I doubt that any source says "odd number of servers" for Ceph.
But that's all IIRC.
Maybe a mismatch between IPv4 / IPv6 use. Configured for IPv6 and now trying via IPv4?
I found:
https://forum.proxmox.com/threads/web-interface-ipv6-only.44101/
Hey Patrick,
maybe misconfigured offloading features hurt the performance. I have no details here, but I often read about TSO / LRO / checksum offloading. Maybe better to turn it off in the VM.
IIRC ethtool -K or something.
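Something along these lines, assuming the NIC inside the VM is called ens18 (adapt to yours):
ethtool -k ens18                                  # show current offload settings
ethtool -K ens18 tso off lro off tx off rx off    # disable tso / lro / checksum offloading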
Hey,
I don't think the Ceph people will miss this, but given that it's sometimes good and sometimes bad: have you checked that no scrubbing is active during the bad times?
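A quick way to check during a bad period (just a sketch):
ceph -s                                  # look for "scrubbing" / "deep scrubbing" in the PG states
ceph pg dump pgs_brief | grep -i scrub   # list PGs that are currently scrubbing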
Hey David,
I think you have to "zap" your disk. But that's only a guess.
Maybe this is the solution for you:
But please double-check the device, because the command does what it says: it destroys!!
ceph-volume lvm zap --destroy /dev/sdb
Hello Thoe,
if you have enough time and spare capacity, that is a stress-free approach, roughly as you described it.
We do it like this:
1. Set the OSD to out, not to stop (see the command sketch below).
Then the Ceph cluster reorganises (rebalances) itself. (Only works with enough free capacity.) If you...
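As a rough sketch (osd.3 is just an example id):
ceph osd out osd.3               # mark out, data is rebalanced away
ceph -s                          # wait until recovery has finished
ceph osd safe-to-destroy osd.3   # optional check before actually removing the OSD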
Hi,
in the GUI, at cluster level in the storage configuration, you can set the block size of your ZFS pool. This size is chosen for new "disks" as in your example.
If the backup does not fit, it is often because the block size is too big.
Try 4k.
But be warned, that may lead to bad performance.
(iirc...
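For reference, the corresponding entry in /etc/pve/storage.cfg would look roughly like this (storage and pool names are just examples):
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        blocksize 4k
        sparse 1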
Hi,
I can't give you technical details, but maybe it has to do with fragmentation.
As I understand it, not only the total amount of free RAM is important, but also free RAM in the right category and of the right size.
Jan 20 04:13:50 proxmox-1 kernel: [86074.425514] Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB...
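One way to look at that fragmentation is the buddy allocator info (the same data as in the kernel line above):
cat /proc/buddyinfo    # free blocks per order (4kB, 8kB, 16kB, ...) per memory zone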
I have not checked all details of this thread, so take this as a silly question/hint only: do the source and the target have the same timezone configured?
I saw a 1-hour difference between the timestamps.
And yesterday I had a weird problem with Ceph/OSD (a wholly different topic, I know) due to different times. (reason...
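To compare, you could run this on both source and target (just a quick check):
timedatectl    # shows time zone, local time and NTP sync status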
Recovery
If you have major problems with your Proxmox VE host, e.g. hardware issues, it could be helpful to just copy the pmxcfs database file /var/lib/pve-cluster/config.db and move it to a new Proxmox VE host. On the new host (with nothing running), you need to stop the pve-cluster service...
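Just to illustrate the quoted text, a rough sketch (the backup path is an example, check the admin guide for the exact procedure):
systemctl stop pve-cluster.service
cp /path/to/backup/config.db /var/lib/pve-cluster/config.db   # path is an example
systemctl start pve-cluster.service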