I found the culprit, and I also found that I had run into a similar problem on a different cluster before: a broken RBD image.
The problem was that in the Ceph pool I described, pxd: pxd_a-metadata, there was one RBD image that was broken. Running rbd -p pxd_a-metadata you see it, but trying to get information about it...
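For reference, a sketch of the two commands involved; the image name is a placeholder, not the real one:

```shell
# List all RBD images in the metadata pool; the broken image shows up like any other
rbd -p pxd_a-metadata ls
# Asking for details of the broken image is the part that fails or hangs
rbd -p pxd_a-metadata info <image-name>
```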
Hello,
since recently I experience a strange problem when trying to delete a VM-template on a VM cluster named pxa no matter which template I try. Templates and VM storage for VMs on pxa reside on rbd "ceph-storage" of another pve cluster named pxd. pxa has no own mass storage. pxd is a...
Well then you gave the answer :-). If osd out does nothing more than mark an OSD as no longer part of the cluster, then I can use my initial solution and simply switch it to in again.
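To spell out what I mean, with a hypothetical OSD id 12:

```shell
# Mark the OSD out: data is rebalanced away, but the OSD stays registered
ceph osd out 12
# ...replace the disk / do the maintenance...
# Mark it in again so CRUSH assigns data to it once more
ceph osd in 12
```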
Have a nice day
Rainer
Well, the reason for my proposal is that this is the way I replace broken disks in a pure Nautilus cluster I run when an OSD has a disk failure. There the ceph-volume call from above (and a new disk) are enough to "repair" the OSD. It's then "in" and "up".
The small difference here is that...
Hello,
recently two disks on two different servers of a hyperconverged PVE cluster died. Ceph rebalanced and is healthy again. So I will get two new disks, insert them into the nodes, and then...?
At the moment both OSDs are marked down and out in the output of ceph osd tree. Both are still...
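A sketch of the replacement procedure I have in mind, with a hypothetical OSD id and device name:

```shell
# Confirm which OSDs are down/out
ceph osd tree
# Recreate the OSD on the new disk, reusing the old id
# (id 12 and /dev/sdX are placeholders for the real values)
ceph-volume lvm create --osd-id 12 --data /dev/sdX
```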
At the moment I am running two PVE clusters, both on PVE 7.4. One of the two uses storage from an external Ceph cluster running Ceph Nautilus (14.2.22). This works for me without any problems.
Now in the online docs "Upgrade from 7 to 8", under Prerequisites, I read that for...
I was able to find the problem.
I started with the host that, I thought, had only one cluster link. I rebooted it, and after the reboot was done it was an isolated node. By looking at the logs of other cluster members that were still part of the cluster (all green), I could see that there seemed to be...
Hello,
I run an 8-node PVE cluster, version "pve-manager/7.4-3/9002ab8a". Last Friday this cluster suddenly broke down. At first the web interface showed only two hosts marked red; after a while all nodes were red. The reason might have been a network loop someone created around this time, but...
Hello,
I have a strange firewall-related issue and found a solution, which consists of deleting one iptables rule that is placed in the chain PVEFW-FORWARD.
The problem this rule causes is that it prevents, to some degree, two VMs running on the very same host from talking to one another with...
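To illustrate, the rule can be located and removed roughly like this; the rule number 3 is hypothetical, and since pve-firewall manages its own chains, a manual deletion is not persistent:

```shell
# List PVEFW-FORWARD with rule numbers to find the offending rule
iptables -L PVEFW-FORWARD -n -v --line-numbers
# Delete it by its position (3 is just an example)
iptables -D PVEFW-FORWARD 3
```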
I had the same problem installing PVE 7.3 on a Supermicro system with an AMD processor. The graphics device here is an ASPEED card, and the Xorg driver that is needed but not found is named ast. I found this information in Xorg's logfile after Xorg died.
This solution to get Xorg running is based on...
Hello,
I am running a five-node PVE cluster with PVE version 7.3-4. I would like to add three nodes which at the moment form a small cluster of their own. So I have to reinstall these nodes and then join each newly installed node to the existing five-node cluster.
The question is how...
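The join itself would be something like the following on each reinstalled node; the IP is a placeholder for any existing cluster member:

```shell
# Join the existing five-node cluster (192.0.2.10 stands for a current member)
pvecm add 192.0.2.10
# Check quorum and membership afterwards
pvecm status
```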
Hello,
to gain some extra performance for a (non-hyperconverged Nautilus) Ceph storage on spinners, we configured several Proxmox VMs "years" ago to use a stripe across 4 or 6 VM (RBD Ceph) disks. The VMs' disks are used as PVs (LVM), and each logical volume is created as a stripe across the...
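As a sketch of that layout (device names and the stripe size are assumptions, not our exact values):

```shell
# Turn the four RBD-backed VM disks into LVM physical volumes
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vgdata /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Create a logical volume striped across all four PVs (-i 4)
# with a 64 KiB stripe size (-I 64)
lvcreate -n lvdata -l 100%FREE -i 4 -I 64 vgdata
```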
I use Proxmox PVE 7.3 on one cluster with a lot of VMs, and all of them have their storage on a hyperconverged Ceph pool with EC. No problem at all, except that the PVE web interface forgets the data-pool option when an RBD storage on EC is used from a remote PVE cluster, as I just learned...
On the side of cluster "A" the storage.cfg entry for the pool in question looks like this:
rbd: pool_a
    content images
    krbd 0
    monhost a.b.c.d
    pool pool_a-metadata
    username admin
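As far as I understand, for an EC-backed RBD storage the entry additionally needs a data-pool line pointing at the EC data pool; a sketch of what I would expect (the pool_a-data name is my assumption):

```
rbd: pool_a
    content images
    krbd 0
    monhost a.b.c.d
    pool pool_a-metadata
    data-pool pool_a-data
    username admin
```

With data-pool set, image data goes to the EC pool while the replicated pool holds the metadata.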
On Cluster "D" I also took a look at storage.cfg for pool_d:
rbd: pool_d...
For RBDs/CephFS on EC pools there are technical reasons why two pools are needed: one for the data and another for the metadata. In ceph df you usually see that the metadata pool uses some storage and the data pool much more, which is not the case in my setup. The two pools...
Hello,
I have a PVE cluster "A" (7.3) which has NO hyperconverged Ceph. There is another PVE cluster "D" (7.3) which has a lot of Ceph storage. So I created one 5+3 EC pool using pveceph pool create pool_d --erasure-coding k=5,m=3, which results in a pool_d-data and a pool_d-metadata pool. Next I...