Search results

  1. Unsuccessful deleting VM templates, I get an error in the tasks display: "listing images failed"

    I found the culprit and also found that I had a similar problem on a different cluster before: a broken rbd image. The problem was that in the ceph pool I described (pxd: pxd_a-metadata) there was one rbd image that was broken. Running rbd -p pxd_a-metadata you see it, but trying to get information about it...
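
    A minimal sketch of how such a broken image is typically tracked down, assuming the pool name from the snippet and a placeholder image name (the exact failing command is truncated above):

        # list all rbd images in the pool; the broken one still shows up here
        rbd -p pxd_a-metadata ls
        # querying a single image is typically where a broken one fails
        rbd -p pxd_a-metadata info <image-name>
        # once confirmed broken and no longer needed, it can be removed
        rbd -p pxd_a-metadata rm <image-name>
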
  2. Unsuccessful deleting VM templates, I get an error in the tasks display: "listing images failed"

    Hello, since recently I have been experiencing a strange problem when trying to delete a VM template on a pve cluster named pxa, no matter which template I try. Templates and VM storage for VMs on pxa reside on the rbd storage "ceph-storage" of another pve cluster named pxd. pxa has no mass storage of its own. pxd is a...
  3. Died disk, osd is down and out, how to repair?

    Well, then you gave the answer :-). If osd out does nothing more than make an OSD no longer part of the cluster, then I can use my initial solution and simply switch it back to in again. Have a nice day, Rainer
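
    For reference, a sketch of the out/in switch discussed here, with a placeholder OSD id:

        ceph osd out osd.<id>   # mark the OSD out; data is rebalanced away from it
        ceph osd in osd.<id>    # mark it in again, e.g. after the disk was replaced
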
  4. Died disk, osd is down and out, how to repair?

    Well, the reason for my proposal is that this is the way I replace broken disks in a pure Nautilus cluster I run when an OSD has a disk failure. Then the ceph-volume call from above (and a new disk) are enough to "repair" the osd. It's then "in" and "up". The small difference here is that...
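
    A sketch of the kind of ceph-volume based replacement referred to (the "ceph-volume call from above" is not included in the snippet); the OSD id and device name are placeholders, and the destroy step assumes the old OSD id is being reused:

        # wipe the dead OSD's keys and metadata but keep its id for reuse
        ceph osd destroy osd.<id> --yes-i-really-mean-it
        # create a new OSD on the replacement disk, reusing the old id
        ceph-volume lvm create --osd-id <id> --data /dev/<new-disk>
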
  5. Died disk, osd is down and out, how to repair?

    Here are the requested data:

        # ceph -s
          cluster:
            id:     xyz
            health: HEALTH_WARN
                    2 devices have ident light turned on

          services:
            mon: 3 daemons, quorum pve01,pve05,pve11 (age 5w)
            mgr: pve05(active, since 5w), standbys: pve11, pve01
            osd: 72 osds: 70 up (since 29h), 70 in (since 29h)

          data...
  6. Died disk, osd is down and out, how to repair?

    Hello, recently two disks on two different servers of a hyperconverged pve cluster died. Ceph rebalanced and is healthy again. So I will get two new disks, insert them into the nodes, and then...? At the moment both osds are marked down and out in the output of ceph osd tree. Both are still...
  7. pve 8 and pre-Quincy hyperconverged ceph versions possible

    At the moment I am running two pve clusters, both with pve 7.4. One of the two is using storage from an external ceph cluster running ceph Nautilus (14.2.22). This is working for me without any problems. Now in the online docs "Upgrade from 7 to 8", under Prerequisites, I read that for...
  8. Some odds after a cluster broke into single pieces and healed again (a bit lengthy, sorry)

    I was able to find the problem. I started with the host that, I thought, had only one cluster link. I rebooted it, and after the reboot was done it was an isolated node. By looking at the logs of other cluster members that were still part of the cluster (all green) I could see that there seemed to be...
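
    A sketch of how such a cluster-link problem can be narrowed down on a node, using the standard pve/corosync tools (time range is only an example):

        pvecm status          # quorum and membership as seen from this node
        corosync-cfgtool -s   # per-link status of the corosync links
        journalctl -u corosync -u pve-cluster --since "-1h"   # recent cluster logs
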
  9. Some odds after a cluster broke into single pieces and healed again (a bit lengthy, sorry)

    Hello, I run an 8 node pve cluster, version "pve-manager/7.4-3/9002ab8a". Last Friday this cluster suddenly broke down. At first the web interface showed only two hosts marked red; after a while all nodes were red. The reason might have been a network loop someone created around this time, but...
  10. How to modify pve cluster firewall rules after they have been set

    Hello, I have a strange firewall related issue and found a solution which consists of deleting one iptables rule that is placed in the chain PVEFW-FORWARD. The problem this rule causes is that, to some degree, it prevents two VMs running on the very same host from talking to one another with...
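
    A sketch of the kind of manual rule deletion described, with a placeholder rule number; note that the pve-firewall service may re-apply its compiled ruleset, so a manual delete is not necessarily permanent:

        # show the chain with rule numbers
        iptables -L PVEFW-FORWARD -n -v --line-numbers
        # delete one rule by its number
        iptables -D PVEFW-FORWARD <rule-number>
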
  11. 7.3 install failed with error "Cannot run in framebuffer mode"

    I had the same problem installing pve 7.3 on a Supermicro system with an AMD processor. The graphics device here is an ASPEED card, and the Xorg driver that is needed but not found is named ast. I found this information in Xorg's logfile after Xorg died. This solution to get Xorg running is based on...
  12. pve versions when extending an existing pve cluster with new nodes

    Hello, I am running a five node pve cluster with PVE version 7.3-4. I would like to add three nodes which at the moment form a small cluster of their own. So I have to reinstall these nodes and then join each newly installed node to the existing five node cluster. The question is how...
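
    The join itself would look roughly like this on each freshly reinstalled node (a sketch, with a placeholder address for one of the existing members):

        pvecm add <ip-of-an-existing-cluster-node>   # run on the new node
        pvecm status                                 # verify membership afterwards
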
  13. Strange disk-hang problem when using a filesystem on an lv with striped disks

    Hello, to gain some extra performance for a (non hyperconverged, Nautilus) ceph storage on spinners, we configured several Proxmox VMs "years" ago to use a stripe across 4 or 6 VM (rbd ceph) disks. The VMs' disks are used as PVs (LVM) and each logical volume is created as a stripe across the...
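
    A minimal sketch of such a striped setup inside a VM, with hypothetical device and volume names; stripe count, stripe size, and LV size are examples only:

        pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
        vgcreate vg_stripe /dev/sdb /dev/sdc /dev/sdd /dev/sde
        # -i 4 stripes across the four PVs, -I 64 sets a 64 KiB stripe size
        lvcreate -i 4 -I 64 -L 500G -n lv_data vg_stripe
        mkfs.xfs /dev/vg_stripe/lv_data
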
  14. ceph df shows 0 for data pool in an ec pool setup

    I use proxmox pve 7.3 on one cluster with a lot of VMs, and all of them have their storage on a hyperconverged ceph pool with EC. No problem at all, except that the pve web interface forgets about the data-pool option when an rbd storage on EC is used on a remote pve cluster, as I just learned...
  15. ceph df shows 0 for data pool in an ec pool setup

    On the side of cluster "A" the storage.cfg entry for the pool in question looks like this:

        rbd: pool_a
             content images
             krbd 0
             monhost a.b.c.d
             pool pool_a-metadata
             username admin

    On cluster "D" I also took a look at storage.cfg for pool_d:

        rbd: pool_d...
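
    For comparison, a sketch of what the entry on cluster "A" might look like with the data-pool property this thread revolves around (assuming the pool names from the posts):

        rbd: pool_a
             content images
             krbd 0
             monhost a.b.c.d
             pool pool_a-metadata
             data-pool pool_a-data
             username admin
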
  16. ceph df shows 0 for data pool in an ec pool setup

    You are right, I forgot: it's rbd I use for my VMs.
  17. ceph df shows 0 for data pool in an ec pool setup

    There are technical details of RBD/CephFS on EC pools that are the reason two pools are needed: one for the data and another for the metadata. In ceph df you usually see that the metadata pool uses some storage but the data pool much more, which is not the case in my setup. The two pools...
  18. ceph df shows 0 for data pool in an ec pool setup

    Hello, I have a pve cluster "A" (7.3) which has NO hyperconverged ceph. There is another pve cluster "D" (7.3) which has a lot of ceph storage. So I created one 5+3 ec pool using pveceph pool create pool_d --erasure-coding k=5,m=3, which results in a pool_d-data and a pool_d-metadata pool. Next I...
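
    A sketch of the two steps involved, assuming the pool names from the post and assuming the data-pool property is also accepted on the pvesm command line (otherwise it can be set in storage.cfg as shown above):

        # on cluster "D": create the EC pool pair
        pveceph pool create pool_d --erasure-coding k=5,m=3
        # on cluster "A": add the remote rbd storage, pointing data at the EC data pool
        pvesm add rbd pool_d --pool pool_d-metadata --data-pool pool_d-data \
            --monhost a.b.c.d --username admin --content images --krbd 0
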
