Search results

  1.

    What does pve-rbd-storage-configure-keyring do, exactly?

    Hello, I run two pve clusters (8.4.11). One, named "D", has a ceph storage cluster configured but runs no VMs (too little RAM). The second cluster, "A", has no ceph cluster and thus no storage of its own, but many VMs. To bring both together, cluster A is simply using...
  2.

    Unable to delete VM templates; the tasks display shows the error "listing images failed"

    Hello, recently I have been experiencing a strange problem when trying to delete VM templates on a pve cluster named pxa, no matter which template I try. The templates and the VM storage on pxa reside on the rbd storage "ceph-storage" of another pve cluster named pxd; pxa has no mass storage of its own. pxd is a...
  3.

    Dead disk, OSD is down and out, how to repair?

    Hello, recently two disks on two different servers of a hyperconverged pve cluster died. Ceph rebalanced and is healthy again. So I will get two new disks, insert them into the nodes, and then...? At the moment both OSDs are marked down and out in the output of ceph osd tree. Both are still...
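
    A plausible replacement sequence for this situation, assuming pveceph on a current PVE release (the OSD id and device name below are placeholders, not taken from the post):

    ```shell
    # Confirm which OSDs are marked down/out (osd id 12 is a placeholder)
    ceph osd tree

    # Remove the dead OSD and clean up its leftover LVM/partition metadata
    pveceph osd destroy 12 --cleanup 1

    # After physically inserting the new disk, create a fresh OSD on it
    pveceph osd create /dev/sdX
    ```

    Ceph then backfills the new OSD automatically once it comes up and in.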
  4.

    pve 8 and pre-Quincy hyperconverged ceph versions: possible?

    At the moment I am running two pve clusters, both with pve 7.4. One of the two uses storage from an external ceph cluster running Ceph Nautilus (14.2.22). This has been working for me without any problems. Now in the online docs "Upgrade from 7 to 8", under Prerequisites, I read that for...
  5.

    Some oddities after a cluster broke into single pieces and healed again (a bit lengthy, sorry)

    Hello, I run an 8-node pve cluster, version "pve-manager/7.4-3/9002ab8a". Last Friday this cluster suddenly broke down. At first the web interface showed only two hosts marked red; after a while all nodes were red. The reason might have been a network loop someone created around that time, but...
  6.

    How to modify pve cluster firewall rules after they have been set

    Hello, I have a strange firewall-related issue and found a solution that consists of deleting one iptables rule placed in the chain PVEFW-FORWARD. The problem this rule causes is that, to some degree, it prevents two VMs running on the very same host from talking to one another with...
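
    For reference, a sketch of inspecting and removing such a rule (the rule position below is a placeholder). Note that pve-firewall regenerates its chains, so a manual delete is only temporary; persistent changes belong in the firewall configuration under /etc/pve/firewall/ or the GUI:

    ```shell
    # List the generated chain with rule positions
    iptables -L PVEFW-FORWARD -n --line-numbers

    # Delete one rule by position (3 is a placeholder) -- pve-firewall will
    # restore it on its next compile, so this is a temporary workaround only
    iptables -D PVEFW-FORWARD 3
    ```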
  7.

    pve versions when extending an existing pve cluster with new nodes

    Hello, I am running a five-node pve cluster with PVE version 7.3-4. I would like to add three nodes which at the moment form a small cluster of their own. So I have to reinstall these nodes and then join each newly installed node to the existing five-node cluster. The question is how...
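
    Once a node is reinstalled with a PVE version matching the existing cluster, the join itself is a single command (the IP below is a placeholder):

    ```shell
    # On each freshly installed node, join it to the existing cluster
    pvecm add <ip-of-an-existing-cluster-node>

    # Afterwards, verify quorum and the new node count
    pvecm status
    ```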
  8.

    Strange disk-hang problem when using a filesystem on an LV with striped disks

    Hello, to gain some extra performance from a (non-hyperconverged Nautilus) ceph storage on spinners, we configured several Proxmox VMs "years" ago to use a stripe across 4 or 6 VM (rbd/ceph) disks. The VM disks are used as PVs (LVM), and each logical volume is created as a stripe across the...
  9.

    ceph df shows 0 for the data pool in an EC pool setup

    Hello, I have a pve cluster "A" (7.3) which has NO hyperconverged ceph. There is another pve cluster "D" (7.3) which has a lot of ceph storage. So I created a 5+3 EC pool using pveceph pool create pool_d --erasure-coding k=5,m=3, which results in a pool_d-data and a pool_d-metadata pool. Next I...
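
    The command quoted in the post creates two pools because RBD needs a replicated pool for image headers and OMAP data, which erasure-coded pools cannot store; a sketch of checking where the data actually lands:

    ```shell
    # Create the 5+3 EC pool (command as quoted in the post); this yields
    # pool_d-data (erasure-coded) plus pool_d-metadata (replicated)
    pveceph pool create pool_d --erasure-coding k=5,m=3

    # With RBD on EC pools, the bulk image data goes into the -data pool
    # while image headers live in -metadata; compare the usage of both here
    ceph df
    ```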
  10.

    Possible problem cloning via kernel rbd in PVE

    Hello, I run several 7.3 pve clusters. Two of these clusters are NOT hyperconverged and instead use an external Nautilus ceph cluster for storage. Of course, each of the two pve clusters uses a different ceph pool of this ceph cluster. On one of the pve clusters (call it "B") I saw that for the...
  11.

    hyperconverged pve: to upgrade or simply stay with a running system?

    Hello, I would like to ask how you deal with updates for pve and hyperconverged ceph. I am now considering updating a production pve 7.2 cluster with 12 hosts running Octopus. With an update to 7.3 I would also have to update the hosts' cluster to at least Ceph Pacific. There is a...
  12.

    Migrate VMs in a proxmox pool to use krbd instead of rbd

    I have a proxmox 6.4 pve pool that accesses an external Ceph Nautilus cluster for rbd storage (in an erasure-coded ceph pool). All of the roughly 80 VMs are currently *not* using krbd to access their rbds. Recently I "rediscovered" krbd, and now I am wondering how to migrate existing VMs so that they will...
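
    One way to switch is to flip the krbd flag on the storage definition (the storage name "ceph-ext" below is a placeholder); each VM then picks up the kernel client the next time its disks are reopened, i.e. after a stop/start:

    ```shell
    # Enable krbd on the existing RBD storage entry (name is hypothetical)
    pvesm set ceph-ext --krbd 1

    # Inspect the resulting storage definition
    grep -A 8 'rbd: ceph-ext' /etc/pve/storage.cfg
    ```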
  13.

    Snapshot Problems

    Hello, today I wanted to create a VM snapshot. The storage backend is a separate ceph cluster, so it is not hyperconverged. I forgot to deactivate the RAM snapshot option and started; it slowly processed the VM's RAM, so I decided to cancel the snapshot. Afterwards I had a locked VM with am...
  14.

    cannot remove templates any longer after upgrade to pve 6.4

    This morning I upgraded my pve cluster (5 hosts) from 6.3 to PVE 6.4. Basically everything works fine except for one detail: I cannot delete VM templates. If I try, I get an error message saying: TASK ERROR: rbd error: rbd: listing images failed: (2) No such file or directory. My pve-version is...
  15.

    was: cannot remove templates any longer after upgrade to pve 6.4

    Sorry, I selected the completely wrong forum. Reposted the article in Proxmox Virtual Environment -> Proxmox VE: Installation and configuration. Sorry.
  16.

    removing a template: error during cfs-locked 'storage-ceph' operation: rbd snap purge

    Hello, since today I have a strange problem with my proxmox installation. The storage backend used is Ceph Nautilus. All VMs are created from templates as full clones. Since today, it seems that whenever I delete a VM template I get this kind of error message: Removing all snapshots: 0%...
  17.

    Problems shutting down a Win 10 VM

    Hello, I have installed proxmox 6.1-3 and, as a test, set up a Windows 10 VM and brought it up to the current patch level. I have also installed the guest agent and activated it in proxmox. A qm agent <vm-id> ping works without errors on the node on which the VM runs. The...
  18.

    Security of resource pools

    Hello, I recently installed a test cluster of three machines with Proxmox VE 6. I like the interface very much. The storage backend is a separate CEPH cluster. We would like to use Proxmox to give different working groups largely self-managed administration of their own VMs to...