Search results

  1. ceph df shows 0 for data pool in an EC pool setup

    Hello, I have a pve cluster "A" (7.3) which has NO hyperconverged ceph. There is another pve cluster "D" (7.3) which has a lot of ceph storage. So I created one 5+3 EC pool using pveceph pool create pool_d --erasure-coding k=5,m=3, which results in a pool_d-data and a pool_d-metadata pool. Next I...
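    The snippet above uses PVE's erasure-coding helper. A minimal sketch of that setup, assuming PVE 7.2 or later (the pool name and k/m values are taken from the post; the verification commands are standard Ceph tooling):

    ```shell
    # Create a 5+3 erasure-coded pool; PVE generates pool_d-data (EC) plus
    # pool_d-metadata (replicated), since RBD needs a replicated pool for
    # image headers and omap data.
    pveceph pool create pool_d --erasure-coding k=5,m=3

    # Verify both pools exist and inspect usage: with k=5,m=3 the raw-space
    # overhead factor is (5+3)/5 = 1.6x instead of 3x for replica-3.
    ceph osd pool ls detail
    ceph df
    ```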
  2. Possible problem cloning via kernel rbd in PVE

    Hello, I run several 7.3 pve clusters. Two of these clusters are NOT hyperconverged and instead use an external Nautilus ceph cluster for storage. Of course, each of these pve clusters uses a different ceph pool on that ceph cluster. On one of the pve clusters (call it "B") I saw that for the...
  3. Problem with removing a disk on ceph storage

    Today I had a similar problem when I removed some VMs from a pve 7.2 system with 3 hosts and some ceph VM images (vm-33-disk-2, ...). Storage in my setup is provided by an external Nautilus cluster. These rbd images had watchers, and I was unable to "rbd rm" them. So just like...
  4. Hyperconverged pve: to upgrade or simply stay with a running system

    Hello, I would like to ask how you deal with updates for pve and hyperconverged ceph. I am now considering updating a productive pve 7.2 cluster with 12 hosts running Octopus. With an update to 7.3 I would also have to update the hosts' ceph cluster to at least Pacific. There is a...
  5. Shutdown of the Hyper-Converged Cluster (CEPH)

    Is this recommendation only useful for doing maintenance on a single host, or can I also stop all VMs, set noout for hyperconverged ceph, and then disable pve-ha-crm & pve-ha-lrm on all pve cluster nodes before shutting down the nodes one after another, e.g. in case of a power outage with a UPS...
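    The sequence this snippet describes (stop VMs, set noout, stop the HA services, power down) can be sketched roughly as follows; this is an assumption-laden outline of the steps the post names, not an official procedure:

    ```shell
    # On one node: tell Ceph not to mark OSDs out and rebalance
    # while the whole cluster goes down.
    ceph osd set noout

    # On EVERY PVE node: stop the HA stack so it does not try to
    # recover or restart guests during the shutdown.
    systemctl stop pve-ha-lrm
    systemctl stop pve-ha-crm

    # ...shut the nodes down one after another. After power-up, reverse:
    systemctl start pve-ha-crm
    systemctl start pve-ha-lrm
    ceph osd unset noout
    ```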
  6. Migrate VMs in a proxmox pool to use krbd instead of rbd

    I have a proxmox 6.4 pve pool that accesses an external ceph Nautilus cluster for rbd storage (in an erasure-coded ceph pool). All of the roughly 80 VMs are currently *not* using krbd to access their rbds. Recently I "rediscovered" krbd, and now I am thinking about how to migrate existing VMs so that they will...
  7. Snapshot Problems

    Thanks for your answer. I did a poweroff and the problem was gone.
  8. Snapshot Problems

    Hello, today I wanted to create a VM snapshot. The storage backend is a separate ceph cluster, so it's not hyperconverged. I forgot to deactivate the RAM snapshot option and started; it slowly processed the VM's RAM, so I decided to cancel the snapshot. Afterwards I had a locked VM with am...
  9. cannot remove templates any longer after upgrade to pve 6.4

    After more searching I found this posting: https://forum.proxmox.com/threads/rbd-error-rbd-listing-images-failed-2-no-such-file-or-directory-500.66866/ So I also looped over all rbds in the pool with the template I could not delete: for i in `rbd -p pxa-rbd ls`; do echo "**** $i"; rbd -p...
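    The loop in the snippet is cut off mid-command. A hedged reconstruction of that kind of per-image probe (the pool name comes from the post; the `info` subcommand is an assumption about how the truncated command continued):

    ```shell
    # Walk all RBD images in pool 'pxa-rbd' and query each one; an image
    # whose header objects are missing will make 'rbd info' fail with
    # "No such file or directory" and stand out from the rest.
    for i in $(rbd -p pxa-rbd ls); do
        echo "**** $i"
        rbd -p pxa-rbd info "$i"
    done
    ```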
  10. cannot remove templates any longer after upgrade to pve 6.4

    This morning I upgraded my pve cluster (5 hosts) from 6.3 to PVE 6.4. Basically everything works fine except for one detail: I cannot delete VM templates. If I try, I get an error message saying: TASK ERROR: rbd error: rbd: listing images failed: (2) No such file or directory. My pve-version is...
  11. was: cannot remove templates any longer after upgrade to pve 6.4

    Sorry, I selected the completely wrong forum. Reposted the article in Proxmox Virtual Environment -> Proxmox VE: Installation and configuration. Sorry.
  12. removing a template: error during cfs-locked 'storage-ceph' operation: rbd snap purge

    Hello, since today I have a strange problem with my proxmox installation. The storage backend used is ceph Nautilus. All VMs are created from templates as full clones. Since today, it seems that whenever I delete a VM template I get this kind of error message: Removing all snapshots: 0%...
  13. Problems shutting down a Win 10 VM

    I have now reinstalled Win10 completely from scratch and updated it. My problem was that I had not first enabled the guest agent under the Windows VM's "Options" in Proxmox, but instead thought I had to install the driver first. That is why the Simple PCI Device did not show up for me. I have...
  14. Problems shutting down a Win 10 VM

    Unfortunately that did not help. I am setting up the Win10 test machine again from scratch and will take the latest drivers right away. I'll let you know whether it works then...
  15. Problems shutting down a Win 10 VM

    Yes, that is exactly the wiki page I used as a guide. However, I had to change the order: first install and enable the guest agent, and then supply the PCI Simple Communications Device with the vioserial driver. I had originally used the stable version...
  16. Problems shutting down a Win 10 VM

    Hello, I have installed proxmox 6.1-3 and, as a test, set up a Windows 10 VM and brought it to the current patch level. I have also installed the guest agent and enabled it in proxmox. A qm agent <vm-id> ping works without errors on the node the VM runs on. The...
  17. Security of resource pools

    Hello hitman, no, that is not what I meant. That thread is about managing proxmox with software you developed as an addition to the standard web interface. At the moment I am still on the standard web interface... My question is whether a user with just the standard...
  18. Security of resource pools

    Hello, I recently installed a test cluster of three machines with Proxmox VE 6. I like the interface very much. The storage backend is a dedicated CEPH cluster. We would like to use Proxmox to give different working groups largely self-managed control of their own VMs...