Search results

  1. How to configure HA to shut down specific VMs before migration in Proxmox VE 9.1?

    Hi everyone, I’m running Proxmox VE 9.1 with a multi-node HA cluster. Most of my VMs can be live-migrated without issues, but some VMs (due to local storage or other constraints) need to be shut down before migration to avoid errors or data corruption. Is there a way to configure HA for...
  2. [SOLVED] Issue with Listing VM Disks and CT Volumes in Ceph Pool via GUI

    Hello everyone, I am currently experiencing an issue with my Ceph cluster and I was hoping someone might be able to provide some insight or guidance. In my Ceph cluster, I have two pools: "volumes" and "volumes_ssd," each with its own CRUSH rule. In Proxmox Virtual Environment 8.4.5, I can...
  3. [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    Hello! I have a PVE cluster consisting of 8 servers that has been running for quite some time (since PVE v5 days) and always kept up to date. A big shoutout to the developers—everything runs very smoothly! Three of the servers act as a Ceph cluster (with 10 disks each), and two additional...
  4. Second backup to an external drive

Hi, I'm running a PBS with no errors; everything works fine! Now I have attached a USB drive to my server, added the drive via ZFS, and got a new datastore. All fine, but now I want to store on this new datastore the backups I have already stored on the first datastore. Maybe just the latest snapshot, so the...
  5. Ceph MDS: corrupt sessionmap values: Corrupt entity name in sessionmap: Malformed input

Hi! My CephFS is broken and I cannot recover the MDS daemons. Yesterday I updated PVE v6 to v7 and my Ceph cluster from v15 to v16, and I thought everything was working fine. The next day (today) some of my services went down and threw errors, so I dug in and found my CephFS is down and cannot be restarted...
  6. [SOLVED] ZFS migration vs. replication

Hi, I have 2 VM servers with identical versions: # pveversion pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve). Now I have fitted two new SSDs into each and defined a pool on them as a ZFS mirror: # zpool status pool: local-ssd state: ONLINE scan...
  7. VM on Ceph storage

Hello, I have various VMs running on my Ceph storage (basically no problems there). Now I have a VM with a piece of software (']SCI) that is causing problems. The vendor says my storage is too slow, and now I'm not quite sure what to make of that. The...
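
For result 1, a minimal sketch of the two ways ha-manager moves a resource; the VM ID 100 and target node "node2" are placeholders. Whether PVE 9.1 can automate the stop-before-move per VM is exactly what the thread asks; the relocate command below is the manual equivalent:

    # Live-migrate an HA-managed VM to another node:
    ha-manager migrate vm:100 node2

    # Relocate instead: HA stops the VM, moves it, and starts it on the
    # target node - the safe path for VMs that must not be live-migrated:
    ha-manager relocate vm:100 node2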
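
For result 2, when the GUI fails to list disks in a Ceph pool, the same information can be read from the CLI; "volumes" is the pool name from the post, while "ceph-vm" is an assumed PVE storage ID:

    # List RBD images directly in the pool:
    rbd ls -p volumes

    # List the same volumes as the PVE storage layer sees them:
    pvesm list ceph-vm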
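
For result 3, a sketch for checking Ceph access from the non-Ceph cluster nodes; the monitor address, mountpoint, and secret file path are assumptions:

    # Confirm the node can reach the monitors with a valid keyring:
    ceph -s

    # Mount CephFS manually with the kernel client to rule out the PVE
    # storage layer:
    mount -t ceph 10.0.0.1:6789:/ /mnt/test \
        -o name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret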
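
For result 4, one way to copy existing backups onto the new datastore is a sync job with a local source (supported in recent PBS releases, where the --remote parameter may be omitted); "store1" and "usb-store" are the assumed datastore names, and --transfer-last limits the copy to the newest snapshot per group:

    # Sync the first datastore into the USB-backed one, newest snapshot only:
    proxmox-backup-manager sync-job create usb-copy \
        --store usb-store --remote-store store1 \
        --transfer-last 1 --schedule daily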
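
For result 5, the corrupt-sessionmap symptom after an Octopus-to-Pacific upgrade is usually addressed with the CephFS disaster-recovery tooling; this is only the session-table step of that procedure, it is destructive for open client sessions, and the upstream recovery documentation should be read first ("cephfs" is the assumed filesystem name):

    # With the MDS daemons stopped, reset the corrupt session table:
    cephfs-table-tool all reset session

    # Allow MDS daemons to join the filesystem again, then restart them:
    ceph fs set cephfs joinable true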
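
For result 6, the practical difference: migration moves a disk once, while pvesr keeps an asynchronous ZFS copy on the other node so later migrations only send the delta. A minimal job for an assumed VM 100 and target node "pve2":

    # Replicate VM 100's ZFS volumes to pve2 every 15 minutes:
    pvesr create-local-job 100-0 pve2 --schedule "*/15"

    # Check replication state and last sync times:
    pvesr status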
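
For result 7, whether the storage really is "too slow" can be measured rather than argued; rados bench writes test objects into a pool ("vm-pool" is a placeholder) and reports throughput and latency:

    # 60-second write benchmark (4 MiB objects by default); keep the
    # objects for the read test:
    rados bench -p vm-pool 60 write --no-cleanup

    # Sequential read benchmark over the objects written above:
    rados bench -p vm-pool 60 seq

    # Remove the benchmark objects afterwards:
    rados -p vm-pool cleanup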