Recent content by skydiablo

  1.

    How to configure HA to shut down specific VMs before migration in Proxmox VE 9.1?

    Hi everyone, I’m running Proxmox VE 9.1 with a multi-node HA cluster. Most of my VMs can be live-migrated without issues, but some VMs (due to local storage or other constraints) need to be shut down before migration to avoid errors or data corruption. Is there a way to configure HA for...
  2.

    [SOLVED] HA migration back to the previous node

    Hi, in my experience, if you simply set the priority to the same value for all nodes and set failback=1, you can move the VMs around freely, and after a failure the VMs come back to where they were.
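The advice above can be expressed as an HA group definition. A minimal sketch of /etc/pve/ha/groups.cfg, assuming a PVE release that still uses HA groups and three hypothetical node names; `nofailback 0` is the config-level equivalent of failback=1, and an equal priority (`:1`) on every node means no node is preferred:

```
group: anynode
        comment equal priority everywhere, failback enabled
        nodes pve01:1,pve02:1,pve03:1
        nofailback 0
        restricted 0
```

With this in place, a manually migrated VM stays where you put it, and after a node failure and recovery the HA manager moves the VM back to its previous node.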
  3.

    [SOLVED] Issue with Listing VM Disks and CT Volumes in Ceph Pool via GUI

    Oh, this was already asked before: https://forum.proxmox.com/threads/ceph-rbd-storage-zeigt-rbd-error-listing-images-failed-2-no-such-file-or-directory-500-%E2%80%93-trotz-funktionierendem-ceph-und-korrektem-keyring.168447/ FIXED!
  4.

    [SOLVED] Issue with Listing VM Disks and CT Volumes in Ceph Pool via GUI

    Hello everyone, I am currently experiencing an issue with my Ceph cluster and I was hoping someone might be able to provide some insight or guidance. In my Ceph cluster, I have two pools: "volumes" and "volumes_ssd," each with its own CRUSH rule. In Proxmox Virtual Environment 8.4.5, I can...
  5.

    [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    I got it! It was an MTU mismatch between the interface and the switch port. Thanks for your attention! Back reference: https://forum.proxmox.com/threads/connectx-4-ceph-cluster-network-issue.49255/
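For anyone hitting the same symptom: an MTU mismatch can be confirmed with plain `ip` and `ping` before digging into Ceph itself. A minimal sketch (interface name and peer address are placeholders, not values from the thread):

```shell
# Largest ICMP payload that fits in one unfragmented packet:
# MTU minus 20-byte IP header minus 8-byte ICMP header.
max_icmp_payload() {
  local mtu=$1
  echo $(( mtu - 20 - 8 ))
}

max_icmp_payload 9000    # jumbo frames -> 8972
max_icmp_payload 1500    # standard MTU -> 1472

# On a live system, check what the NIC is actually set to:
#   ip -br link show enp1s0
# and verify the path end to end with "don't fragment" set;
# this fails immediately if any hop (e.g. the switch port) has a smaller MTU:
#   ping -M do -s "$(max_icmp_payload 9000)" <ceph-peer-ip>
```

If the ping with `-M do` fails while a plain ping succeeds, some device on the path is dropping or fragmenting jumbo frames, which matches the interface/switch-port mismatch found here.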
  6.

    [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    I have tested all possible solutions, but I cannot identify the source of the problem. To recap: I have 3 servers running as a Ceph cluster, and 2 other servers dedicated to VMs. This is the existing setup, running on the latest versions (PVE 8.3 and Ceph Squid), and everything works fine. I...
  7.

    [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    I have reinstalled a server and added it to the cluster, but there is no way to access any Ceph storage?
  8.

    [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    Hello! I have a PVE cluster consisting of 8 servers that has been running for quite some time (since PVE v5 days) and always kept up to date. A big shoutout to the developers—everything runs very smoothly! Three of the servers act as a Ceph cluster (with 10 disks each), and two additional...
  9.

    [TUTORIAL] USB Automount

    Is this also available for Proxmox Backup Server?
  10.

    second backup to an external drive

    Hi, I'm running a PBS with no errors; everything works fine! Now I have attached a USB drive to my server, added the drive via ZFS, and got a new datastore. All fine, but now I want to store on this new datastore the backups I have already stored on the first datastore. Maybe just the latest snapshot, so the...
  11.

    ceph MDS: corrupt sessionmap values: Corrupt entity name in sessionmap: Malformed input

    For now, I have tried this:

    # systemctl stop ceph-mds@pve04.service
    # cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
    # cephfs-journal-tool --rank=cephfs:0 journal reset
    # cephfs-table-tool all reset session
    # systemctl start ceph-mds@pve04.service
    # ceph mds repaired 0

    and...
  12.

    ceph MDS: corrupt sessionmap values: Corrupt entity name in sessionmap: Malformed input

    Hi! My CephFS is broken and I cannot recover the MDS daemons. Yesterday I updated PVE v6 to v7 and my Ceph cluster from v15 to v16, and I thought everything was working fine. The next day (today) some of my services went down and threw errors, so I dug in and found that my CephFS is down and cannot restart...
  13.

    [SOLVED] ZFS migration -vs- replication

    Okay, solved! The thing really has to have the same name everywhere: the ZFS pool must be named identically on every node, and then it is enough to create the storage only once and restrict it to the prepared servers. It works now; sometimes you just need someone to give you the nudge. Many thanks for that! Ergo, I...
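The fix above can be sketched as a single entry in /etc/pve/storage.cfg: one storage definition, a pool carrying the identical name on both nodes, and the storage restricted to exactly those nodes. The pool name and node names mirror the ones mentioned in this thread, but treat the exact values as an illustration:

```
zfspool: local-ssd
        pool local-ssd
        content images,rootdir
        sparse 1
        nodes pve07,pve08
```

Because both pve07 and pve08 expose a pool literally named `local-ssd`, this one storage entry is valid on both nodes, which is what makes replication and migration between them work.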
  14.

    [SOLVED] ZFS migration -vs- replication

    I agree with you there, and that is how I wanted to do it, BUT: I create my ZFS pools on both of my servers (in the current case with different names). If I now want to create a ZFS storage, I have to specify the corresponding pool. So I go to the server (GUI) pve08 with...
  15.

    [SOLVED] ZFS migration -vs- replication

    Now I have deleted all the pools again via zpool destroy local-ssd and fdisk /dev/sdX, and removed the storages via the GUI. Then I created a new pool on each of the two servers (pve07 + pve08), each from two SSDs in a mirror, and this time used a different pool name for each server...