Recent content by skydiablo

  1. [SOLVED] Issue with Listing VM Disks and CT Volumes in Ceph Pool via GUI

    Oh, I had already asked this before: https://forum.proxmox.com/threads/ceph-rbd-storage-zeigt-rbd-error-listing-images-failed-2-no-such-file-or-directory-500-%E2%80%93-trotz-funktionierendem-ceph-und-korrektem-keyring.168447/ It is fixed now!
  2. [SOLVED] Issue with Listing VM Disks and CT Volumes in Ceph Pool via GUI

    Hello everyone, I am currently experiencing an issue with my Ceph cluster and I was hoping someone might be able to provide some insight or guidance. In my Ceph cluster, I have two pools: "volumes" and "volumes_ssd," each with its own CRUSH rule. In Proxmox Virtual Environment 8.4.5, I can...
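
    For anyone hitting the same GUI error, a quick sanity check is to list the pools outside the GUI, both with the Ceph client and with pvesm. A minimal sketch, assuming the PVE storage entries are named after the pools (adjust the names to your storage.cfg):

    # rbd ls --pool volumes          # list the RBD images directly via Ceph
    # rbd ls --pool volumes_ssd
    # pvesm list volumes             # list the same storage through the PVE storage layer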
  3. [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    I got it! It was an MTU mismatch between the interface and the switch port. Thanks for your attention! Back reference: https://forum.proxmox.com/threads/connectx-4-ceph-cluster-network-issue.49255/
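
    For anyone debugging something similar, checking the MTU on both ends is quick. A minimal sketch, where ens18 stands in for the actual Ceph interface and 8972 is the largest ICMP payload that fits through a 9000-byte MTU path:

    # ip -d link show ens18                  # shows the MTU configured on the interface
    # ping -M do -s 8972 <ceph-node-ip>      # "do" forbids fragmentation, so an MTU mismatch fails immediately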
  4. [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    I have tested all possible solutions, but I cannot identify the source of the problem. To recap: I have 3 servers running as a Ceph cluster and 2 other servers dedicated to VMs. This is the existing setup, running on the latest versions (PVE 8.3 and Ceph Squid), and everything works fine. I...
  5. [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    I have reinstalled a server and added it to the cluster, but there is no way to access any Ceph storage?
  6. [SOLVED] Ceph Mount Issues on Additional PVE Cluster Servers – Need Help

    Hello! I have a PVE cluster consisting of 8 servers that has been running for quite some time (since PVE v5 days) and always kept up to date. A big shoutout to the developers—everything runs very smoothly! Three of the servers act as a Ceph cluster (with 10 disks each), and two additional...
  7. [TUTORIAL] USB Automount

    Is this also available for the Proxmox Backup Server?
  8. second backup to an external drive

    Hi, I'm running a PBS with no errors, everything works fine! Now I have attached a USB drive to my server, added the drive via ZFS, and got a new datastore. All fine, but now I want to store on this new datastore the backups I have already stored on the first datastore. Maybe just the latest snapshot, so the...
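
    A sync job is the usual PBS mechanism for copying snapshots from one datastore to another. A minimal sketch, assuming a remote called local-pbs has already been configured that points back at this same host, and with store1 and usb-store as placeholder datastore names:

    # proxmox-backup-manager sync-job create usb-copy \
          --store usb-store --remote local-pbs --remote-store store1 \
          --schedule daily

    If I remember correctly, newer PBS releases also offer a transfer-last setting on sync jobs, which limits the copy to the most recent snapshots and would match the "maybe just the latest snapshot" idea.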
  9. ceph MDS: corrupt sessionmap values: Corrupt entity name in sessionmap: Malformed input

    For now, I have tried this:

    # systemctl stop ceph-mds@pve04.service
    # cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
    # cephfs-journal-tool --rank=cephfs:0 journal reset
    # cephfs-table-tool all reset session
    # systemctl start ceph-mds@pve04.service
    # ceph mds repaired 0

    and...
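
    After a reset like that, it is worth watching whether the MDS actually becomes active again and the filesystem leaves its degraded state; these are just the standard status commands:

    # ceph -s
    # ceph fs status
    # ceph mds stat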
  10. ceph MDS: corrupt sessionmap values: Corrupt entity name in sessionmap: Malformed input

    Hi! My CephFS is broken and I cannot recover the MDS daemons. Yesterday I updated PVE v6 to v7 and my Ceph cluster from v15 to v16, and I thought everything was working fine. The next day (today) some of my services went down and threw errors, so I dug in and found that my CephFS is down and cannot restart...
  11. [SOLVED] ZFS migration -vs- replication

    Okay, solved! The thing really has to have the same name everywhere! The ZFS pool has to be named identically, and then it is enough to create the storage only once and limit it to the prepared servers. It is running now; sometimes someone just has to give you the nudge to think it through. Many thanks for that! Ergo, I...
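
    For reference, the resulting entry in /etc/pve/storage.cfg then only needs to exist once. A minimal sketch, assuming the pool is called local-ssd on both nodes and the storage is limited to pve07 and pve08:

    zfspool: local-ssd
            pool local-ssd
            content images,rootdir
            nodes pve07,pve08
            sparse 1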
  12. [SOLVED] ZFS migration -vs- replication

    I agree with you there, and that is how I wanted to do it as well, BUT: I create my ZFS pools on both of my servers (in the current case with different names). If I now want to create a ZFS storage, I have to specify the corresponding pool. So I go to the server (GUI) pve08 with...
  13. [SOLVED] ZFS migration -vs- replication

    Now I have deleted all the pools again via zpool destroy local-ssd and fdisk /dev/sdX and removed the storages via the GUI. I have now created a new pool (two SSDs as a mirror) on each of the two servers (pve07 + pve08), and this time I used a different pool name for each server...
  14. [SOLVED] ZFS migration -vs- replication

    Hi, I have two VM servers with an identical version:

    # pveversion
    pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)

    Now I have fitted two new SSDs into each of them and defined a pool on top of them as a ZFS mirror:

    # zpool status
      pool: local-ssd
     state: ONLINE
      scan...
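
    For context, a mirrored pool like this is typically created along these lines; the by-id device paths are placeholders, and ashift=12 is just the usual choice for SSDs with 4K sectors:

    # zpool create -o ashift=12 local-ssd mirror \
          /dev/disk/by-id/ata-SSD-ONE /dev/disk/by-id/ata-SSD-TWO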
  15. Using CIFS/NFS as datastore

    Hi! All of these approaches seem a little bit hacky. Why does PBS not support remote storage by default? Is this a planned upcoming feature for the near future? Regards, Volker.
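
    For completeness, one of the approaches being discussed presumably boils down to mounting the share on the PBS host and pointing a datastore at the mount point. A minimal sketch with placeholder server address, export path and datastore name:

    # mount -t nfs 192.168.1.10:/export/backups /mnt/nfs-backups
    # proxmox-backup-manager datastore create nfs-store /mnt/nfs-backups

    The mount of course has to be made persistent (via /etc/fstab or a systemd mount unit) so the datastore is available again after a reboot.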