Search results

  1. Multiple problems installing multiple OS's

    Clean standard install of Debian; upon reboot I am greeted with this bs. It would be advisable to read INSIDE the damn disk and not outside!!! Who comes up with these funny ideas? If UEFI is selected, everything starts to flicker nicely with deb 11. (/S!) Why does stuff like this keep happening?
  2. All VMs locking up after latest PVE update

    We ran into the same problem today on 7.0-13; it also seems to correlate with Proxmox Backup Server. Any fixes?
  3. Feature request / Help Request | Shutdown after sync/backup job completes

    Hi, is there a way to have PBS shut down after a remote sync is complete? I would like the server to shut down and auto-start (via BIOS) to pull the sync down, then shut down again. I see no reason to have the second off-site server powered on longer than required, due to security concerns...
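
    A minimal sketch of one way to do this, run from cron on the off-site box after the BIOS RTC alarm powers it on. "offsite" and "store1" are placeholder names, and the pull-then-poweroff flow is an assumption about the setup, not a built-in PBS feature:

      #!/bin/sh
      # Placeholder names: "offsite" = configured remote, "store1" = datastore.
      set -e
      # Pull the remote datastore into the local one.
      proxmox-backup-manager pull offsite store1 store1
      # set -e aborts above on failure, so we only power off after a good sync.
      systemctl poweroff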
  4. Poor write performance on ceph backed virtual disks.

    o/ Yeah, but as soon as I run a backup at 200 MB/s, Exchange crawls almost to a stop :/ Weak for 36 SSDs in 6 hosts.
  5. Poor write performance on ceph backed virtual disks.

    You are lucky! You get 50 MB/s over 1 Gb... I get that over 10 GbE!!! 2x switches in MLAG with LACP (layer 2+3 hash) for the public network; 2x switches in MLAG with LACP (layer 2+3 hash) for the cluster network. Super performance! 36x 1-2 TB consumer SSDs on 6 hosts. All hosts have Zen 2 CPUs.
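
    A quick way to separate raw Ceph capability from the backup path is a rados bench baseline. A sketch, assuming a throwaway pool named "testpool":

      # 60s of 4 MiB writes (default object size), 16 in flight:
      rados bench -p testpool 60 write -t 16 --no-cleanup
      # 60s of 4 KiB writes, closer to what VM disks generate:
      rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
      # Remove the benchmark objects afterwards:
      rados -p testpool cleanup

    Worth noting: consumer SSDs without power-loss protection are a known weak spot for Ceph's synchronous writes, which may account for much of the gap.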
  6. UI update suggestion

    A unified single interface for all three proxducts. (Say that ten times fast! :) Just kinda "copy/paste" the two other UIs under their own sections :)
  7. ???? permissions of a file when the job is done?

    drive-scsi3: transferred 450.0 GiB of 450.0 GiB (100.00%) in 6m 25s, ready
    all 'mirror' jobs are ready
    drive-scsi3: Completing block job_id...
    drive-scsi3: Completed successfully.
    drive-scsi3: mirror-job finished
    TASK ERROR: storage migration failed: unable to open file...
  8. ???? permissions of a file when the job is done?

    Thanks to this typical proxmox error I have tried to migrate the damn disk twice, with the same useless result! Why does stuff like this keep happening on freshly installed clusters? I don't get it...
    drive-scsi3: transferred 446.8 GiB of 450.0 GiB (99.28%) in 10m 19s
    drive-scsi3: transferred...
  9. SSD Temperature issues smartd

    Hi, we have 4 servers with various SSDs from various vendors. Today I observed something interesting in the syslog of all servers, which I went to investigate. I even went so far as to send a trainee to the server room, ready to pull the disk I specified when the syslog entry appeared, to check...
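
    For reference, smartd's temperature reporting is driven by the -W DIFF,INFO,CRIT directive in /etc/smartd.conf; a sketch with illustrative thresholds:

      # Log when any disk's temperature changes by >=4C, passes 35C (info)
      # or 40C (critical), and mail alerts to root:
      DEVICESCAN -a -W 4,35,40 -m root
      # then: systemctl restart smartd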
  10. No cluster defined? LIES!

    I wonder what happens if I click "Create Cluster" and go ahead with it... Smells like breakage!
  11. No cluster defined? LIES!

    So... what are those 5 nodes pretending to be part of? Why does this "still" happen on cleanly installed clusters? It has been happening ever since 7 was released... One node down and all hell breaks loose. Can't SSH into the hosts because the login hangs. The web interface won't log you in because the...
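
    Those symptoms are consistent with quorum loss: without quorum, pmxcfs blocks writes and anything in the login path that touches /etc/pve can hang. A triage sketch using standard PVE tooling:

      pvecm status                            # expected vs. actual votes, quorate or not
      systemctl status corosync pve-cluster   # pve-cluster is the pmxcfs service
      journalctl -u corosync -u pve-cluster --since "1 hour ago"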
  12. CEPH "22 OSD(s) have broken BlueStore compression"

    A nice little yellow warning appeared on my Ceph pool after having enabled and subsequently disabled lz4 compression. What does this mean? The pool runs "fine", but how do I get rid of this error?
      22 OSD(s) have broken BlueStore compression
      osd.0 unable to load:none
      osd.1 unable to load:none...
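
    The "unable to load:none" part suggests the OSDs were left pointing at a compressor name they cannot load. A hedged starting point ("mypool" is a placeholder; verify against the Ceph docs for your release):

      ceph health detail
      ceph config dump | grep -i compression
      # Clearing the pool-level setting may remove the warning:
      ceph osd pool set mypool compression_mode none
      # If the algorithm was set in the config db, it can be removed there too:
      ceph config rm osd bluestore_compression_algorithm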
  13. Shutdown applied to all nodes?!

    The man page is flawed and omits important data... "Destroy ceph related data and configuration files." (on the entire cluster!)
  14. Reinstall CEPH on Proxmox 6

    You improved it even more in version 7. So when you run pveceph purge on node #6, the configuration vanishes from all other nodes, essentially destroying our ceph cluster... I clap my hands at such excellent software behaviour! Worst of all, after having done that, guess what... the problem that...
  15. pveceph purge wipes entire ceph cluster, not just a host.

    root@pve03:/var/lib# pveceph purge
    unable to get monitor info from DNS SRV with service name: ceph-mon
    2021-10-21T22:58:30.542+0200 7f5b48667280 -1 failed for service _ceph-mon._tcp
    2021-10-21T22:58:30.542+0200 7f5b48667280 -1 monclient: get_monmap_and_config cannot identify monitors to...
  16. pveceph purge wipes entire ceph cluster, not just a host.

    After fighting with PURGING ceph... eventually I could install and configure it on a node... and surprise, surprise... the proxmox way! I've had it with this bs.
  17. pveceph purge wipes entire ceph cluster, not just a host.

    Now I have to go through this: https://forum.proxmox.com/threads/sda-has-a-holder.97771/#post-423005 for every single OSD! ANNOYED!
  18. pveceph purge wipes entire ceph cluster, not just a host.

    WHYYYYYYYYYYYYYYYYYYY...... pveceph purge should have taken care of this!!!!!!
    rm -rf /var/lib/ceph/
    rm: cannot remove '/var/lib/ceph/osd/ceph-13': Device or resource busy
    rm: cannot remove '/var/lib/ceph/osd/ceph-9': Device or resource busy
    rm: cannot remove '/var/lib/ceph/osd/ceph-18': Device...
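
    The ceph-N directories are typically tmpfs mountpoints created when the OSDs were activated, which is why rm reports "Device or resource busy". A sketch of the unmount-first approach (check the mounts with findmnt before deleting anything):

      for m in /var/lib/ceph/osd/ceph-*; do umount "$m"; done
      rm -rf /var/lib/ceph/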
  19. pveceph purge wipes entire ceph cluster, not just a host.

    Yep, the proxmox install is screwed up. pveceph purge does not even do its job of cleaning up! Hilarious, really! Unable to create a new ceph cluster... reinstall again, again, again... And the documentation on how to fix this is of course also lacking.
  20. Removing Ceph Completely

    Nah, pveceph purge just wipes the ceph cluster on all servers if run from one. Which is kinda epic, since the documentation does not state this. It even does a piss-poor job at removing ceph, leaving you with a broken installation where once again you have to Proxmox-tinker your way to a solution...