Recent content by Jeff Wadsworth

  1. proxmox ve 6.3.6 dies with kernel panic once a day

    It would be great to see your hardware for the hypervisor as well. Is this your computer? https://origin-www.asus.com/Tower-PCs/M51AC/
  2. Proxmox and Ceph - Module 'volumes' has failed dependency: No module named 'distutils.util'

    Running apt install python3-distutils on the node fixed my issue. Thanks.
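
    A minimal sketch of that fix, in case it helps someone else (the ceph-mgr restart is my assumption; the module may also recover on its own):

        # Install the missing distutils module, then restart the manager daemon.
        apt install python3-distutils
        systemctl restart ceph-mgr.target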
  3. Clearing Ceph OSD partition

    Or even better, use gdisk: run gdisk /dev/sdX (where X is the drive letter, e.g. /dev/sdf), then enter 'x' for extra commands and 'z' to zap.
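
    A sketch of that gdisk session, assuming /dev/sdf is the OSD disk to wipe (destructive, so double-check the device name first):

        # DESTRUCTIVE: zap erases the GPT and protective MBR on the disk.
        gdisk /dev/sdf
        # Command (? for help): x          <- open the extra/expert menu
        # Expert command (? for help): z   <- zap (destroy) GPT data structures
        # About to wipe out GPT on /dev/sdf. Proceed? (Y/N): y
        # Blank out MBR? (Y/N): y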
  4. Difficulty transferring ISOs to Proxmox

    MEMTEST86's failure to flag the bad RAM is somewhat concerning. It would be interesting to know what the issue with the stick/slot was.
  5. Disable Suspicious Links :rolleyes:

    He/She might be referring to the email product.
  6. Using Nvidia Tesla K20X with Proxmox

    See https://pve.proxmox.com/wiki/Pci_passthrough, about halfway down the page: machine: q35.
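
    Roughly, on the CLI (VMID 100 and PCI address 01:00.0 are placeholders; find the card's address with lspci, and the wiki's IOMMU prerequisites still apply):

        # Switch the VM to the q35 machine type, then pass the GPU through.
        qm set 100 --machine q35
        qm set 100 --hostpci0 01:00.0,pcie=1   # pcie=1 needs the q35 machine type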
  7. [SOLVED] How to set up a vm using on a RBD storage

    Boot the new VM with live media and investigate the restored disk, mainly the boot sector and partitions; compare its contents with a working VM.
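
    For example, from the live environment (assuming the restored disk shows up as /dev/sda; adjust the device name to match):

        fdisk -l /dev/sda                           # partition table layout
        dd if=/dev/sda bs=512 count=1 | xxd | less  # raw boot sector (MBR)
        file -s /dev/sda1                           # filesystem on the first partition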
  8. Ceph purge leave some traces behind, can't reconfigure cluster

    You could try the steps here, but there's no guarantee: https://forum.proxmox.com/threads/reinstall-ceph-on-proxmox-6.57691/ Also, for recovering the entire Proxmox server to an earlier stable state: https://forum.proxmox.com/threads/using-zfs-snapshots-on-rpool-root-pve-1.27530/
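
    Very roughly, the cleanup in that thread goes along these lines (not its exact steps, and destructive; only do this if you intend to wipe all Ceph state on the node):

        pveceph purge                                  # drop the node's Ceph configuration
        apt purge ceph-mon ceph-osd ceph-mgr ceph-mds  # remove the daemon packages
        rm -rf /etc/ceph /var/lib/ceph                 # clear leftover state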
  9. Reinstall CEPH on Proxmox 6

    Hello, just in case you have to reinstall, you may find this "rollback" setup useful for getting back to your previous state. https://forum.proxmox.com/threads/using-zfs-snapshots-on-rpool-root-pve-1.27530/
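
    The gist of that setup, assuming the default Proxmox ZFS root dataset rpool/ROOT/pve-1 (per the thread title); rolling back the running root is best done from a rescue boot:

        zfs snapshot rpool/ROOT/pve-1@pre-ceph-reinstall   # take the safety snapshot first
        zfs rollback rpool/ROOT/pve-1@pre-ceph-reinstall   # return to it if the reinstall goes wrong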
  10. [SOLVED] Shutting down any node makes VMs unavailable

    I am running some tests on a fresh install of 5.4. So far, the VMs work fine with the loss of 1 node in a 3-node cluster with Ceph (3 OSDs per node), using an OSD pool size/min_size of 3/1 for the test. If it was 3/2, even one node going offline would halt your VMs. Is your VM using the ceph storage for its...
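
    A sketch of how such a 3/1 pool would be set for the test (the pool name 'rbd' is a placeholder):

        ceph osd pool set rbd size 3      # keep 3 replicas
        ceph osd pool set rbd min_size 1  # stay writable with only 1 replica left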
  11. [SOLVED] Shutting down any node makes VMs unavailable

    What is your OSD pool's default size? And the min_size? If you shut off a node, what is the status of Ceph?
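
    These can be checked with (again, pool name 'rbd' as a placeholder):

        ceph osd pool get rbd size       # replica count
        ceph osd pool get rbd min_size   # minimum replicas needed for I/O
        ceph -s                          # overall cluster health after the node goes down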
  12. Backups getting slower, extremely slow already

    Manny, are you still using USB drives for your backups? I believe this is a very bad idea and may lead to file corruption issues. There are some threads on here directly related to this issue.