Recent content by Magneto

  1. 2FA through WAN but not LAN

    I am running port forwarding through pfSense and only allow my local IP address through.
  2. Proxmox Disk Installation Requirements

    I have some 4x 3.5" HDD chassis, so the only way to properly utilize them is to use USB SSD drives.
  3. Proxmox Disk Installation Requirements

    So, are you implying that when the system journal and other log entries are written to a remote syslog server, one could use USB drives, especially USB SSD drives?
  4. 2FA through WAN but not LAN

    Did you ever get this working?
  5. Proxmox Disk Installation Requirements

    Surely this wouldn't be a problem if syslog and other logging are done on an rsyslog server?
  6. Getting rid of watchdog emergency node reboot

    This is rather concerning. How does one set up high availability for VMs, so they auto-restart when a host node fails, if HA breaks the whole cluster? (see the ha-manager sketch after this list)
  7. Reduced data availability: 40 pgs inactive, 42 pgs incomplete

    ceph osd df tree: root@PVE2:~# ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -1 29.71176 - 16 TiB 6.9 TiB 6.9 TiB 91 MiB 31 GiB 8.9 TiB 0 0 - root...
  8. Reduced data availability: 40 pgs inactive, 42 pgs incomplete

    root@PVE1:~# ceph health detail HEALTH_WARN Reduced data availability: 40 pgs inactive, 42 pgs incomplete; Degraded data redundancy: 278376/3351945 objects degraded (8.305%), 35 pgs degraded, 36 pgs undersized; 34 slow ops, oldest one blocked for 13483 sec, daemons...
  9. Reduced data availability: 40 pgs inactive, 42 pgs incomplete

    In a 5-node cluster, I had to replace some failed SSDs and now the Ceph cluster is stuck with "Reduced data availability: 40 pgs inactive, 42 pgs incomplete" Reduced data availability: 40 pgs inactive, 42 pgs incomplete pg 2.57 is incomplete, acting [1,35,14] (reducing pool CephFS_data...
  10. New all flash Proxmox Ceph Installation

    As a matter of interest, did you partition your drives? And what were your findings?
  11. VM cloning is slow

    Please explain: what is a linked clone?
  12. shared WAL between CEPH OSD's?

    Do I need to use one WAL per OSD if I use spinning disks?
  13. bad ceph performance on SSD

    Did you ever get to the bottom of this?
  14. Multiple passthrough disk to VM

    How exactly does one pass through an SSD from the host node to a VM? (see the sketch after this list)
  15. shared WAL between CEPH OSD's?

    What would happen if the WAL disk fails?
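
Re items 3 and 5 above: a minimal sketch of shipping logs off the boot device, assuming a reachable rsyslog host (logserver.example.com and the port are placeholders; nothing here is Proxmox-specific).

    # /etc/rsyslog.conf on the node: forward all messages to a remote rsyslog server over TCP
    *.* @@logserver.example.com:514

    # /etc/systemd/journald.conf: keep the journal in RAM and hand entries to syslog
    [Journal]
    Storage=volatile
    ForwardToSyslog=yes

Even with this in place, Proxmox itself (the pve-cluster sqlite database, RRD metrics) still writes to the boot device, so whether a USB SSD holds up is a separate question.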
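
Re item 6: HA in Proxmox is per-resource, not cluster-wide, so a VM only auto-restarts on another node if it has been added as an HA resource. A minimal ha-manager sketch, assuming VM 100 already exists (the VMID is a placeholder):

    # register VM 100 with the HA manager and ask for it to be kept running
    ha-manager add vm:100 --state started
    # show HA resource and node status
    ha-manager status

The watchdog reboot discussed in that thread is the fencing side of the same mechanism: a node that loses quorum is expected to self-fence so its HA resources can be recovered elsewhere.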
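
Re items 7-9: a hedged diagnostic sketch for inactive/incomplete PGs after OSD replacement (the PG id 2.57 is taken from the excerpt above); querying the PG usually shows which OSDs it is still waiting for.

    # list PGs stuck inactive
    ceph pg dump_stuck inactive
    # inspect one incomplete PG; look at its recovery_state for the blocking OSDs
    ceph pg 2.57 query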
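
Re items 12 and 15: with BlueStore, each OSD has its own WAL/DB, but several OSDs can place theirs on partitions of one shared fast device. A sketch with placeholder device paths:

    # one spinning OSD with its WAL on a partition of a faster device (paths are examples)
    ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/nvme0n1p1

If that shared fast device fails, every OSD whose WAL/DB lives on it goes down with it and has to be recreated, with the data rebuilt from replicas.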
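
Re item 14: one common way to hand a whole physical disk to a VM is to attach it by its stable /dev/disk/by-id path; the VM id 100, the bus slot and the disk identifier below are placeholders.

    # find the stable identifier of the SSD
    ls -l /dev/disk/by-id/
    # attach it to VM 100 as an additional SCSI disk
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL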