Recent content by Magneto

  1. Getting rid of watchdog emergency node reboot

    This is rather concerning. How does one set up high availability for VMs, so that they restart automatically when a host node fails, if HA can break the whole cluster? (A ha-manager sketch follows after this list.)
  2. Reduced data availability: 40 pgs inactive, 42 pgs incomplete

    ceph osd df tree: root@PVE2:~# ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -1 29.71176 - 16 TiB 6.9 TiB 6.9 TiB 91 MiB 31 GiB 8.9 TiB 0 0 - root...
  3. Reduced data availability: 40 pgs inactive, 42 pgs incomplete

    root@PVE1:~# ceph health detail HEALTH_WARN Reduced data availability: 40 pgs inactive, 42 pgs incomplete; Degraded data redundancy: 278376/3351945 objects degraded (8.305%), 35 pgs degraded, 36 pgs undersized; 34 slow ops, oldest one blocked for 13483 sec, daemons...
  4. Reduced data availability: 40 pgs inactive, 42 pgs incomplete

    In a 5-node cluster, I had to replace some failed SSDs, and now the Ceph cluster is stuck with "Reduced data availability: 40 pgs inactive, 42 pgs incomplete": Reduced data availability: 40 pgs inactive, 42 pgs incomplete pg 2.57 is incomplete, acting [1,35,14] (reducing pool CephFS_data...
  5. New all flash Proxmox Ceph Installation

    As a matter of interest, did you partition your drives? And what were your findings?
  6. VM cloning is slow

    Please explain: what is a linked clone?
  7. shared WAL between CEPH OSD's?

    Do I need to use one WAL per OSD if I use spinning disks?
  8. bad ceph performance on SSD

    Did you ever get to the bottom of this?
  9. Multiple passthrough disk to VM

    How exactly does one pass through an SSD from the host node to a VM? (A qm passthrough sketch follows after this list.)
  10. shared WAL between CEPH OSD's?

    What would happen if the WAL disk fails?
  11. Question regarding network bond config

    So how does one get 2 Gb/s across two NICs? (A bond config sketch follows after this list.)
  12. Hardware compatibility with DELL server

    Which RAID cards do you use? Some Dell RAID cards don't offer HBA mode.
  13. shared WAL between CEPH OSD's?

    Is it possible to share a Ceph WAL device between all the OSDs, instead of having to partition it? If I have 12 drives, I have to create 12 equal partitions on the WAL device and assign each partition to an OSD. Is there a better way to assign the WAL? (A pveceph sketch follows after this list.)
  14. ZFS and Ceph on same cluster

    Is it possible to move a VM between Ceph and ZFS in a mixed environment like this? (A qm move_disk sketch follows after this list.)
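
For the HA question in item 1, a minimal ha-manager sketch, assuming a hypothetical VM with ID 100 and an otherwise healthy, quorate cluster; HA relies on the watchdog for fencing, so it complements the watchdog rather than replacing it.

    # Put VM 100 (hypothetical ID) under HA control so it is restarted or
    # relocated automatically when its node fails; requires cluster quorum.
    ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1
    # Show the HA state of all managed resources
    ha-manager status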
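
For the disk passthrough question in item 9, a sketch using qm; the VM ID (100), the bus slot (scsi1), and the by-id name are placeholders to replace with real values.

    # List stable device names so the disk keeps the same path across reboots
    ls -l /dev/disk/by-id/
    # Attach the whole physical SSD to VM 100 (hypothetical ID) as a SCSI disk
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL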
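
For the bonding question in item 11, a sketch of an LACP (802.3ad) bond stanza in /etc/network/interfaces; the interface names and addresses are invented, and the switch ports must be configured for LACP. Note that a single TCP stream is still limited to one NIC's speed; the aggregate 2 Gb/s only shows up across multiple parallel connections.

    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0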
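
For the shared-WAL question in item 13, a sketch of letting pveceph carve a per-OSD logical volume out of one shared fast device instead of partitioning it by hand; the device names and size are examples. Each OSD still gets its own DB/WAL space, just as an LV on the shared disk, and the WAL is placed together with the DB unless --wal_dev points somewhere else.

    # Create an OSD on /dev/sdb with its DB (and WAL) on the shared NVMe;
    # pveceph creates a separate LV on /dev/nvme0n1 for each OSD.
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 60
    # Repeat for the next data disk, reusing the same NVMe device
    pveceph osd create /dev/sdc --db_dev /dev/nvme0n1 --db_size 60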
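
For the Ceph/ZFS question in item 14, a sketch of moving a guest disk between storages with qm (the GUI's "Move disk" action does the same); the VM ID, disk slot, and target storage name local-zfs are placeholders for whatever the cluster actually defines.

    # Move VM 100's scsi0 disk to the ZFS-backed storage and delete the old copy
    qm move_disk 100 scsi0 local-zfs --delete 1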
