degraded

  1. [SOLVED] Ceph hang in Degraded data redundancy

    Flow: 1) one server rebooted due to power maintenance; 2) after the reboot I noticed one server had bad clock sync - fixing that and rebooting again solved it; 3) after the time sync was fixed, the cluster started to load and rebalance; 4) it hung in an error state (data looks ok and everything stable and...
  2. rpool DEGRADED ('label is missing or invalid' - 'part of active pool' error when trying to replace)

    Hello, I have a PBS 2.1-5 which shows: root@pbs:/# zpool status pool: rpool state: DEGRADED status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state. action...
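The excerpt above runs the `zpool status` output together on one line; a minimal sketch of pulling the pool state back out of such a transcript (the function name and sample text are illustrative, not from the thread):

```python
import re

# Sample transcript resembling the output quoted above (illustrative only).
ZPOOL_STATUS = """\
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is
        missing or invalid.  Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
"""

def pool_state(text: str) -> str:
    """Return the value of the 'state:' field from zpool status output."""
    match = re.search(r"^\s*state:\s*(\S+)", text, re.MULTILINE)
    return match.group(1) if match else "UNKNOWN"

print(pool_state(ZPOOL_STATUS))  # DEGRADED
```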
  3. [SOLVED] Root zpool degraded

    Hi guys, we have a small server at home running Proxmox 7.4-16 with root on a mirrored ZFS pool with two SSDs. One SSD died, resulting in a degraded zpool; the server is still running and can boot. I've read the Proxmox ZFS documentation about how to restore the mirror, but we have some...
  4. No Email or UI indication of zpool UNAVAIL warning

    Hello, I just installed Proxmox VE 8.0.3 on a HP Microserver Gen 8. I had to do some gyrations to prevent the boot volume from consuming all available disk as a mirror. It was all pretty straightforward in the end. I have emails indicating the completion of resilvering from the install drive to my...
  5. [SOLVED] Meaning of DEGRADED state on ZFS pool

    Hey y'all, so I recently set up Proxmox on an R720xd and run it as a secondary node in my Proxmox cluster. It has 10 1.2TB SAS drives (ST1200MM0108) that I have been running in raidz2, and I use a separate SSD for the host (which runs an NFS share for the ZFS pool) and the single VM I run on...
  6. Issue moving containers from LXC to Proxmox

    I am new to Proxmox and this is my first time posting, but I will do my best to provide as much information as I can. I have been running a plain Debian server for a number of years now, but decided that I wanted to give Proxmox a try for its additional features and ease of management...
  7. Zpool Gets Suspended and All Disks Become Unavailable

    Proxmox VE version: 7.1-12. Server 1 hardware: Intel i5 quad-core 3rd gen; dual 120GB SSDs in ZFS RAID1 as boot and root disks; 3x 2TB HDD in raidz1 (two Seagates and one HGST); 4x 1TB HDD in raidz1 inside a four-disk ProBox USB3 enclosure. Server 2 hardware: Intel i5 dual-core 3rd gen laptop CPU...
  8. [SOLVED] Systemctl status 'degraded' after using mkinitcpio to re-generate initramfs

    I need help getting my server working right; I'm not sure what caused the issue. I think it is related to the steps I followed when trying to set up GPU passthrough to a VM. I had trouble making it work, so I tried to bind vfio-pci via device ID, then I was trying to load vfio-pci early using...
  9. raidz-1 degraded

    I ran into a raidz-1 degraded issue. I removed the problem disk and checked it on another computer; there is no problem with the disk. How can I solve this?
  10. Degraded CPU performance inside LXC

    Hello everyone, I am new to Proxmox, coming from a VMware / Hyper-V background. During evaluation tests I came across the following issue: when running a certain workload in an LXC container I get degraded CPU performance compared to a QEMU / Hyper-V VM. After some research / benchmarking...
  11. [SOLVED] Replace SSD Raid 1

    Hello all, my Proxmox VE version was 6.3-6, running as a cluster member. I set up RAID 1 on my node, and when I check the SSD status in the Proxmox interface the health shows DEGRADED and only 1 SSD is detected in the Disks section, so I think one of my SSDs is broken (the SN of the SSD not showing up on the Disks interface...
  12. ZFS Pool is degrading alternately between two disks

    Hi there, I got a degraded status for the second disk of a pool, checked its SMART status and cleared the error. A day later I got the same status for the first disk: ZFS has finished a scrub: eid: 63 class: scrub_finish host: pve time: 2020-05-18 18:10:07+0200 pool: DiskPool state: DEGRADED status: One or...
  13. Ceph 75% degraded with only one host down of 4

    Hi all, I am struggling to find the reason why my Ceph cluster goes 75% degraded (as seen in the screenshot above) when I reboot just one node. The 4-node cluster is new, with no VM or container, so the used space is 0. Each of the nodes contains an equal number of SSD OSDs (6 x 465GB)...
  14. [SOLVED] PVE 5 Live migration downtime degradation (2-4 sec)

    The problem was found in the thread https://forum.proxmox.com/threads/slow-livemigration-performance-since-5-0.35522/, but that thread discusses live migration with local disks. To avoid hijacking it, I'm opening a new one. In PVE 5.0, after a live migration with shared storage, the VM hangs for 2-4 seconds. It can be...
  15. [SOLVED] Ceph Help...

    I set up a test environment and started over a few times. What's strange is that each time I restart the Ceph network, even after writing 0's to all the OSDs to make sure things were cleared out, I end up with: HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 64 pgs stuck unclean; 1 pgs stuck...
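A hedged sketch of splitting a `HEALTH_WARN` summary like the one quoted into per-check PG counts; the sample line and parsing logic are illustrative assumptions, not Ceph's own tooling:

```python
import re

# Illustrative HEALTH_WARN line in Ceph's semicolon-separated format.
HEALTH = "HEALTH_WARN 1 pgs degraded; 1 pgs stuck degraded; 64 pgs stuck unclean"

def parse_health(line: str) -> dict:
    """Map each warning description to its PG count."""
    _, _, rest = line.partition(" ")  # drop the HEALTH_WARN/HEALTH_ERR prefix
    counts = {}
    for check in rest.split(";"):
        m = re.match(r"\s*(\d+)\s+pgs?\s+(.*)", check)
        if m:
            counts[m.group(2).strip()] = int(m.group(1))
    return counts

print(parse_health(HEALTH))
```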