A DegradedArray event had been detected on md device /dev/md0.

Zyzzma

New Member
Sep 25, 2022
I added a drive to pass through to my VM and promptly removed it the same day (I ran no mdadm commands). From the next day onward I have been getting this email every day:

A DegradedArray event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 nvme0n1[0]
488254464 blocks super 1.2 [2/1] [U_]
bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>

Running mdadm -D /dev/md0 I get:
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 26 18:21:24 2022
Raid Level : raid1
Array Size : 488254464 (465.64 GiB 499.97 GB)
Used Dev Size : 488254464 (465.64 GiB 499.97 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Fri Jan 27 18:00:08 2023
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

Name : pve1:0 (local to host pve1)
UUID : f286728c:a7c22c05:768c3b80:0c65aa2e
Events : 3602031

Number   Major   Minor   RaidDevice   State
   0      259      2         0        active sync   /dev/nvme0n1
   -       0       0         1        removed
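
From the "removed" row above, I assume the fix, once I identify which disk used to be the second member, would look something like the following. /dev/nvme1n1 is only a guess at the device name, not taken from my system:

```shell
# Guess at the repair steps; /dev/nvme1n1 is an ASSUMED device name.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT              # find the candidate disk
mdadm --examine /dev/nvme1n1                    # does it still carry md metadata?
mdadm --manage /dev/md0 --re-add /dev/nvme1n1   # re-add if the superblock is intact
# (otherwise: mdadm --manage /dev/md0 --add /dev/nvme1n1 for a full resync)
cat /proc/mdstat                                # watch the rebuild progress
```

Is that roughly right, or did passing the disk through to the VM do something that needs cleaning up first?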

Any help?