Do I need to undo the "linux_raid_member" on two of my NVMe drives?

Kresp

New Member
Nov 23, 2025
I am brand new to Proxmox. I have put together a server using parts I had lying around, plus some NVMe drives and RAM I bought. The motherboard is an ASRock X299 Taichi with quad-channel memory running at 2400, four 32 GB sticks for a total of 128 GB of RAM. In PCIe slot #1 I installed an ASUS Hyper M.2 x16 Gen5 card with my four Sabrent Rocket 4 Plus 2TB NVMe SSDs mounted on it. I set PCIe slot #1 to x4/x4/x4/x4 in the BIOS to bifurcate the slot into four sets of four lanes, one per NVMe drive. All of that is working perfectly, and I see the four NVMe drives in the BIOS.

I also have a 1TB Samsung 860 QVO SSD plugged into a SATA port, and Proxmox was installed on that Samsung drive. My plan was to use the four Sabrent Rockets for 8TB of storage space. I was getting ready to set them up under ZFS, but when I look at my disks in the Proxmox screen, two of the Sabrent Rockets show "linux_raid_member" under "Usage". This concerns me because I never set them up as a RAID, but I don't want to screw things up. Can I safely ignore this and just allocate all four drives to my ZFS storage space, or do I need to somehow undo the RAID? And if so, how? Thanks in advance for any help. Kresp.
 

Attachments

  • ProxMox Drives.JPG (98.4 KB)
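For anyone wanting to see what is actually on disks like this before deciding, a minimal sketch from the node's shell (device names such as /dev/nvme0n1 are examples; match them against your own lsblk output):

  # list block devices with any filesystem/RAID signatures they carry
  lsblk -f

  # preview signatures (ext4, linux_raid_member, ...) without changing anything
  wipefs --no-act /dev/nvme0n1

  # see whether the kernel has actually assembled an md RAID array
  cat /proc/mdstat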
I did want to mention that I didn't do anything in the BIOS to indicate that any of the drives should be set up as a RAID, and all I did after the install was run the script that comments out the enterprise repositories; nothing else. So I was surprised to see those two drives with the "linux_raid_member" comment under Usage. Also, in case it matters, I have an i7-7820X CPU installed in my motherboard, which has 28 PCIe lanes. I know I could install a later CPU, but that is one I happened to have lying around, and I'm going for a budget build at this point. Anyway, I may proceed with trying to set the four drives up as a storage solution. I'm just trying to decide whether I should undo the RAID at the Linux command line first.
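If leftover RAID metadata does need to be cleared from the command line, a hedged sketch, assuming the disks hold nothing you want to keep and that mdadm is installed (the array and device names below are examples only):

  # stop a leftover array if one was auto-assembled ("md0" is an example name)
  mdadm --stop /dev/md0

  # clear the RAID superblock from each former member disk
  mdadm --zero-superblock /dev/nvme0n1
  mdadm --zero-superblock /dev/nvme1n1

  # remove any remaining filesystem/RAID signatures from the disks
  wipefs -a /dev/nvme0n1 /dev/nvme1n1

This is destructive: anything left on those disks becomes unrecoverable, so double-check the device names first.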
 
When I run the "lsblk" command in the ">_ Shell" inside Proxmox I get the output attached below; I also ran the "df -h" command. I then decided to go back to my "Disks" menu and chose to WIPE each drive. These were used M.2 drives, so I figured they were just reporting what was left on them. Sure enough, the "ext4" and "linux_raid_member" entries disappeared after I wiped each disk, and they all now say "No" under Usage. I then went to ZFS and it appears all four disks are available for creating my storage pool. So I think I have things figured out for the moment. Kresp.
 

Attachments

  • DrivesInLinux.JPG (43.7 KB)
  • df-hCommand.JPG (40.4 KB)
  • AfterWIPEdisk.JPG (101.9 KB)
  • CreateZFS.JPG (102.3 KB)
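For reference, the command-line equivalent of that wipe-and-create step might look roughly like the sketch below. The pool name "tank", the storage ID "nvme-zfs", and the chosen layout are assumptions, not necessarily what the GUI used, and /dev/disk/by-id paths are generally preferred over the raw /dev/nvmeXn1 names shown here. Note the capacity trade-off: a 4-disk stripe gives the full ~8 TB but no redundancy, striped mirrors give ~4 TB, and RAIDZ1 gives ~6 TB usable.

  # striped mirrors (RAID10-style): ~4 TB usable from 4x 2 TB, tolerates one failure per mirror
  zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1

  # alternative, RAIDZ1: ~6 TB usable, survives one disk failure
  # zpool create -o ashift=12 tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

  # register the pool as Proxmox storage ("nvme-zfs" is an example storage ID)
  pvesm add zfspool nvme-zfs -pool tank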
Hello.
Keep in mind that these NVMe disks look heavily used: the "Wearout" column shows 91%, 100%, 100%, 100%.
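If you want to double-check those wearout figures outside the GUI, a quick sketch assuming smartmontools and nvme-cli are installed (device names are examples):

  # SMART health summary, including "Percentage Used", via smartmontools
  smartctl -a /dev/nvme0

  # same data via nvme-cli; "percentage_used" is the wear indicator
  nvme smart-log /dev/nvme0n1

A "Percentage Used" at or above 100% means the drive has passed its rated write endurance, although it may keep working for some time.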