Help destroying a RAID on a Proxmox server at OVH

guilloking

New Member
Oct 19, 2022
Hello, dear group.

I have a problem and I hope you can help me.

I have a Proxmox server hosted at OVH with four 2 TB disks, which by default were installed in RAID 1.

I want to use the four disks for the maximum possible storage by destroying the RAID 1.

The procedure I carried out was the following:

umount, mdadm stop, mdadm destroy, mdadm zero superblock.
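
Roughly, the commands I ran were something like this (reconstructed from memory, so the exact flags and order may differ; the member partitions sda5 through sdd5 are the ones shown in the lsblk output below):

umount /var/lib/vz                                                 # unmount the LVM volume that sits on the array
mdadm --stop /dev/md5                                              # stop the RAID1 array
mdadm --zero-superblock /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5    # wipe the RAID superblocks from the member partitions

I am no longer sure whether I deactivated the volume group first (vgchange -an vg) before stopping the array.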

In the configuration I have a volume group called vg, and it contains /dev/md5.

The disk array is built from the four disks mentioned, via the partitions sda5, sdb5, sdc5 and sdd5.

What the array keeps synchronized is the volume mounted at /var/lib/vz (the LVM volume vg-data on /dev/md5).
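
For reference, this is how I check the LVM layout on top of the array with the standard LVM commands (I have left the output out):

pvs    # lists /dev/md5 as the physical volume
vgs    # lists the volume group "vg"
lvs    # lists the logical volume "data" in "vg" (the one mounted at /var/lib/vz)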

What would be the correct procedure? I have already carried out this procedure twice and Proxmox no longer boots; the installation ends up damaged.

Attached:

Partition layout (lsblk):
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part
│ └─md2 9:2 0 1022M 0 raid1 /boot
├─sda3 8:3 0 20G 0 part
│ └─md3 9:3 0 20G 0 raid1 /
├─sda4 8:4 0 20G 0 part [SWAP]
├─sda5 8:5 0 1.8T 0 part
│ └─md5 9:5 0 1.8T 0 raid1
│ └─vg-data 253:0 0 1.8T 0 lvm /var/lib/vz
└─sda6 8:6 0 2M 0 part
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 1G 0 part
│ └─md2 9:2 0 1022M 0 raid1 /boot
├─sdb3 8:19 0 20G 0 part
│ └─md3 9:3 0 20G 0 raid1 /
├─sdb4 8:20 0 20G 0 part [SWAP]
└─sdb5 8:21 0 1.8T 0 part
└─md5 9:5 0 1.8T 0 raid1
└─vg-data 253:0 0 1.8T 0 lvm /var/lib/vz
sdc 8:32 0 1.8T 0 disk
├─sdc1 8:33 0 511M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md2 9:2 0 1022M 0 raid1 /boot
├─sdc3 8:35 0 20G 0 part
│ └─md3 9:3 0 20G 0 raid1 /
├─sdc4 8:36 0 20G 0 part [SWAP]
└─sdc5 8:37 0 1.8T 0 part
└─md5 9:5 0 1.8T 0 raid1
└─vg-data 253:0 0 1.8T 0 lvm /var/lib/vz
sdd 8:48 0 1.8T 0 disk
├─sdd1 8:49 0 511M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md2 9:2 0 1022M 0 raid1 /boot
├─sdd3 8:51 0 20G 0 part
│ └─md3 9:3 0 20G 0 raid1 /
├─sdd4 8:52 0 20G 0 part [SWAP]
└─sdd5 8:53 0 1.8T 0 part
└─md5 9:5 0 1.8T 0 raid1
└─vg-data 253:0 0 1.8T 0 lvm /var/lib/vz

cat /proc/mdstat:
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md5 : active raid1 sda5[0] sdd5[3] sdb5[1] sdc5[2]
1909864448 blocks super 1.2 [4/4] [UUUU]
bitmap: 0/15 pages [0KB], 65536KB chunk

md3 : active raid1 sdc3[2] sdb3[1] sdd3[3] sda3[0]
20954112 blocks super 1.2 [4/4] [UUUU]

md2 : active raid1 sdb2[1] sdd2[3] sdc2[2] sda2[0]
1046528 blocks super 1.2 [4/4] [UUUU]

mdadm --detail /dev/md5:
/dev/md5:
Version : 1.2
Creation Time : Tue Jul 11 21:51:29 2023
Raid Level : raid1
Array Size : 1909864448 (1821.39 GiB 1955.70 GB)
Used Dev Size : 1909864448 (1821.39 GiB 1955.70 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Jul 12 14:41:26 2023
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Consistency Policy : bitmap

Name : md5
UUID :
Events : 2

Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
2 8 37 2 active sync /dev/sdc5
3 8 53 3 active sync /dev/sdd5

unused devices: <none>

I hope you can help me.

thx,
 