PLEASE READ THIS TUTORIAL FULLY BEFORE BEGINNING. MAKE SURE YOUR DATA IS BACKED UP TO ANOTHER MACHINE BEFORE BEGINNING! IF YOU DO NOT UNDERSTAND SOMETHING BEFORE STARTING, PLEASE POST & ASK!
I am new here but have read several threads about people wanting to use softraid. I have also read & considered the downsides pointed out by the proxmox ve team.
A couple of assumptions:
* This HOWTO assumes you have a working proxmox ve installation. If you do not, solve your current problems first & then you can try this tutorial.
* BACK UP ALL DATA FIRST. Do not attempt this without being able to reinstall if needed. I am not responsible for data loss.
* It is assumed you have some linux know-how. If you do not have debian/ubuntu experience beyond proxmox ve, I'd stay away from this tutorial.
* Following this tutorial incorrectly or making a mistake may leave your system unusable or unbootable. USE AT YOUR OWN RISK.
If you would still like to try this, read on.
1. Shut down all your VMs cleanly in your proxmox ve gui (a command-line alternative is sketched just below).
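If you prefer the shell, the same thing can be done from the console or an SSH session (see step 2) with commands along these lines. This is only a sketch, not part of the original post, and the VMIDs 101 and 102 are hypothetical placeholders for your own guests:
Code:
# cleanly shut down a KVM guest by its VMID
qm shutdown 101
# stop an OpenVZ container by its CTID
vzctl stop 102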
2. SSH as root into your proxmox server:
Code:
rob2@rob:~$ ssh root@vm2
The authenticity of host 'vm2 (192.168.75.29)' can't be established.
RSA key fingerprint is e2:d9:29:e3:06:39:1b:ac:af:0b:14:2f:42:56:0e:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'vm2,192.168.75.29' (RSA) to the list of known hosts.
root@vm2's password:
Last login: Thu Feb 19 05:55:55 2009 from 192.168.75.250
Linux vm2 2.6.24-2-pve #1 SMP PREEMPT Wed Jan 14 11:32:49 CET 2009 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
3. Next we want to run apt-get update:
Code:
vm2:~# apt-get update
Get:1 http://security.debian.org etch/updates Release.gpg [189B]
Get:2 http://security.debian.org etch/updates Release [37.6kB]
Get:3 http://ftp.us.debian.org etch Release.gpg [386B]
Get:4 http://download.proxmox.com etch Release.gpg [189B]
Get:5 http://ftp.us.debian.org etch Release [58.2kB]
Get:6 http://download.proxmox.com etch Release [1882B]
Get:7 http://security.debian.org etch/updates/main Packages [299kB]
Ign http://download.proxmox.com etch/pve Packages
Get:8 http://ftp.us.debian.org etch/main Packages [4207kB]
Get:9 http://download.proxmox.com etch/pve Packages [4612B]
Fetched 4609kB in 5s (809kB/s)
Reading package lists... Done
4. Install mdadm. mdadm may ask you some questions, and that is fine; just choose "all" (the default) when it asks which arrays to start at boot:
Code:
vm2:~# apt-get install mdadm lvm2
Reading package lists... Done
Building dependency tree... Done
lvm2 is already the newest version.
The following NEW packages will be installed:
mdadm
0 upgraded, 1 newly installed, 0 to remove and 32 not upgraded.
Need to get 228kB of archives.
After unpacking 672kB of additional disk space will be used.
Get:1 http://ftp.us.debian.org etch/main mdadm 2.5.6-9 [228kB]
Fetched 228kB in 0s (586kB/s)
Preconfiguring packages ...
Selecting previously deselected package mdadm.
(Reading database ... 24508 files and directories currently installed.)
Unpacking mdadm (from .../mdadm_2.5.6-9_amd64.deb) ...
Setting up mdadm (2.5.6-9) ...
Generating mdadm.conf... done.
Starting MD monitoring service: mdadm --monitor.
Assembling MD arrays...failed (no arrays found in config file or automatically).
5. Let's create the array. The disks for my array are sdb, sdc, sdd, sde and sdf.
For each disk we are going to run "fdisk /dev/sd<something>". First I'm going to do fdisk /dev/sdb. We then list the partitions, create a new partition, and change its type to linux raid autodetect. THE FOLLOWING PROCEDURE IS FOR BLANK DISKS. IF YOU HAVE DATA ON YOUR DISKS YOU NEED TO BACK IT UP OR IT WILL GET ERASED. IF YOU HAVE A CURRENT ARRAY/PV ON YOUR DISKS/RAID IT IS GOING TO GET ERASED. ONLY USE BLANK DISKS FOR THE BELOW STEP.
Code:
vm2:~# fdisk /dev/sdb
The number of cylinders for this disk is set to 48641.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 400.0 GB, 400088457216 bytes
255 heads, 63 sectors/track, 48641 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-48641, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-48641, default 48641):
Using default value 48641
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
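If you would rather not repeat the interactive fdisk dialogue on every remaining drive, copying the partition table with sfdisk is a common shortcut. This is a sketch, not from the original post; it assumes /dev/sdb now carries the single type-fd partition you want and that sdc-sdf are at least as large:
Code:
# replicate sdb's partition table onto the other array members
for d in sdc sdd sde sdf; do
    sfdisk -d /dev/sdb | sfdisk /dev/$d
done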
Do this for each disk (interactively as above, or with the sfdisk shortcut).
6. We now have our disks (2 or 4 or, in my case, 5) ready for array creation.
7. Let's now clear any old superblock data (the [bcdef] bracket expression covers all of those disks at once):
Code:
vm2:~# mdadm --zero-superblock /dev/sd[bcdef]1
8. Now we will create the array. In my case it is level 10, with 4 active disks and 1 spare:
Code:
vm2:~# mdadm -C /dev/md0 -n 4 -l 10 -x 1 /dev/sd[bcdef]1
mdadm: array /dev/md0 started.
If you had a raid1 of sdq1 and sdr1 with one spare, you'd do:
Code:
mdadm -C /dev/md0 -n 2 -l 1 -x 1 /dev/sd[qr]1
9. We can now see the status of the array:
Code:
vm2:~# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Thu Feb 19 07:55:53 2009
Raid Level : raid10
Array Size : 781422592 (745.22 GiB 800.18 GB)
Device Size : 390711296 (372.61 GiB 400.09 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Feb 19 07:55:53 2009
State : clean, resyncing
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : near=2, far=1
Chunk Size : 64K
Rebuild Status : 1% complete
UUID : c87fd96c:c0da8cd3:85d07652:83e7e321 (local to host vm2)
Events : 0.1
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 80 - spare /dev/sdf
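You can keep an eye on the resync from the shell at any point. This is not shown in the original post, but checking /proc/mdstat is the standard, safe way to do it while the array builds:
Code:
# one-shot view of the resync progress
cat /proc/mdstat
# or refresh it every 30 seconds (Ctrl-C to exit)
watch -n 30 cat /proc/mdstat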
10. WHILE THE ARRAY IS BUILDING ITSELF WE CAN WORK WITH IT, BUT DON'T REBOOT.
11. Let's see what proxmox ve did for our installation on lvm:
* PVs are the physical devices, carved up into extents (think 4MB blocks)
* VGs are the volume groups
* LVs are the logical volumes inside a group
* Only LVs hold filesystems
Code:
vm2:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 pve lvm2 a- 278.97G 3.99G
vm2:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 278.97G 3.99G
vm2:~# lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
data pve -wi-ao 201.23G
root pve -wi-ao 69.75G
swap pve -wi-ao 4.00G
12. Let's add the new raid as an LVM physical volume with available physical extents:
Code:
vm2:~# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created
vm2:~# pvs
PV VG Fmt Attr PSize PFree
/dev/md0 lvm2 -- 745.22G 745.22G
/dev/sda2 pve lvm2 a- 278.97G 3.99G
13. Let's now extend the pve volume group so that it can use the extents on /dev/md0:
Code:
vm2:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 3 0 wz--n- 278.97G 3.99G
vm2:~# vgextend pve /dev/md0
Volume group "pve" successfully extended
vm2:~# vgs
VG #PV #LV #SN Attr VSize VFree
pve 2 3 0 wz--n- 1.00T 749.21G
14. Next we are going to MOVE the data from the non-raided disk to the raid array. This may take a while depending on the size of your original disk and your new array, and even longer if the array is still building. pvmove works by MIRRORING physical extents (PEs -- 4MB blocks) from the allocated PEs (in my case on sda2) onto free PEs on a different device (md0). Once they are completely mirrored, it marks the old PEs as free space. If you lose power or your machine gets rebooted, simply run "pvmove" again and it should pick up where it left off.
WARNING: ONCE YOU START PVMOVES, DO NOT REBOOT UNDER ANY CIRCUMSTANCES. IF YOU DO NOT COMPLETE THE update-initramfs STEP described below, your system will be unbootable! There is a way to fix this, but it requires a live CD (ubuntu 8.10 64-bit works well).
You will see a status like this:
Code:
vm2:~# pvmove -n /dev/pve/data /dev/sda2 /dev/md0
/dev/sda2: Moved: 0.3%
/dev/sda2: Moved: 0.8%
/dev/sda2: Moved: 1.2%
/dev/sda2: Moved: 1.6%
/dev/sda2: Moved: 2.0%
/dev/sda2: Moved: 2.4%
/dev/sda2: Moved: 2.8%
/dev/sda2: Moved: 3.2%
/dev/sda2: Moved: 3.6%
/dev/sda2: Moved: 4.0%
/dev/sda2: Moved: 4.4%
/dev/sda2: Moved: 4.8%
/dev/sda2: Moved: 5.2%
/dev/sda2: Moved: 5.6%
/dev/sda2: Moved: 6.0%
/dev/sda2: Moved: 6.4%
.
.
.
.
/dev/sda2: Moved: 100.0%
... To be continued.
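The continuation has not been posted yet, so take the following only as a rough sketch of the steps the warning above refers to -- an assumption on my part, not the author's procedure; double-check it against the follow-up post before relying on it. At this point the root and swap LVs are still on /dev/sda2, and the new array has to be written into mdadm.conf and the initramfs before any reboot:
Code:
# move the remaining logical volumes off the old disk (same extent-mirroring as above)
pvmove -n /dev/pve/root /dev/sda2 /dev/md0
pvmove -n /dev/pve/swap /dev/sda2 /dev/md0
# record the array so it can be assembled at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# rebuild the initramfs so it knows about the raid + lvm layout
update-initramfs -u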