HOWTO: Move installation from single disk to raid

Robstarusa

Feb 19, 2009
PLEASE READ THIS TUTORIAL FULLY BEFORE BEGINNING. MAKE SURE YOUR DATA IS BACKED UP TO ANOTHER MACHINE BEFORE BEGINNING! IF YOU DO NOT UNDERSTAND SOMETHING BEFORE STARTING, PLEASE POST & ASK!

I am new here, but I have read several threads from people wanting to use software RAID, and I have also read and considered the downsides pointed out by the Proxmox VE team.

A couple of assumptions:

* This HOWTO assumes you have a working proxmox ve installation. If you do not, solve your current problems first & then you can try this tutorial.

* BACK UP ALL DATA FIRST. Do not attempt this without being able to reinstall if needed. I am not responsible for data loss.

* It is assumed you have some Linux know-how. If you do not have Linux (Debian/Ubuntu) experience beyond Proxmox VE, I'd stay away from this tutorial.

* Following this tutorial incorrectly or making a mistake may make your system unusable or unbootable. USE AT YOUR OWN RISK.

If you would still like to try this, read on.

1. Shut down all your VMs cleanly in your Proxmox VE GUI.
2. SSH as root into your Proxmox server:
Code:
rob2@rob:~$ ssh root@vm2
The authenticity of host 'vm2 (192.168.75.29)' can't be established.
RSA key fingerprint is e2:d9:29:e3:06:39:1b:ac:af:0b:14:2f:42:56:0e:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'vm2,192.168.75.29' (RSA) to the list of known hosts.
root@vm2's password: 
Last login: Thu Feb 19 05:55:55 2009 from 192.168.75.250
Linux vm2 2.6.24-2-pve #1 SMP PREEMPT Wed Jan 14 11:32:49 CET 2009 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
3. Next, run apt-get update:
Code:
vm2:~# apt-get update
Get:1 http://security.debian.org etch/updates Release.gpg [189B]
Get:2 http://security.debian.org etch/updates Release [37.6kB]
Get:3 http://ftp.us.debian.org etch Release.gpg [386B]                   
Get:4 http://download.proxmox.com etch Release.gpg [189B]                
Get:5 http://ftp.us.debian.org etch Release [58.2kB] 
Get:6 http://download.proxmox.com etch Release [1882B]                         
Get:7 http://security.debian.org etch/updates/main Packages [299kB]            
Ign http://download.proxmox.com etch/pve Packages                              
Get:8 http://ftp.us.debian.org etch/main Packages [4207kB]
Get:9 http://download.proxmox.com etch/pve Packages [4612B]                    
Fetched 4609kB in 5s (809kB/s)                                                 
Reading package lists... Done
4. Install mdadm (lvm2 is included in the command but should already be installed). mdadm may ask you some questions, and that is fine. Just choose "all" (the default) when it asks which arrays to start at boot.
Code:
vm2:~# apt-get install mdadm lvm2
Reading package lists... Done
Building dependency tree... Done
lvm2 is already the newest version.
The following NEW packages will be installed:
  mdadm
0 upgraded, 1 newly installed, 0 to remove and 32 not upgraded.
Need to get 228kB of archives.
After unpacking 672kB of additional disk space will be used.
Get:1 http://ftp.us.debian.org etch/main mdadm 2.5.6-9 [228kB]
Fetched 228kB in 0s (586kB/s)
Preconfiguring packages ...
Selecting previously deselected package mdadm.
(Reading database ... 24508 files and directories currently installed.)
Unpacking mdadm (from .../mdadm_2.5.6-9_amd64.deb) ...
Setting up mdadm (2.5.6-9) ...
Generating mdadm.conf... done.
Starting MD monitoring service: mdadm --monitor.
Assembling MD arrays...failed (no arrays found in config file or automatically).
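(A side note, not part of the original steps: if you want to revisit those debconf questions later, for example which arrays to start at boot, dpkg-reconfigure should bring them back up.)
Code:
dpkg-reconfigure mdadm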
5. Let's create the array. The disks for my array are sdb/sdc/sdd/sde/sdf.
For each disk we are going to run "fdisk /dev/sd<something>". First I'll do fdisk /dev/sdb: list the existing partitions, create one partition spanning the disk, and change its type to Linux raid autodetect. THE FOLLOWING PROCEDURE IS FOR BLANK DISKS. IF YOU HAVE DATA ON YOUR DISKS, BACK IT UP OR IT WILL BE ERASED. IF YOU HAVE AN EXISTING ARRAY/PV ON YOUR DISKS, IT WILL BE ERASED. ONLY USE BLANK DISKS FOR THE STEP BELOW.
Code:
vm2:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 48641.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 400.0 GB, 400088457216 bytes
255 heads, 63 sectors/track, 48641 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-48641, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-48641, default 48641): 
Using default value 48641

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Do this for each disk.
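If you prefer not to repeat the interactive fdisk session on every disk, one possible shortcut (a sketch, not from the original steps; assumes sdc-sdf are blank and at least as large as sdb) is to copy sdb's partition table with sfdisk:
Code:
for d in c d e f; do sfdisk -d /dev/sdb | sfdisk /dev/sd$d; done
fdisk -l /dev/sd[bcdef]   # each disk should now show one 'fd' (Linux raid autodetect) partition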
6. We now have our disks (2 or 4 or in my case 5) ready for array creation.
7. Let's now clear the superblocks of any old data (the brackets with bcdef address all of those disks at once):
Code:
vm2:~# mdadm --zero-superblock /dev/sd[bcdef]1
8. Now we will create the array:
Code:
vm2:~# mdadm -C /dev/md0 -n 4 -l 10 -x 1 /dev/sd[bcdef]1
mdadm: array /dev/md0 started.
This array in my case is 4 active disks, RAID level 10, and 1 spare.
If you had a RAID1 of sdq1 and sdr1 plus one spare (say, sds1), you'd do:
Code:
mdadm -C /dev/md0 -n 2 -l 1 -x 1 /dev/sd[qrs]1
9. We can now see the status of the array:
Code:
vm2:~# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Feb 19 07:55:53 2009
     Raid Level : raid10
     Array Size : 781422592 (745.22 GiB 800.18 GB)
    Device Size : 390711296 (372.61 GiB 400.09 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Feb 19 07:55:53 2009
          State : clean, resyncing
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : near=2, far=1
     Chunk Size : 64K

 Rebuild Status : 1% complete

           UUID : c87fd96c:c0da8cd3:85d07652:83e7e321 (local to host vm2)
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       4       8       80        -      spare   /dev/sdf
10. WHILE THE ARRAY IS BUILDING ITSELF, WE CAN WORK WITH IT BUT DON'T REBOOT.
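While it rebuilds, you can keep an eye on progress with /proc/mdstat (not from the original post, just a convenience):
Code:
cat /proc/mdstat
watch -n 5 cat /proc/mdstat   # refreshes every 5 seconds; Ctrl-C to quit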
11. Let's see what Proxmox VE did for our installation on LVM:
* PVs are the physical devices, divided into extents (think 4MB blocks)
* VGs are the volume groups
* LVs are the logical volumes inside a group
* Only LVs hold filesystems
Code:
vm2:~# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  pve  lvm2 a-   278.97G 3.99G
vm2:~# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1   3   0 wz--n- 278.97G 3.99G
vm2:~# lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy% 
  data pve  -wi-ao 201.23G                              
  root pve  -wi-ao  69.75G                              
  swap pve  -wi-ao   4.00G
12. Let's add the new RAID device as a PV with available physical extents:
Code:
vm2:~# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
vm2:~# pvs
  PV         VG   Fmt  Attr PSize   PFree  
  /dev/md0        lvm2 --   745.22G 745.22G
  /dev/sda2  pve  lvm2 a-   278.97G   3.99G
13. Let's now extend the pve volume group so that it can use the extents on /dev/md0:
Code:
vm2:~# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1   3   0 wz--n- 278.97G 3.99G
vm2:~# vgextend pve /dev/md0
  Volume group "pve" successfully extended
vm2:~# vgs
  VG   #PV #LV #SN Attr   VSize VFree  
  pve    2   3   0 wz--n- 1.00T 749.21G
14. Next we are going to MOVE the data from the non-raided disk to the RAID array. This may take a while depending on the size of your original disk and new array, and even longer if your array is still building. The way pvmove works is that it MIRRORS allocated PEs (physical extents - 4MB blocks) from the old device (in my case sda2) onto free PEs on a different device (md0). When they have been mirrored completely, it marks the old PEs as free space. If you lose power or your computer gets rebooted, simply run "pvmove" again and it should pick up where it left off.
You will see a status like so:

WARNING: ONCE YOU START THE PVMOVES, DO NOT REBOOT UNDER ANY CIRCUMSTANCES until you have completed the update-initramfs step described below; otherwise your system will be unbootable! There is a way to fix this, but it requires a live CD (Ubuntu 8.10 64-bit works well).
Code:
vm2:~# pvmove -n /dev/pve/data /dev/sda2 /dev/md0
  /dev/sda2: Moved: 0.3%
  /dev/sda2: Moved: 0.8%
  /dev/sda2: Moved: 1.2%
  /dev/sda2: Moved: 1.6%
  /dev/sda2: Moved: 2.0%
  /dev/sda2: Moved: 2.4%
  /dev/sda2: Moved: 2.8%
  /dev/sda2: Moved: 3.2%
  /dev/sda2: Moved: 3.6%
  /dev/sda2: Moved: 4.0%
  /dev/sda2: Moved: 4.4%
  /dev/sda2: Moved: 4.8%
  /dev/sda2: Moved: 5.2%
  /dev/sda2: Moved: 5.6%
  /dev/sda2: Moved: 6.0%
  /dev/sda2: Moved: 6.4%
.
.
.
.
/dev/sda2: Moved: 100.0%
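If the move is interrupted (power loss, unexpected reboot), running pvmove with no arguments should, as mentioned above, resume any unfinished moves:
Code:
pvmove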
... To be continued.
 
Continued:


15. We will now do this for root and swap as well (the swap output is the same; I leave the CLI command as an exercise for the reader - see the sketch after the output below).
Code:
vm2:~# pvmove -n /dev/pve/root /dev/sda2 /dev/md0
  /dev/sda2: Moved: 1.2%
  /dev/sda2: Moved: 2.5%
.
.
.
/dev/sda2: Moved: 100.0%
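For completeness, the swap move left as an exercise would look like this (assuming the same device names as above):
Code:
pvmove -n /dev/pve/swap /dev/sda2 /dev/md0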
16. Now we need initramfs-tools:
Code:
vm2:~# apt-get install initramfs-tools
Reading package lists... Done
Building dependency tree... Done
The following extra packages will be installed:
  busybox klibc-utils libklibc
The following NEW packages will be installed:
  busybox initramfs-tools klibc-utils libklibc
0 upgraded, 4 newly installed, 0 to remove and 32 not upgraded.
Need to get 608kB of archives.
After unpacking 1585kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://ftp.us.debian.org etch/main busybox 1:1.1.3-4 [326kB]
Get:2 http://ftp.us.debian.org etch/main libklibc 1.4.34-2 [45.5kB]
Get:3 http://ftp.us.debian.org etch/main klibc-utils 1.4.34-2 [174kB]
Get:4 http://ftp.us.debian.org etch/main initramfs-tools 0.85i [62.7kB]
Fetched 608kB in 1s (541kB/s)        
Selecting previously deselected package busybox.
(Reading database ... 24603 files and directories currently installed.)
Unpacking busybox (from .../busybox_1%3a1.1.3-4_amd64.deb) ...
Selecting previously deselected package libklibc.
Unpacking libklibc (from .../libklibc_1.4.34-2_amd64.deb) ...
Selecting previously deselected package klibc-utils.
Unpacking klibc-utils (from .../klibc-utils_1.4.34-2_amd64.deb) ...
Selecting previously deselected package initramfs-tools.
Unpacking initramfs-tools (from .../initramfs-tools_0.85i_all.deb) ...
Setting up busybox (1.1.3-4) ...
Setting up libklibc (1.4.34-2) ...
Setting up klibc-utils (1.4.34-2) ...
Setting up initramfs-tools (0.85i) ...
17. Now let's update /etc/mdadm/mdadm.conf (MAKE SURE YOU USE >> and not > so that you APPEND to /etc/mdadm/mdadm.conf instead of overwriting it):
Code:
vm2:~#  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
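Before rebuilding the initramfs, a quick sanity check (my suggestion, not from the original post) that the append worked and there is now an ARRAY line for /dev/md0:
Code:
grep ARRAY /etc/mdadm/mdadm.conf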
18. Now let's update our initramfs:
Code:
vm2:~# uname -a
Linux vm2 2.6.24-2-pve #1 SMP PREEMPT Wed Jan 14 11:32:49 CET 2009 x86_64 GNU/Linux
vm2:~# update-initramfs -k 2.6.24-2-pve -u
update-initramfs: Generating /boot/initrd.img-2.6.24-2-pve
If you get a message that "/boot/initrd...." has been altered, add the -t switch to the update-initramfs command!
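In that case the command would look like:
Code:
update-initramfs -t -k 2.6.24-2-pve -u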
19. Now we should be able to reboot.
Unfortunately, I now get an error that says "press control-d to continue normal boot" (it can't run fsck on /dev/sda1 -- my /boot partition).
The reason is that after adding the RAID array and rebuilding the initramfs, udev (I think) relabels the drives. What was previously /dev/sda on my system is now /dev/sdf. The fix is simple:
* Enter your root password as asked.
* Go into vi (explaining vi usage is beyond the scope of this tutorial) and change the /boot entry in /etc/fstab to the correct drive letter.
How do you determine the correct drive letter?
Code:
fdisk -l /dev/sda
Repeat for each drive letter until you find a drive with two partitions, one of which is extremely small. That is the drive with /boot.
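A quick way to check all the drives at once (a small convenience sketch, not from the original post):
Code:
for d in /dev/sd[a-f]; do echo "== $d =="; fdisk -l $d; done
Alternatively, mounting /boot by UUID in /etc/fstab (blkid will show the UUID) should avoid the renaming problem altogether.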

Comments & questions welcome!
Also: if someone knows how to move /boot as well, and what is required to get grub working afterwards (I have an idea but it is not tested), please post.
 
Very handy. I've spent a good few hours tonight playing with this, as I want to set up Proxmox on a server with four hard drives (two smaller drives for the OS, and two large drives for VM data), with a RAID-1 configuration. Specifically:

/dev/sda = 80GB system disk (which proxmox is installed onto)
/dev/sdb = 80GB system disk
/dev/sdc = 1TB data disk
/dev/sdd = 1TB data disk

I've come up with a list of commands that will take the default Proxmox installation on the first disk and replicate it to the other disk (RAID-1), and then move the VM data to the other pair of disks. The list of commands follows, with some explanation. I haven't explained everything; if you have any questions, please post here.

As with the earlier posts in this thread, DO NOT REBOOT until you have completed all commands, otherwise your system will likely not be bootable at all! This should ideally be run on a fresh system, although you can probably modify an existing system if you have enough spare disk space for copying data between partitions.

From a fresh install of proxmox:

Code:
apt-get update && apt-get install parted mdadm initramfs-tools

fdisk /dev/sdb
You should now replicate the partition structure of sda on sdb, but set the partitions up as RAID partitions (partition type 'fd'). You may need to slightly reduce partition sizes - it is essential that the sizes on sdb match what will be recreated on sda.
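One possible shortcut for that replication (a sketch, not from the original post; assumes sda and sdb are the same size) is to clone sda's partition table onto sdb with sfdisk and then flip the partition types to 'fd':
Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb
fdisk /dev/sdb   # use 't' to set each partition's type to 'fd', then 'w' to write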

Code:
mkdir /oldboot
cp -Rp /boot/* /oldboot/
umount /boot
fdisk /dev/sda   # ensure sda1 is the same size as on the other disk - delete and recreate if necessary, change type to 'fd'
mdadm -C /dev/md0 -n 2 -l 1 /dev/sda1 /dev/sdb1

mkfs.ext3 /dev/md0
nano /etc/fstab   # change "/dev/sda1" to "/dev/md0" on the /boot line
mount /boot
cp -Rp /oldboot/* /boot
The commands above copy everything out of your old /boot partition, unmount it, set up a RAID partition in its place, create the RAID array, format it, and put the old boot files back.

Code:
mdadm -C /dev/md1 -n 1 -l 1 /dev/sdb2 --force
mdadm -C /dev/md2 -n 2 -l 1 /dev/sdc1 /dev/sdd1

mkdir /new-vz
pvcreate /dev/md2
vgcreate vg00 /dev/md2
lvcreate --name data --size 11GB vg00   # change the 11GB as appropriate
mkfs.ext3 /dev/vg00/data

mount /dev/vg00/data /new-vz
cp -d -R -p /var/lib/vz/* /new-vz

umount /var/lib/vz
umount /new-vz
mount /dev/vg00/data /var/lib/vz
The above commands create two new RAID arrays, one for the root partition and one for the VM data. They then create and format a new data volume on the data array, copy the vz data onto it, unmount the old partition, and mount the new volume at /var/lib/vz.

Code:
pvcreate /dev/md1
vgextend pve /dev/md1
lvremove /dev/pve/data
pvmove -n /dev/pve/root /dev/sda2 /dev/md1
pvmove -n /dev/pve/swap /dev/sda2 /dev/md1
These commands create an LVM physical volume on the root array (which currently lives only on disk 2), extend pve to incorporate it, remove the old data volume (which is on disk 1), and move root and swap onto the new array.
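At this point it may be worth a quick check (my addition, not from the original post) that root and swap really do live on the new array now:
Code:
pvs
lvs -o +devices   # the Devices column should show /dev/md1 for root and swap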

You should then change /etc/fstab to the following (adapt as appropriate for your system):

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/vg00/data /var/lib/vz ext3 defaults 0 1
/dev/md0 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
Then remove the old physical volume from the volume group, and wipe its LVM label:

Code:
vgreduce pve /dev/sda2
pvremove /dev/sda2

fdisk /dev/sda   # change partition type for sda2 to 'fd' (ensure it is the same size as the one on sdb)
mdadm /dev/md1 --add /dev/sda2
mdadm --grow /dev/md1 --raid-devices=2
Then update mdadm.conf, regenerate the initramfs, and reinstall grub to the boot sector of both system drives:

Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

update-initramfs -k 2.6.24-2-pve -u -t

partprobe

grub-install --no-floppy /dev/sda
grub-install --no-floppy /dev/sdb
Finally, extend the root logical volume to take up all the available space in the pve volume group:

Code:
lvextend -L +123GB /dev/pve/root
You can call this with smaller increments as many times as you need (e.g. +10GB at a time, until you get an error saying there is no space left).
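Alternatively (not from the original post), if your lvm2 version supports the %FREE syntax, you can hand over all remaining free space in one go:
Code:
lvextend -l +100%FREE /dev/pve/root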

Normally you would now run a filesystem check, but since we are operating on the mounted root filesystem this is difficult without copying some tools and libraries elsewhere. So we'll skip straight to resizing the filesystem:

Code:
resize2fs /dev/pve/root
That's all there is to it! You should now be able to reboot - on either drive - and you'll have all your data on one array of drives, and your system files on the other.

Cheers,
 

I've done the same for my new server:
Supermicro, i7 CPU
12GB RAM (1 bad stick, but that happens, so currently only 10GB left)
4 x 1TB disks (WD Black)

I partitioned all drives with a 512MB first primary partition and a 997GB second primary partition. I'm using sda1 & sdb1 as RAID1 for my /boot.
I'm using sdc1 & sdd1 as swap (not that I hope I'll ever need it ;-)
I'm using /dev/sd[abcd]2 as a RAID10 setup (it's still rebuilding as I type this).

Code:
# cat /proc/mdstat
Personalities : [raid1] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
497856 blocks [2/2] [UU]

md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
1952523904 blocks 64K chunks 2 near-copies [4/4] [UUUU]
[==>..................] resync = 13.8% (269632640/1952523904) finish=141.3min speed=198474K/sec

To get to this I used another approach:
I installed PVE to a small 5th drive (the motherboard has 6 SATA connectors).
Then I booted SystemRescueCd and more or less followed the steps from
http://www.howtoforge.com/set-up-raid1-on-a-running-lvm-system-debian-etch
but not from a "live" (running) system. Using the small (60G) install drive saved me a lot of time with pvmove (I timed it yesterday with a live move from one of the four 1TB drives, and that pvmove would have taken almost 48 hours). This time it was only 1 hour.
I resized the LVs, and instead of ext3 I use xfs.

So far it's looking pretty good (I think):

Code:
# pveperf
CPU BOGOMIPS: 42561.90
REGEX/SECOND: 496549
HD SIZE: 99.95 GB (/dev/mapper/pve-root)
BUFFERED READS: 118.38 MB/sec
AVERAGE SEEK TIME: 8.00 ms
FSYNCS/SECOND: 4974.88
DNS EXT: 38.24 ms

Fromport
 
Many thanks to Robstarusa and orudge for a helpful topic!

I have a lot of experience with MS and VMware virtualisation, but I'm not so strong in Linux yet; still learning.
Here is my configuration:

Core2Duo
4GB RAM
250GB - main HDD (which Proxmox is installed onto)
3 x 500GB - RAID5 array

My aims:
1) Use the main HDD for the system
2) Use the RAID array for VMs

I have completed all the steps from Robstarusa's guide and now I have the following:

Code:
pmnode01:~# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/md0   pve  lvm2 a-   931.52G 703.13G
  /dev/sda2  pve  lvm2 a-   232.38G 232.38G

pmnode01:~# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    2   3   0 wz--n- 1.14T 935.51G

pmnode01:~# lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%
  data pve  -wi-ao 166.39G
  root pve  -wi-ao  58.00G
  swap pve  -wi-ao   4.00G

As I understand it, pve is still on the main HDD.
How do I move pve fully to the RAID array and remove it from the main HDD?

Sorry if I missed something.
Thanks for the help!
 
I have spent an afternoon setting up RAID5 on a test server we had (since it was a test server, it gets none of that expensive RAID stuff). The procedure I used was about the same as I've seen used here.

Make a RAID5 with one disk missing, extend the volume group onto the RAID, move the data, remove the original disk from the volume group and add it to the RAID, making it consistent again... also, some resizing may be necessary :D
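For reference, the rough shape of that procedure in commands (a sketch only; the device names sdb1, sdc1, sda2 and the pve volume group are assumptions, adjust for your own disks):
Code:
# create the RAID5 degraded, leaving one slot 'missing' for the original disk
mdadm -C /dev/md0 -l 5 -n 3 /dev/sdb1 /dev/sdc1 missing
pvcreate /dev/md0
vgextend pve /dev/md0
# move all extents off the original disk, then retire it from LVM
pvmove /dev/sda2 /dev/md0
vgreduce pve /dev/sda2
pvremove /dev/sda2
# re-type sda2 as 'fd' with fdisk, then add it to complete the array
mdadm /dev/md0 --add /dev/sda2
The resizing mentioned would then be the lvextend/resize2fs steps shown earlier in the thread.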

After reading all the comments from the Proxmox team about RAID, and also reading quite a lot about software RAID, I still have to disagree. I think software RAID should be an install option, and from previous recoveries I think the argument that it is harder to get data off a software RAID is moot. Is this still not an option for Proxmox 2.0? It'd save a lot of people a lot of time, and I think an opinion poll would show that many find software RAID handy.
 
