Re: software raid1 seems to work ok
I have a testing box that didn't have a hardware raid card in it, so I decided to see if I could get this going for grins. I'm fairly certain this whole setup would break at the least desirable moment, and I certainly won't be using it for any production boxes, but it was an interesting diversion into the world of software raid, which I'd never used before. The basic order I followed is:
Install Proxmox as usual onto one drive (/dev/sda)
aptitude install mdadm initramfs-tools
Edit the modules list for initramfs-tools (/etc/initramfs-tools/modules) to force-load the raid1 module
mkinitramfs -o /boot/test -r /dev/mapper/pve-root
add a grub menu.lst entry to point to my new initrd image (both this and the modules edit are sketched after this list)
fdisk the 2nd disk (/dev/sdb) so its partition table looks exactly like the first disk's (an sfdisk shortcut for this is shown after the list)
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
mdadm --add /dev/md1 /dev/sda2
watch -n 1 "cat /proc/mdstat"
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/. /mnt/md0 (the trailing /. copies the contents of /boot to the top of the new filesystem, so the kernel and initrd stay at the paths grub already expects)
edit fstab so /boot is mounted from /dev/md0 (sample line after the list)
sfdisk --change-id /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1
use grub to install the bootloader onto the 2nd hard drive (the grub commands are sketched below)
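For reference, the modules edit and the grub entry above look roughly like this. Treat it as a sketch: the kernel version, initrd name and partition numbering are placeholders, so substitute whatever is actually sitting in your /boot and menu.lst.

In /etc/initramfs-tools/modules, add one line naming the module:

raid1

In /boot/grub/menu.lst, add an extra stanza pointing at the initrd built with mkinitramfs above (check ls /boot for your real kernel name):

title  Proxmox VE (raid1 initrd)
root   (hd0,0)
kernel /vmlinuz-2.6.32-4-pve root=/dev/mapper/pve-root ro
initrd /test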
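Also, for making the 2nd disk's partition table match the first one, you don't have to retype it in fdisk; assuming /dev/sdb is at least as big as /dev/sda you can copy the table in one shot:

sfdisk -d /dev/sda | sfdisk /dev/sdb

The first sfdisk dumps sda's table, the second writes it to sdb (add --force if sfdisk complains about a blank disk). If you want the raid autodetect type on sdb's partitions as well, flip them to fd afterwards with fdisk's t command or sfdisk --change-id.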
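Finally, the fstab line and the grub install onto the 2nd drive look something like this (double-check which drive grub calls hd1 before running setup):

/dev/md0  /boot  ext3  defaults  0  2

grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

root (hd1,0) tells grub where the /boot filesystem lives on the 2nd drive, and setup (hd1) writes the boot code into that drive's MBR so the box can still boot if the first disk dies.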
I followed your steps to get software raid-5 working with Proxmox. You left some steps out that I had to figure out on my own, so to help the next person, here is what I did:
I purchased 6 identical 1TB SATA drives that I wanted to create a 5TB raid-5 with. Unfortunately, while the motherboard I purchased, an MSI 880GMA-E45, claims RAID-5 support in its specs, it's just some worthless AMD fakeraid. So either I could buy a $200 Highpoint RocketRAID controller that supports 6 SATA drives, or I could software-raid my drives. I chose to get software raid working, and had great success.
1. Configured BIOS in AHCI mode for the SATA drives
2. Install Proxmox as normal onto /dev/sda
3. Get internet access working on Proxmox if you haven't already done so
4. aptitude install mdadm initramfs-tools
5. edit /etc/initramfs-tools/modules (I like the text editor nano, so you would run nano /etc/initramfs-tools/modules after logging in as root at the text console) and at the very bottom add a new line with just the string raid5
6. run update-initramfs -u to rebuild the initrd so the raid5 module is available at boot.
7. Reboot
8. login as root again. Run fdisk /dev/sda and then hit p to print your partitions. You will see that /dev/sda1 starts at cylinder 1 and ends at some other small number. In my case it ends at cylinder 66, and partition 2 starts at cylinder 67. So on all the other drives I will start partition one at cylinder 67 and go to the end of the drive; that way each raid partition is the same size as /dev/sda2, which will eventually join the array too.
9. So run fdisk /dev/sdb and create a (n) new primary partition 1, starting at cylinder 67 and ending at the end of the drive. Change the type (t) to fd (Linux raid autodetect). Then hit w to save and quit.
10. repeat for sdc, sdd, sde, and sdf (or copy the table with sfdisk; see the one-liner at the end of this post).
11. create the software raid 5 with only 5 of the 6 members for now, using the keyword missing as a placeholder for /dev/sda, which is still in use; the array starts out degraded and sda is added later. Run the command mdadm --create /dev/md0 --level=5 --raid-devices=6 missing /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
12. add the raid 5 to the mdadm config file with mdadm --detail --scan >> /etc/mdadm/mdadm.conf (the appended line should look roughly like the example at the end of this post)
13. initialize the raid 5 as an LVM physical volume with pvcreate /dev/md0
14. extend the pve volume group onto the raid 5 with vgextend pve /dev/md0
15. you need to move the proxmox data that resides on the lvm on /dev/sda2 onto /dev/md0 so you will run pvmove /dev/sda2 /dev/md0. This took 8 hours to run for me, so go take a nap. It will print a % progress counter on the screen every 30 seconds or so.
16. Once that is complete, remove /dev/sda2 from the volume group by running vgreduce pve /dev/sda2
17. you will need to create a software raid partition on /dev/sda now, so run fdisk /dev/sda, delete partition 2, create a new partition 2 (starting at cylinder 67 in my case), change the type to fd again, and write it. You will get a warning that the partition table is in use and that the kernel will re-read it on the next boot.
18. At this point you can either reboot for the new partition table to take effect, or aptitude install parted and then run partprobe to update the kernel's partition table without rebooting.
19. now you will need to add /dev/sda2 to the raid5. Run mdadm --add /dev/md0 /dev/sda2
20. run watch -n 1 "cat /proc/mdstat" and wait a couple of hours while the raid5 rebuilds onto the new member. You won't lose any data.
21. now you need to grow the data logical volume to use the rest of the space in the raid 5. In my case I ran lvextend --size +3TB -n /dev/mapper/pve-data and then kept running lvextend --size +100GB -n /dev/mapper/pve-data until it gave me an error about not enough blocks available because I had less than 100GB left. You can then go to smaller increments than 100GB to fill up all of your HD space (or see the note at the end of this post for doing it in one step).
22. Now you will need to resize the ext3 filesystem to fill the newly extended logical volume. I had about 4 TB to expand onto, so the resize took about an hour to run. Just run resize2fs /dev/mapper/pve-data and it will expand the ext3 filesystem.
23. You are done. Now you can run df -h and you will see something like:
proxmox:/var/lib/vz/images# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   95G  989M   89G   2% /
tmpfs                 7.9G     0  7.9G   0% /lib/init/rw
udev                   10M  824K  9.2M   9% /dev
tmpfs                 7.9G     0  7.9G   0% /dev/shm
/dev/mapper/pve-data  4.4T  101G  4.3T   3% /var/lib/vz
/dev/sda1             504M   58M  422M  12% /boot
At this point you may want to reboot just to ensure that Proxmox comes up OK; it should. I didn't have to make any changes to grub. I hope someone else finds this information useful.
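A couple of extras that might save the next person some typing. For step 10, instead of walking through fdisk four more times you can copy the table you just made on sdb to the remaining drives (the drives are identical, so the tables can be identical too):

for d in sdc sdd sde sdf; do sfdisk -d /dev/sdb | sfdisk /dev/$d; done

This keeps the fd type and the cylinder 67 start you set on sdb.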
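For step 12, the line that mdadm --detail --scan appends to /etc/mdadm/mdadm.conf should look roughly like this, with your array's own UUID in place of the placeholder:

ARRAY /dev/md0 level=raid5 num-devices=6 UUID=<your array UUID>

If that line is missing or wrong the array won't assemble cleanly at boot, so it's worth eyeballing the file before you reboot.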
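And for step 21, instead of growing the LV in trial-and-error increments, you can ask LVM how many physical extents are free and add exactly that many in one go (replace the placeholder with the Free PE count that vgdisplay reports):

vgdisplay pve | grep Free
lvextend -l +<free PE count> /dev/mapper/pve-data

Newer LVM versions can also do lvextend -l +100%FREE /dev/mapper/pve-data, which means the same thing.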