RAID Problem

twister666

Hello everyone!

I hope someone can help me out with my RAID problem. I have installed Proxmox 2.0 on a separate 60GB hard drive that holds Proxmox and ISOs, and I also have a 4TB RAID10 array for images and data. I am using a RocketRaid 3530 controller card with 8 x 1TB WD Caviar Black drives (TLER was disabled on each drive). The array is configured with a 4k sector size and a 1024k stripe size. I tested it under MS Server 2008R2 and it works well, so physically the disks and the RAID card are fine. In Proxmox, however, I am having trouble installing KVM machines on the RAID array: I can create virtual hard drives for VMs on it, and the guest sees the virtual drive, but as soon as the system tries to write changes it fails.

Here is some of my config data.


I partitioned the RAID array with parted as follows.
root@vsrv0:~# parted /dev/sda
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary ext4 1 100%
(parted) align-check optimal 1
1 aligned
(parted) print
Model: HPT VD0-0 (scsi)
Disk /dev/sda: 4000GB
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  4000GB  4000GB               primary
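
(As a side note, the same partitioning can be done non-interactively; a sketch, assuming the array still shows up as /dev/sda:)

# one-shot equivalent of the interactive session above (destroys the existing label!)
parted -s /dev/sda mklabel gpt mkpart primary ext4 1MiB 100%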



Then I created the file system on the array:

root@vsrv0:~# mkfs.ext4 -m 0 /dev/sda1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=0 blocks
244170752 inodes, 976682496 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
29806 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
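
One thing I notice in that output is "Stride=1 blocks, Stripe width=0 blocks", i.e. mkfs did not pick up the RAID geometry. A sketch of passing it explicitly, assuming the 1024k stripe means a 1024 KiB chunk per disk and that 4 of the 8 RAID10 drives carry data:

# stride = chunk / block size = 1024 KiB / 4 KiB = 256
# stripe-width = stride * data disks = 256 * 4 = 1024
mkfs.ext4 -m 0 -E stride=256,stripe-width=1024 /dev/sda1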


Then I mounted the array

root@vsrv0:~# mkdir /mnt/sda1
root@vsrv0:~# mount /dev/sda1 /mnt/sda1


I then added it in Proxmox as a Directory storage with /mnt/sda1 as the mount point.
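
To make the mount survive a reboot I also plan to add it to /etc/fstab; a sketch, assuming the array keeps enumerating as /dev/sda1 (the UUID from "blkid /dev/sda1" would be more robust):

# /etc/fstab
/dev/sda1  /mnt/sda1  ext4  defaults  0  2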


This is my lspci:

root@vsrv0:~# lspci
00:00.0 Host bridge: Intel Corporation 5400 Chipset Memory Controller Hub (rev 20)
00:01.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 1 (rev 20)
00:05.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 5 (rev 20)
00:09.0 PCI bridge: Intel Corporation 5400 Chipset PCI Express Port 9 (rev 20)
00:0f.0 System peripheral: Intel Corporation 5400 Chipset QuickData Technology Device (rev 20)
00:10.0 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev 20)
00:10.1 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev 20)
00:10.2 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev 20)
00:10.3 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev 20)
00:10.4 Host bridge: Intel Corporation 5400 Chipset FSB Registers (rev 20)
00:11.0 Host bridge: Intel Corporation 5400 Chipset CE/SF Registers (rev 20)
00:15.0 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev 20)
00:15.1 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev 20)
00:16.0 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev 20)
00:16.1 Host bridge: Intel Corporation 5400 Chipset FBD Registers (rev 20)
00:1c.0 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 1 (rev 09)
00:1c.1 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 2 (rev 09)
00:1c.2 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 3 (rev 09)
00:1c.3 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 4 (rev 09)
00:1d.0 USB controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #1 (rev 09)
00:1d.1 USB controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #2 (rev 09)
00:1d.7 USB controller: Intel Corporation 631xESB/632xESB/3100 Chipset EHCI USB2 Controller (rev 09)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d9)
00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC Interface Controller (rev 09)
00:1f.1 IDE interface: Intel Corporation 631xESB/632xESB IDE Controller (rev 09)
00:1f.2 SATA controller: Intel Corporation 631xESB/632xESB SATA AHCI Controller (rev 09)
00:1f.3 SMBus: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus Controller (rev 09)
01:02.0 VGA compatible controller: XGI Technology Inc. (eXtreme Graphics Innovation) Z7/Z9 (XG20 core)
02:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller
03:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller
04:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE6145 SATA II PCI-E controller (rev a1)
05:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE6145 SATA II PCI-E controller (rev a1)
06:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Upstream Port (rev 01)
06:00.3 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express to PCI-X Bridge (rev 01)
08:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E1 (rev 01)
08:02.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E3 (rev 01)
09:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
09:00.1 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
0c:00.0 RAID bus controller: HighPoint Technologies, Inc. Device 3530 (rev 09)



Thanks
 
Hi,
have you defined "images" as content for the storage (in the storage menu)?
Does pve create the sub-directory (/mnt/sda1/images)?
And is the raw disk of the VM also created? (/mnt/sda1/images/VMID/vm-VMID-disk-1.raw)

Does the same issue happen if you set the cache of the VM's hard disk to "Write through"?

Any hints in the log files?
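
(On the Proxmox host I would start with the system log and the kernel ring buffer, for example:)

tail -n 50 /var/log/syslog
dmesg | tail -n 50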

Udo
 
Hi Udo!
Yes, I have the /mnt/sda1/images directory and the raw file is created under the VM folder as usual, e.g. /mnt/sda1/images/101/vm-101-disk1.raw. Inside the VM I can see the hard drive but can't edit it, and I am not sure what the cause could be. On the RAID controller the cache is set to Write Back; I will check whether changing it to Write Through helps. I hope I don't have to re-initialize the array.


Could you please let me know which log files I should check? I am pretty new to Linux.

Thanks
 

Hi,
"write through" is a setting from pve, not on the raidcontroller!
Stop the VM, Select VM -> Hardware, select disks -> Edit -> Cache: select "Write through". Start VM.

How do you want to edit the disk inside the VM? Which error occur?

Can you post the config of the VM?
Code:
cat /etc/pve/qemu-server/VMID.conf
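
The disk line in that file should then also show the cache mode; something like the following, where the bus (ide0), the storage name "data" and the VMID are only placeholders:
Code:
ide0: data:101/vm-101-disk-1.raw,cache=writethrough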

Udo
 
Thank you Udo for helping out. As soon as I changed the cache to Write through as you suggested, the virtual disk became editable and now works. Udo, is there a specific reason why my future VMs need Write through instead of the default setting?