Server down. Last screenshot attached. Please advise.

ozgurerdogan

This was a brand-new HP DL180 G6 with RAID0 (two SATA disks only). It was built a couple of days ago, but today it is not starting up. The last screenshot is below. I would be happy if you could help me figure out which hardware I should investigate:

[Attached screenshot: 20120430_103103.jpg]
 
After googling for a while, I found some CentOS and Debian users reporting very similar issues as a kernel bug. Not sure if that is it. One of the SATA disks has a bad sector, so I sent it in under warranty. I hope that was the issue, because I have to install the system again...
 
Why not use Linux RAID10? -> http://www.ilsistemista.net/index.p...-raid-5-vs-raid-10-and-other-raid-levels.html

Define a chunk size of 1024k and the far-2 layout:
mdadm --create /dev/md(x) --chunk=1024 --layout=f2 --level=raid10 --raid-devices=2 /dev/sd(x) /dev/sd(y)
Create the ext3 file system this way:
mkfs.ext3 -E stride=256,stripe-width=512 -b 4096 -O dir_index /dev/md(x)
Use these mount options:
data=writeback,nobh,barrier=0,noatime and you get excellent performance.
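
For reference, a complete sequence might look like the sketch below; the device names, array name and mount point (/dev/sda1, /dev/sdb1, /dev/md0, /var/lib/vz) are just examples, adjust them to your own setup:

# Create the RAID10 array in 'far 2' layout with a 1024k chunk (example device names)
mdadm --create /dev/md0 --level=raid10 --layout=f2 --chunk=1024 --raid-devices=2 /dev/sda1 /dev/sdb1

# stride = chunk size / block size = 1024k / 4k = 256 blocks
# stripe-width = stride * 2 drives = 512 blocks
mkfs.ext3 -E stride=256,stripe-width=512 -b 4096 -O dir_index /dev/md0

# Example /etc/fstab line with the mount options above:
# /dev/md0  /var/lib/vz  ext3  data=writeback,nobh,barrier=0,noatime  0  2
mount -o data=writeback,nobh,barrier=0,noatime /dev/md0 /var/lib/vz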

Some readings:
root@esx1:~# hdparm -tT /dev/sda /dev/sdb

/dev/sda:
 Timing cached reads: 7128 MB in 2.00 seconds = 3565.51 MB/sec
 Timing buffered disk reads: 312 MB in 3.02 seconds = 103.46 MB/sec

/dev/sdb:
 Timing cached reads: 7106 MB in 2.00 seconds = 3554.31 MB/sec
 Timing buffered disk reads: 280 MB in 3.04 seconds = 92.18 MB/sec

root@esx1:~# hdparm -tT /dev/md2

/dev/md2:
 Timing cached reads: 7116 MB in 2.00 seconds = 3559.07 MB/sec
 Timing buffered disk reads: 614 MB in 3.06 seconds = 200.49 MB/sec

The above shows nearly read parity between RAID0 and RAID10; writing is, of course, not at parity. :)
 
Great, but what is the relation to my issue? Anyway, I changed the hard disk and so far it seems OK.
You mentioned you used RAID1, which is more or less what people automatically select for safety. I just showed that you can more than double the performance and still have the same safety that RAID1 provides. One other thing: I have experienced what your screenshot shows, and if I remember correctly it has always been on systems using RAID1. If I remember correctly, the situation seems to arise if the machine is rebooted a couple of times while the array is doing a resync.
 

No, actually the OP said they were using RAID0 (striping) with TWO drives... You can't do RAID-10 with only two drives; you need at least four...

:)
 
You can't mirror and stripe simultaneously to two drives; that doesn't even make sense...

Actually you can, and this is explained on Wikipedia; for example, this little bit of info:
"The first 1/f of each drive is a standard RAID-0 array. This offers striping performance on a mirrored set of only 2 drives."

I have a few servers running RAID 10 on two disks because it performs better than RAID1 for the particular workload they handle.
 
Yes, and it works surprisingly well here underneath my PVE:
cat /proc/mdstat
Personalities : [raid10]
md2 : active raid10 sda3[0] sdb3[1]
1446703104 blocks super 1.2 1024K chunks 2 far-copies [2/2] [UU]

md1 : active (auto-read-only) raid10 sda2[0] sdb2[1]
8189952 blocks super 1.2 1024K chunks 2 far-copies [2/2] [UU]

md0 : active raid10 sda1[0] sdb1[1]
10237952 blocks super 1.2 1024K chunks 2 far-copies [2/2] [UU]

And I have documented earlier in this thread that this specific Linux RAID10 layout more than doubles the read performance compared to the physical device.
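
If you want to double-check that an existing array really uses the far layout, something like this should do it (the array name /dev/md2 is just the one from my mdstat above):

mdadm --detail /dev/md2 | grep -E 'Raid Level|Layout|Chunk Size'
# For the configuration above, the Layout line should report the far copies,
# roughly: Raid Level : raid10 / Layout : far=2 / Chunk Size : 1024K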
 
Well, if e100 is doing it I must concede... However, I fail to understand how this could possibly yield good long-term results... I mean, sure - technically you COULD do RAID-6 on a single drive if you broke it up into 6 separate partitions and then treated each partition as a separate drive, but what's the point??
 
Like you, I too at first thought "how the $%^$ can you have a stripe and mirror with only two disks!?!?!"
But once you actually read and understand what is going on, you might be inclined to give it a try and benchmark it yourself. That is what I did, and it was so good I ended up using it on four mostly read-only NFS servers.

I have added some emphasis below that highlights why you would want to do this and how it actually works.

From the md man page:
Raid10

RAID10 provides a combination of RAID1 and RAID0, and is sometimes known as RAID1+0. Every datablock is duplicated some number of times, and the resulting collection of datablocks are distributed over multiple drives.

When configuring a RAID10 array, it is necessary to specify the number of replicas of each data block that are required (this will normally be 2) and whether the replicas should be 'near', 'offset' or 'far'. (Note that the 'offset' layout is only available from 2.6.18).

When 'near' replicas are chosen, the multiple copies of a given chunk are laid out consecutively across the stripes of the array, so the two copies of a datablock will likely be at the same offset on two adjacent devices.

When 'far' replicas are chosen, the multiple copies of a given chunk are laid out quite distant from each other. The first copy of all data blocks will be striped across the early part of all drives in RAID0 fashion, and then the next copy of all blocks will be striped across a later section of all drives, always ensuring that all copies of any given block are on different drives.

The 'far' arrangement can give sequential read performance equal to that of a RAID0 array, but at the cost of reduced write performance.

When 'offset' replicas are chosen, the multiple copies of a given chunk are laid out on consecutive drives and at consecutive offsets. Effectively each stripe is duplicated and the copies are offset by one device. This should give similar read characteristics to 'far' if a suitably large chunk size is used, but without as much seeking for writes.

It should be noted that the number of devices in a RAID10 array need not be a multiple of the number of replicas of each data block; however, there must be at least as many devices as replicas.

If, for example, an array is created with 5 devices and 2 replicas, then space equivalent to 2.5 of the devices will be available, and every block will be stored on two different devices.

Finally, it is possible to have an array with both 'near' and 'far' copies. If an array is configured with 2 near copies and 2 far copies, then there will be a total of 4 copies of each block, each on a different drive. This is an artifact of the implementation and is unlikely to be of real value.
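
If you want to compare the three layouts the man page describes, the create commands differ only in the --layout value; a rough sketch with example device names (you would of course only create one of these on a given pair of partitions):

# near copies: the two copies sit at roughly the same offset on the two devices
mdadm --create /dev/md0 --level=raid10 --layout=n2 --raid-devices=2 /dev/sda1 /dev/sdb1
# far copies: first copy striped over the early part of both disks, second copy over a later section
mdadm --create /dev/md0 --level=raid10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1
# offset copies: each stripe duplicated and shifted by one device (kernel 2.6.18 or later)
mdadm --create /dev/md0 --level=raid10 --layout=o2 --raid-devices=2 /dev/sda1 /dev/sdb1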
 
