Assign some hard disks to an OpenVZ CT

lojasyst

Renowned Member
Apr 10, 2012
Hello

I originally wrote this post in the Proxmox VE 1.x forum; I hope a moderator erases that post.


I am setting up a container running the Squid cache server. The server running Proxmox is a Dell PowerEdge 2950 with six 15K SAS HDs, 32 GB of RAM, and dual processors.

I am not too concerned about the reliability or redundancy of the information in the cache store, because it can be regenerated. I just need maximum performance.

I know I can join all the hard disks through RAID or LVM to get one big cache store, but for performance reasons I want to have four Squid cache stores, each one using an individual hard disk, along the lines of the config below.
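For reference, the squid.conf I have in mind would look roughly like this; cache_dir and the aufs store type are standard Squid, but the mount points and sizes here are just placeholders:

# one cache store per spindle: <type> <path> <size in MB> <L1 dirs> <L2 dirs>
cache_dir aufs /srv/cache1 60000 16 256
cache_dir aufs /srv/cache2 60000 16 256
cache_dir aufs /srv/cache3 60000 16 256
cache_dir aufs /srv/cache4 60000 16 256

Squid spreads objects across multiple cache_dir stores on its own (least-load selection by default).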

So, the question:

How can I assign 4 separate HDs to an OpenVZ container?

Thanks
 
You can't, as far as I know. It's a container, so it already has close to raw access to the disk the container is on.
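Edit: apparently the OpenVZ wiki does describe a bind-mount mechanism through a per-CT mount script, so something like this might do it (untested; CT ID 101 and the paths are just examples, the target directories must already exist inside the container, and the script has to be executable):

#!/bin/bash
# /etc/vz/conf/101.mount -- run on the host each time CT 101 is started
. /etc/vz/vz.conf
. ${VE_CONFFILE}
mount -n --bind /mnt/cache1 ${VE_ROOT}/srv/cache1
mount -n --bind /mnt/cache2 ${VE_ROOT}/srv/cache2
mount -n --bind /mnt/cache3 ${VE_ROOT}/srv/cache3
mount -n --bind /mnt/cache4 ${VE_ROOT}/srv/cache4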

/b
 

That server probably has a PERC 5/i or 6/i. It's an LSI-based controller, and an array on it will be much faster than individual disks.

Set up RAID 0 on the controller if you don't care about the data, but I would use RAID 10; it saves you from reinstalling if a disk blows up. I have the exact same machine with the same specs running 4 VMs and it flies, with pretty well no I/O delays. Two of the VMs run HAProxy and Varnish caches, and I've never had any I/O issues.
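If you would rather script the array than use the controller's Ctrl-R BIOS utility, LSI's MegaCli tool works on these PERCs. A rough sketch only; the 32:N enclosure:slot IDs here are illustrative, so list yours first:

MegaCli -PDList -a0    # list physical disks with their enclosure:slot IDs
# RAID 10 = a span of two mirrored pairs, write-back cache, read-ahead
MegaCli -CfgSpanAdd -r10 -Array0[32:0,32:1] -Array1[32:2,32:3] WB RA Direct -a0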

My setup is RAID 10:

lspci: RAID bus controller: Dell PowerEdge Expandable RAID controller 5

blockdev --setra 4096 /dev/mapper/pve-root
blockdev --setra 4096 /dev/mapper/pve-data
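(blockdev --setra counts 512-byte sectors, so 4096 means 2 MB of read-ahead. The setting does not persist across reboots, so those two lines can go in /etc/rc.local.)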

pveperf:
BUFFERED READS: 432.12 MB/sec
AVERAGE SEEK TIME: 5.13 ms
FSYNCS/SECOND: 3160.54

A second 2950, but with 7200 RPM SAS drives instead of 15K:

BUFFERED READS: 314.58 MB/sec
AVERAGE SEEK TIME: 8.63 ms
FSYNCS/SECOND: 2447.91
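(pveperf takes a path argument if you want to test a specific volume, e.g. pveperf /var/lib/vz for the data volume.)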

This will beat the pants off single SAS disks any day, even with 4 VMs.