Fileserver based on Proxmox and OMV

ranseyer

Jul 31, 2015
Hi,

My goal is to renew my fileserver (which also hosts a few other services, all currently running on Ubuntu 14.04).
The fileserver should deliver good performance at low CPU usage (CIFS at ~100 MB/s) and should offer some protection against HDD failures. (Backups will be done additionally.)

So the plan is to create a RAID5 across 4x 4TB WD Red disks. The RAID was set up by OMV inside a KVM guest. (On top is an LV, ..., and an ext4 filesystem.)
The base machine is an HP MicroServer Gen8 with 10GB ECC RAM and, for the moment, the stock Celeron CPU with VT-d (I will upgrade to a Xeon later if it is really necessary).

OMV under KVM runs clean, without errors, but it is really slow: 30-80 MB/s over Gigabit LAN, and small files are even slower. CPU load is at 40-70% while files are read or written by only one user.

A) The next idea was to use LXC, but I don't see the disks inside the LXC with fdisk. For this kind of setup I need to see the raw disks, not only a mountpoint.

B) Is there a better way to provide a fast RAID5 to OMV with less CPU load?


Sorry for all the questions; I have a lot of Linux experience, but not with Proxmox.

Greetings, and thanks for every answer.




I would like to ask:

Is it possible to directly access the host's hard disks from within a container?
(I'm playing with a RAID5 created with OpenMediaVault in a Proxmox KVM; when I run "fdisk -l" I see nothing in my container.)
If it is possible, I will open a new thread about some strange reactions of the system...



The old config for the KVM:

root@host:/etc/pve/nodes/host/qemu-server# cat 101.conf
...
virtio2: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1SJA51R,size=3907018584K
virtio3: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6RKT37L,size=3907018584K
virtio4: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E3NA1C0X,size=3907018584K
virtio5: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0AA8LS6,size=3907018584K


Yes, you can give LXC containers mount points, but this is not implemented in PVE 4.
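As a workaround you can add the mount point by hand in the container config. A minimal sketch, assuming the array is already mounted on the host at /mnt/raid (hypothetical path) and the container ID is 102; passing a raw lxc.mount.entry key through the PVE 4 config is an assumption here, not a documented feature:

# on the host: bind-mount the host directory into container 102
# (append the raw LXC key to the container config)
echo "lxc.mount.entry: /mnt/raid mnt/raid none bind,create=dir 0 0" >> /etc/pve/lxc/102.conf

Note that the container only sees the mounted filesystem this way, never the raw block devices, so "fdisk -l" inside the container will still show nothing.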



"fdisk -l" on the host. The creation of this fdisk output takes 15-20 Seconds...
root@host:/etc/pve/nodes/host/lxc/102# fdisk -l

Disk /dev/loop7: 22 GiB, 23622320128 bytes, 46137344 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sde: 3.7 GiB, 3963617280 bytes, 7741440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000


Device Boot Start End Sectors Size Id Type
/dev/sde1 * 8192 7741439 7733248 3.7G b W95 FAT32


Disk /dev/sdf: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D0651D95-BE8A-43D9-ADBC-65FE2D5D0FE5


Device Start End Sectors Size Type
/dev/sdf1 34 2047 2014 1007K BIOS boot
/dev/sdf2 2048 262143 260096 127M EFI System
/dev/sdf3 262144 488397134 488134991 232.8G Linux LVM


Disk /dev/mapper/pve-root: 58 GiB, 62277025792 bytes, 121634816 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-swap: 9 GiB, 9663676416 bytes, 18874368 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-data: 149.8 GiB, 160805421056 bytes, 314073088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@host:/etc/pve/nodes/host/lxc/102#
 
Hi,

did you try passing your disks directly through to the KVM?
Then you have no overhead:

"qm set <VMID> -virtio<X> /dev/disk/by-id/<UUID>"
 
Yes, I did.

I had written above:
root@host:/etc/pve/nodes/host/qemu-server# cat 101.conf
...
virtio2: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1SJA51R,size=3907018584K

I have done one more test directly on the target hardware with native Ubuntu, with the same result. The Celeron CPU is too slow for RAID5 (30-70% used). I had already planned to buy a Xeon, but now it has to happen sooner...
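A quick way to separate a disk limit from a CPU limit, assuming the software RAID is /dev/md0 and is mounted at /mnt/raid (both hypothetical names):

# sequential read from the array: RAID5 reads need almost no parity math
dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
# sequential write through the filesystem: this exercises the parity calculation
dd if=/dev/zero of=/mnt/raid/test.bin bs=1M count=4096 oflag=direct

If the write throughput drops far below the read throughput while top shows dd and the md kernel threads saturating a core, the CPU really is the bottleneck.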


Second thing: direct access to disks in an (LXC) container.
I have found no real way to use this.
a) There is no way to use a disk 100% directly.
b) Via a mounted disk:
- I can mount a filesystem on the host.
- Proxmox uses images for the root fs, so it is not possible to mount something inside.
- If a mounted disk shall be used in an LXC container, the virtual system has to be changed so that it boots not from an image but from a directory; only then can the container see the disk(s) mounted inside.
- I'm not sure this is a good idea, because many features of Proxmox are not usable with such a container...

Now I have switched back to KVM for my new fileserver. When the Xeon CPU is installed I will decide finally whether to use this concept.
 
Yes, I will try to buy one soon (used, if possible). But a "v2", because the HP Gen8 server has socket 1155.
Thanks for the hint.
 

nfs is your friend :)
 
What do you mean exactly?

Sorry if my answer wasn't helpful enough.

I've never tried to mount a local hard drive into an LXC container.

I only got two things working:
- iSCSI mounts
- NFS mounts

That's why I answered "nfs is your friend".

Regards
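For the NFS variant, a minimal sketch, assuming the OMV VM exports /export/data and is reachable at 192.168.1.50 (both hypothetical):

# inside the container (or on the host): install the client and mount the export
apt-get install nfs-common
mkdir -p /mnt/data
mount -t nfs 192.168.1.50:/export/data /mnt/data

Depending on the container's restrictions, the mount may have to be done on the host and handed into the container instead.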

fine
 
My solution for that was to install a KVM with an iSCSI target. If you need the space for multiple machines, you can try to handle it via GlusterFS or OCFS...
After that you can connect the iSCSI initiator from each LXC container over the internal bridge.
(I'm using VLAN 99 for the iSCSI connection on an internal interface; that way you can push it to MTU 9000.)

That's my iSCSI solution for that problem.
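On the initiator side that boils down to something like this (a sketch using open-iscsi; the target IP 10.0.99.1 on VLAN 99 and the IQN are hypothetical):

# inside the LXC container: discover and log in to the target over the internal bridge
iscsiadm -m discovery -t sendtargets -p 10.0.99.1
iscsiadm -m node -T iqn.2015-07.local.storage:data -p 10.0.99.1 --login
# the LUN then appears as a normal local block device (e.g. /dev/sdX)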
 
