Hi,
my goal is to renew my fileserver (which also hosts some additional services, all currently running on Ubuntu 14.04).
The fileserver should deliver good performance at low CPU usage (CIFS at ~100 MB/s) and be protected a little against HDD failures. (Backups are done separately.)
So the plan is to create a RAID5 on 4x 4 TB WD Red disks. The RAID was set up by OMV inside a KVM guest. (On top sits an LV, ..., and an ext4 filesystem.)
The base machine is an HP MicroServer Gen8 with 10 GB ECC RAM and, for the moment, the default Celeron CPU with VT-d (upgrade to a Xeon later if it is really necessary).
OMV under KVM runs "clean, without errors", but it is really slow: 30-80 MB/s over Gigabit LAN, and small files are even slower. CPU load is at 40-70% when files are read or written by only one user.
A) My next idea was to use LXC, but I don't see the disks inside the LXC container with fdisk. For this kind of setup I need to see the raw disks, not just a mount point.
B) Is there a better way to provide a fast RAID5 to OMV with less CPU load? (A rough sketch of what I have in mind follows below.)
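What I have in mind is assembling the array on the Proxmox host itself with mdadm and handing only the finished filesystem to the guest. A minimal sketch, assuming the four WD Reds show up as /dev/sd[a-d] and /srv/raid5 is a free mount point (untested on this box):

# Build the RAID5 on the host instead of inside the OMV guest
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Persist the array so it reassembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Filesystem on top; the mount point would then be handed to the guest
mkfs.ext4 /dev/md0
mkdir -p /srv/raid5
mount /dev/md0 /srv/raid5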
Sorry for all the questions; I have a lot of Linux experience, but not with Proxmox.
Greetings, and thanks for every answer.
"fdisk -l" on the host. The creation of this fdisk output takes 15-20 Seconds...
my goal to renew my fileserver (thats hosting some more services, all running now on Ubuntu 14.04)
The Fileserver shall have an good performance at low CPU usage (CIFS with ~100Megabyte/s), and shall be protected a little bit to HDD-Failures. (Backup will be done additional)
So the plan is to create a RAID5 on 4x4TB WD-Red Disks. The RAID was setup by OMV in an KVM. (On Top is a LV , ..., and a EXT4 Filessystem)
Base-machine is a HP Microserver Gen8 with 10GB ECC RAM an in the Momant the default Celeron CPU with VT-D (Upgrade to a XEON later if its really nessesary)
OVM under KVM runs "clean without errors", but its really slow. 30-80Megbyte/s over Gigabit-LAN. And small files really slow. CPU load ist at 40-70% when files are read oder written by only one user.
A) Next idea was to use LXC. But i dont see the disks in the LXC with fdisk. When i want to use this kind of setup i have to see the raw-disks and not only an mountpoint.
B) Is there an better way to provide an fast RAID5 to "OMV" with less CPU Load ?
Sorry for the lot of questions, i have a lot of linux experience, but not with proxmox.
Greetings and thanks for every answer.
I would like to ask:
Is it possible to directly access the host's hard disks from inside a container?
(I'm playing with a RAID5 created with OpenMediaVault in a Proxmox KVM guest; when I enter "fdisk -l", I see nothing in my container.)
If it's possible, I will open a new thread for some strange reactions of the system...
The old KVM config:
root@host:/etc/pve/nodes/host/qemu-server# cat 101.conf
...
virtio2: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1SJA51R,size=3907018584K
virtio3: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6RKT37L,size=3907018584K
virtio4: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E3NA1C0X,size=3907018584K
virtio5: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0AA8LS6,size=3907018584K
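(I have not tried explicit cache/iothread options on these disks yet; something like the following is on my list to test. This is just an assumption, not verified on this machine:)

# Hypothetical tuning for VM 101: bypass the host page cache and give the
# disk its own I/O thread (would be repeated for virtio3..virtio5)
qm set 101 --virtio2 /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E1SJA51R,cache=none,iothread=1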
Yes, you can give LXC containers mount points, but this is not implemented in PVE 4.
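(If you don't mind bypassing the PVE tooling, the raw LXC keys can be set by hand in the container config. A sketch, assuming container 102, a privileged container, and /dev/sda as the disk to expose; 8:0 is the usual major:minor pair for /dev/sda, check yours with ls -l /dev/sda:)

# /etc/pve/lxc/102.conf -- manual entries, not managed by the PVE GUI
lxc.cgroup.devices.allow: b 8:0 rwm
lxc.mount.entry: /dev/sda dev/sda none bind,create=file 0 0

An unprivileged container will most likely still refuse the device node, so this sketch only covers the privileged case.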
"fdisk -l" on the host. The creation of this fdisk output takes 15-20 Seconds...
root@host:/etc/pve/nodes/host/lxc/102# fdisk -l
Disk /dev/loop7: 22 GiB, 23622320128 bytes, 46137344 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sde: 3.7 GiB, 3963617280 bytes, 7741440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sde1 * 8192 7741439 7733248 3.7G b W95 FAT32
Disk /dev/sdf: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D0651D95-BE8A-43D9-ADBC-65FE2D5D0FE5
Device Start End Sectors Size Type
/dev/sdf1 34 2047 2014 1007K BIOS boot
/dev/sdf2 2048 262143 260096 127M EFI System
/dev/sdf3 262144 488397134 488134991 232.8G Linux LVM
Disk /dev/mapper/pve-root: 58 GiB, 62277025792 bytes, 121634816 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-swap: 9 GiB, 9663676416 bytes, 18874368 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-data: 149.8 GiB, 160805421056 bytes, 314073088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@host:/etc/pve/nodes/host/lxc/102#