Hi everybody, I've been playing with Proxmox for some months and really appreciate it! So I decided to go with this solution for my home server.
Now I have two ideas and I'm trying to find the best one; I'm sure you can help.
The server will be used for many VMs (firewall, proxy, Plex, ownCloud, ...) and as a NAS (OpenMediaVault).
I have 6 HDDs, no RAID controller, and no ZFS because I don't have ECC RAM yet, so the 6 HDDs are connected directly to my motherboard.
I have found two solutions for my needs; please help me choose one.
First idea:
I just managed to set this up today: I installed Proxmox on two good USB sticks in RAID 1, and on this array Proxmox runs together with a small raw file for the OpenMediaVault VM. So Proxmox and OMV both run from the USB sticks. I passed the 6 HDDs directly through to OMV because I have VT-d capable hardware. Inside the OMV VM I created a RAID array that I want to use as storage for my raw or qcow2 files from Proxmox, so I set up an NFS share on OMV and added it to Proxmox. When Proxmox boots, OMV starts too, and Proxmox finds its NFS storage within a minute.
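For reference, this is roughly how the NFS share gets added on the Proxmox side (the storage name, IP and export path below are just examples, not my real values):

# on the Proxmox host, register the OMV export as an NFS storage
pvesm add nfs omv-nfs --server 192.168.1.50 --export /export/vmstore --content images --options vers=3,soft

The same thing can also be done in the web GUI under Datacenter > Storage > Add > NFS.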
I have done some benchmarks with "dd" and found that I reach 370 MB/s write speed directly inside the OMV VM that has the HDDs passed through, and I lose about 70 MB/s when I run the same command on the Proxmox host using the NFS share. I have already tuned NFS (mount options, MTU, ...).
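This is roughly the kind of command I used for the write test (file path, block size and count are just examples):

# sequential write test; oflag=direct bypasses the page cache so the result is not inflated by RAM
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 oflag=direct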
Questions:
Does a passed-through HDD have the same performance as it would have on a physical machine, or do we lose some I/O?
How can I reach 300 MB/s through a single virtual e1000 network card?!
Second idea (simpler):
Just make a RAID 10 array with my 6 drives, put Proxmox on it, and add LVM as storage.
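In case it is clearer, this is roughly what I have in mind (device names, volume group and storage names are just examples):

# create a 6-disk RAID 10 array with mdadm
mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[a-f]
# put LVM on top of it
pvcreate /dev/md0
vgcreate vmdata /dev/md0
# expose the volume group to Proxmox as an LVM storage
pvesm add lvm vmstore --vgname vmdata --content images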
Questions:
Will the performance be FAR better, or just a little?
Is it a problem to have very large raw or qcow2 files? As I said, I want to set up an OpenMediaVault VM, and this VM will have 2.5 TB of disk space. That's why I wanted to pass the HDDs directly to it. I don't know if a 2.5 TB qcow2 or raw file is a good idea...
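Just to illustrate the size I am talking about (VMID, storage name and path are examples): on an LVM storage the disk would be one huge raw logical volume, e.g.

# add a 2500 GB disk to VM 100 on the "vmstore" storage
qm set 100 --scsi1 vmstore:2500

while on a directory storage it would be one huge file, e.g.

qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 2500G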
Please help me with this ^^
Sorry for my English, I do my best; it's easy for me to read but more complicated to write...