Need an opinion for a new build

jim.bond.9862

Renowned Member
Apr 17, 2015
Hi, as the subject says, I need an opinion for my new home server build.
For the last month I have been playing with a Proxmox 4 beta install on Debian 8. I went that way because I need btrfs support on the host. It is nice and all, but I have been having some issues here and there, on top of the fact that a bad RAM stick in the system corrupted two of my data drives beyond recovery. Anyhow, what I want/need on the host is btrfs support, NFS support, and hardware passthrough to a VM. Mainly, I think that if I can set up a decent NFS server config, the passthrough can wait.
My idea is to simply NFS-share all my data drives from the host to an OpenMediaVault VM, and from there do Samba, FTP, and maybe even NFS shares of specific folders from the main host NFS share. Only the OMV VM will/should see the host NFS export; all other clients would get the data via OMV.
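Roughly what I have in mind on the host side; the path and addresses here are made up, just to show the shape of it:

Code:
# /etc/exports on the host -- export the data pool only to the OMV VM
/tank/data 192.168.1.50(rw,sync,no_subtree_check,no_root_squash)

# reload the export table
exportfs -ra

# and inside the OMV VM, mount the host export, e.g.
mount -t nfs 192.168.1.1:/tank/data /srv/hostdata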

Questions are:
Can I do/have all this on Proxmox 3.4? And how?

Or do I need to stick with the Proxmox 4 beta for all of this, set up like I have today: install Debian 8, then load Proxmox 4 on top, and so on?
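For reference, my current setup went roughly like this; the beta repo line is from memory, so treat it as approximate:

Code:
# on a plain Debian 8 (Jessie) install -- repo line approximate
echo "deb http://download.proxmox.com/debian jessie pvetest" > /etc/apt/sources.list.d/pve.list
wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update && apt-get install proxmox-ve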

Please give me options for this build.

Sent from my phone
 
Forget btrfs! It is not really stable, and it is also not easy. I have been using Proxmox for a long time now, since the beginning of the year, with ZFS and HW passthrough (a DVB-S card), and it works fine and stable. ZFS is supported by Proxmox; btrfs, I don't know, but I think not.
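The passthrough part was roughly this on my box; the PCI address and VM ID are only examples:

Code:
# /etc/default/grub -- turn on the IOMMU, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/pve/qemu-server/100.conf -- hand the DVB-S card to VM 100
hostpci0: 01:00.0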

Working with ZFS is only a good thing when you have a real SATA/SAS controller, not HW RAID or some shitty fake-RAID controller. The best one is the ServeRAID M1015:

http://www.redbooks.ibm.com/abstracts/tips0740.html



Best Regards.
 
OK fireon, I get what you are saying, but I have some issues with ZFS, so I would like to stick with btrfs. I have been running a server with a btrfs data pool for the last 3 years and like it so far. I had only one major glitch in 3 years, and it was due to a bad RAM stick. So, keeping btrfs support, do I stick with the version 4 beta or should I roll back to 3.4 for all my needs as in the OP?

Sent from my phone
 
Hmm, I never tested btrfs myself, only listened to forums and reports. If you say it is the right thing for you, then use it; I will not. Can you tell me what kind of problems you had with ZFS in the past? Maybe I can help you.
 
The controller you pointed to has no JBOD support, AFAIK. You would probably configure the disks as "n" single RAID0 devices, but I think this puts some special RAID metadata on the disks anyway and makes migrating to another controller (brand/model) a problem (say the server's controller stops working and you don't have another like it), or am I wrong?
 
I don't think that is the case. First of all, I do not use hardware RAID. I have a plain SuperMicro PCI-X 8-port SATA expansion card which does not support hardware RAID at all; it is an expansion card for JBOD use.
Second, I did not use software RAID (aka mdadm) either. I used the btrfs built-in RAID capability, which I had been using for the last 3 years on a different disk pool.
Last year I got a pair of Seagate 3 TB Barracuda drives that I wanted to migrate my data to.
Since the btrfs wiki boasts about its raw-device capability, I figured I would try it, so I built my new pool on raw devices, as opposed to partitioning and formatting the drives first and then building the pool from partitions.
Thing is, when you partition the drives first and build the pool from partitions, btrfs warns about the volume already containing a filesystem, yada yada yada, and about overriding it. BUT apparently there is a bug somewhere that can cause full data loss in a multi-device pool built on raw devices if the master device fails. (As far as I can tell, the master device is whichever drive was first in the list when the pool was created: if you create the pool using /dev/sdc /dev/sdb /dev/sdd, then /dev/sdc will be the master, even though there should not be any preference.)
So if you lose any other drive in the pool, it still works and can be mounted in degraded mode and accessed fine, but if you lose the master device, all is gone.
I was able to replicate this behavior several times using a VM setup.
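The test was basically this, with device names from my test VM:

Code:
# raid1 pool straight on the raw devices; /dev/sdc ends up as the "master"
mkfs.btrfs -d raid1 -m raid1 /dev/sdc /dev/sdb /dev/sdd

# detach /dev/sdc from the VM to simulate the master failing, then:
mount -o degraded /dev/sdb /mnt   # this is where the raw-device pool dies on me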

There is no issue if you partition the drives beforehand and use the partitions to build the pool, as in /dev/sdc1 /dev/sdb1 /dev/sdd1; in that case any device can fail and be replaced, most often with no ill effects.
Again, I tested this in a VM setup multiple times and can confirm that 9 times out of 10, under the same conditions, the raw-device pool is lost while the partitioned-device pool works as expected, with no usability lost on device failure.
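The partitioned variant I tested looked like this, again with test-VM device names:

Code:
# one partition spanning each drive (repeat for /dev/sdb and /dev/sdd)
parted -s /dev/sdc mklabel gpt mkpart primary 0% 100%

# build the pool from the partitions instead of the raw devices
mkfs.btrfs -d raid1 -m raid1 /dev/sdc1 /dev/sdb1 /dev/sdd1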

This is all using the native btrfs built-in RAID capability. At the moment I can only attest to RAID1, since RAID5/6 was not available last year and even now the 5/6 modes are not stable. I did not have enough disks for RAID10.
 
Well, ZFS did not actually fail for me; I just think it is not too straightforward to use, and at the time the memory requirements were too steep for me. Also, it is not as flexible as btrfs: you cannot easily convert RAID levels on the fly, nor expand/shrink pools as needed.
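For example, this kind of conversion works on a live, mounted btrfs pool (the mount point is made up):

Code:
# convert data and metadata of a mounted pool to raid1 on the fly
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# grow the pool later by just adding another disk
btrfs device add /dev/sde /mnt/pool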
Anyhow, my question is not really whether I should use btrfs, ZFS, or any other filesystem. The more pressing issue is: should I get the Proxmox 3.4 ISO and load that, or keep the Proxmox 4 beta setup on Debian 8 as I have now?
What is 3.4 based on? Debian 6? 7? What kernel does it use? Can it support btrfs if I log in to the CLI and apt-get btrfs-tools onto it?
These are the questions I am trying to find answers to at the moment.
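I guess I can probe that from the CLI with something like this, assuming the ISO's kernel ships the btrfs module at all:

Code:
# which kernel is 3.4 running?
uname -r

# does the kernel know about btrfs?
modprobe btrfs && grep btrfs /proc/filesystems

# userspace tools (the package was later renamed to btrfs-progs)
apt-get install btrfs-tools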

Because of my RAM issues I think my setup is a bit hosed, so I am inclined to start anew, but maybe with a more stable version if it applies.

Thanks.
 
