Proxmox Home Server All-in-one Configuration?

pcmofo

Well-Known Member
Feb 12, 2016
Hello everyone. I am new to Proxmox but familiar with VM configuration and networking. I am rewiring my entire house with Cat-6, business-class networking, and server hardware. My goal is to create a reliable home network with a variety of servers and services, with Proxmox at the center of it all.

I'm hoping I can post my Proxmox server plans and get some feedback on how I should best configure my system.

I recently built a 4U, 24-bay FreeNAS server with the following specs:

Case: SC846E16-R1200B
Power Supply: PWS-920P-SQ
Backplane: BPN-SAS2-846EL1
RAID Card: IBM M1015
Motherboard: X9DRi-LN4F+
Processor: 2x Intel Xeon E5-2670, 8-core, 2.60GHz (LGA 2011)
RAM: 64GB (4x 16GB) Hynix HMT42GR7MFR4C-PB

It's way over-spec'd for my home use, so that I can expand the storage later using the same hardware. After playing with Proxmox for a few weeks, I think it is feasible to use this hardware as my primary Proxmox server to handle all of my home networking and services. Let me list the servers I plan on running and the resources I will be allocating.

Cores - Memory - HDD - Server type - OS - Notes
16 - 32GB - 8GB - NAS - FreeNAS - Direct access to PCIe SAS HBA
1 - 8GB - 8GB - Router - pfSense - Direct access to NICs
1 - 1GB - 10GB - Nagios - CentOS
1 - 2GB - 60GB - Torrent - Ubuntu
8 - 16GB - 60GB - NVR - Windows 10 - Needs access to its own 3-drive RAIDz1 for lots of R/W
2 - 4GB - 60GB - Plex Media Server - Ubuntu/CentOS
1 - 1GB - 10GB - Crashplan - Ubuntu

From what I've read, I can basically back up my FreeNAS config, install Proxmox on the box, create and configure a new FreeNAS VM, upload the config, and I'll be good to go as far as the existing FreeNAS services on the server are concerned.

I've also created most of the above VMs on a test server in Proxmox, so I should be able to migrate those servers to the new build easily.

I would add dual 256-512GB SSDs in RAID 1 for all of the VMs to run from. Proxmox itself would most likely run from a USB drive.

The server currently has 4x GbE NICs, I have a PCIe card with 2 more, and I would add additional ones as needed to support LACP for FreeNAS and dedicated connections for pfSense and the NVR.

To top everything off, I will be using the pfSense VM to route traffic as needed between 4 internal VLANs, with one specifically for the IP cams and the NVR VM.

Some questions I have regarding this setup:
1. Does this seem like a practical solution given the hardware and services I need to run for 2-10 clients?
2. Is there a way that one VM can talk to another over the network that is faster than going out over GbE to the switch and back? For example, could Plex or my NVR talk to the FreeNAS box faster than GbE using an internal network or specific settings, since they are both running on the same hardware?
3. Can I over-allocate processors/memory to any/all of the VMs? I assume sometimes Plex might be the only server under a heavy load and other times it might be the NVR; is there a configuration that would better dynamically allocate the resources of the server?
4. Would any of these VMs benefit from dedicated NICs?
5. Is there an option to bond multiple physical NICs on the server in Proxmox and then assign a bunch of servers' virtual NICs to that bonded group, or can I only do bonding inside the VM?


I think that's about all I can come up with for now. I look forward to hearing your thoughts, thanks!
 
1. I would think you've knocked it out of the park on most things. But unless you include a beowulf cluster somewhere, your setup will lack credibility on Slashdot.

2. Yes. You simply define a virtual LAN on a non-routable subnet on which the VMs all communicate. Packets won't leave the box and speeds are only really limited by the CPU and I/O of the host.
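As a sketch of what that looks like on the host (vmbr1 and the 10.10.10.0/24 subnet are made-up example values), you add a bridge with no physical ports to /etc/network/interfaces:

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0

Then give each VM a second virtual NIC attached to vmbr1 with a static 10.10.10.x address, and traffic between them never touches the physical switch.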

3. It depends a bit on what you mean by "over allocate", but you can use memory ballooning with KVM (see the wiki on this). Processor as well if you're using containers, I believe.
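For KVM the balloon target is set per-VM; as an illustrative example (VMID 100 and the sizes are placeholders):

qm set 100 -memory 8192 -balloon 2048

which, as I understand the wiki, lets the guest float between 2GB and 8GB depending on host memory pressure.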

4. Possibly - but I would think any performance gains would be capped by other stuff like I/O (and in any case see 5 below).

5. Not sure as I've not tried it, but there's a cryptic entry in the wiki about bonding. If you do investigate this, we would all be grateful if you could flesh out the docs. Proxmox is a great product, but as you can see their documentation sucks rather badly :)
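From what I can tell it boils down to standard Debian-style bonding; roughly like this (NIC names and addresses are examples, and the switch ports must be configured for LACP too):

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

VMs bridged to vmbr0 then share the bond with no bonding config needed inside the guests, which would also answer your question 5.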
 
Hello pcmofo,

I have the same controller, and I also ran FreeNAS in the past ;) I flashed it to IT mode so it acts as a plain SAS controller without the RAID BIOS.

Your server hardware is strong enough. You can assign generous resources to every VM, but be careful: it is easy to overload the host. I would split things up at least a little. You should also use the cpuunits flag to control how CPU time is shared.
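For example (the VMID and value are only placeholders):

qm set 101 -cpuunits 2048

gives that VM proportionally more CPU time than the others when the host is under load.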

For the network we have layer-2 switches. Bonding only works really well with LACP; all the other modes have had heavy problems for us in production. Active-backup mode also works, but that makes no sense for you.

Communication between VMs works internally; we copied an SQL database from one VM to another for about 2 months at over 800MB/s :) but this also depends on your disk speed. FreeNAS as a VM is not a good idea. As I said, we ran FreeNAS in the past, and NAS4Free too, but I must say that PVE with ZFS as the OS is much easier to operate. The FreeNAS web interface was too overloaded and too complicated. ZFS on Proxmox needs only two commands: https://pve.proxmox.com/wiki/Storage:_ZFS

2-10 clients are OK, but it also depends on what the clients are doing, and whether they do it at the same time. If 10 clients each send a Blu-ray movie to the server, is 1Gbit enough, or do you need LACP with more interfaces? Even with a trunk, a single client still only gets one link's worth of write bandwidth. Maybe you should give every VM two interfaces, or better, use 10Gbit interfaces from the server to the switch. Think twice about what you really need here.

VLANs are also no problem. Leave the server VLAN untagged and tag all the other VLANs onto the bonded interface. Then you set the tag directly on the VM's interface.
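In the VM config that per-interface tag looks something like this (MAC, bridge, and VLAN ID are examples):

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=30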

I would use PVE as the storage too; do not use FreeNAS as a VM. Install PVE, set up a new extra ZFS pool, create some datasets, and share them easily with NFS or Samba. We do this on a lot of servers and it works really well. You could also use a central storage box with a 10Gbit interface.
I don't want to talk you into anything; we just found operation with ZFS on Proxmox much easier. In particular, ZFS on Linux does not have such high hardware demands as on Unix. But do what you want: test it with PVE, test it with FreeNAS :)
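The whole setup is only a handful of commands; as a sketch (pool name and disks are examples, and /dev/disk/by-id names are safer for real pools):

zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zfs create tank/media
apt-get install samba

and then point a share in /etc/samba/smb.conf at /tank/media.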

At home I also pass through a PCIe VDR card; it works really fine in a Gentoo VM :)

Good luck with your project!

https://pve.proxmox.com/wiki/Network_Model
 
Your setup looks awesome, but I'd make one suggestion: passing your disks to FreeNAS may not be desirable, especially if you plan to serve storage from the FreeNAS instance to your other VMs. As a matter of fact, since you're planning on having separate instances for some of what FreeNAS does (torrent, media server, etc.), it may be better to simply not bother.

Proxmox is perfectly capable of creating ZFS volume(s), and you can easily share any/all of it using a container serving SMB or whatever your transport of choice is. Moreover, you can get by with fewer reserved resources and balloon your way up if you use containers for all your different instances, which covers everything except pfSense (not sure what NVR is). Which leads to the next point:

Why do you want to house your router on the stack? Aside from the fact that this represents a security risk, it also means that your network becomes unusable when you reboot your Supermicro box. Get an EdgeRouter X and be done with it; it'll perform better too.
 
Thanks guys for all the quick replies, links and info!

Regarding the ZFS storage specifically, I think I would switch away from FreeNAS IF I could simply eject my current zpools and import them into PVE without having to backup/migrate the data. Thankfully I have some test pools I can try this process on first. I have a 6x 4TB RAIDz2 primary pool and a 6x 2TB RAIDz2 backup pool that uses ZFS replication to back up the data on the primary pool. Weekly snapshots are taken of the primary pool so I can roll back mistakes like accidental deletes etc. A large SMB share holds all the media for the torrent VM to drop stuff into and Plex to pull stuff out of the server. Separate shares are set up for photo/video editing and individual computer backups. The main reason I use FreeNAS is to manage the shares, space/quotas, and users, along with all of the ZFS bonuses like emailing me when a SMART event occurs, a drive dies, or an update is available.

Can the ZFS tools in Proxmox mirror FreeNAS functionality such as shares, users, and notifications?

The reason to do pfSense as the router in a VM is 1) I can remotely start/manage the VM even if something happens to the box; pfSense supports failover etc. so I could have more than one instance running if needed, or more than one configuration to swap between. 2) I run multiple VPNs in and out using pfSense; I don't believe that the EdgeRouter supports OpenVPN. 3) I'm pretty happy with the pfSense performance, and the idea that I could add additional ports or extra memory in the future is awesome. The whole network won't go down if the VM or Proxmox goes down, as the LAN will all still have IPs assigned; they just won't be able to get out to the internet. Though I do like the idea of using all Ubiquiti products... router, switch, & APs.

The NVR is a Network Video Recorder: a Windows 10 box running the security camera recording software Blue Iris, which unfortunately only runs on Windows. I'm planning on around 8x 1080p streams being fed into the box 24/7 in h.264 encoding at around 1.5-3MP and 15-25fps each. Blue Iris handles all of the monitoring and motion detection and can record on motion or 24/7, which would require a dedicated array, preferably of WD Purple drives built for NVRs/DVRs, to handle the constant write/rewrite. I don't want to blow out my main NAS drives prematurely just because the NVR is running.

The general idea is to leverage the hardware I have and consolidate a bunch of old random hardware doing these various functions into a single "Home Server" that does all of these tasks. The added benefit of virtualization is that I can migrate to new hardware in the future or fix issues with a specific VM by restoring a known working state. If I need more power for the NVR or NAS etc., I would simply build a similar second Supermicro box and migrate the offending VM to the new hardware to distribute the load. I am in the testing phase right now, before I start buying any additional hardware like switches, APs, routers, etc. FreeNAS has been running great for the last 3 months on the Supermicro hardware, and I have the next 2-3 months free to test Proxmox before I need to start deploying this system in my house. I'm not opposed to building dedicated non-virtualized hardware for the NVR or router if there is a significant benefit.
 
IF I could simply eject my current zpools and import them into PVE without having to backup/migrate the data.
Yes, you can.

Can the ZFS tools in Proxmox mirror FreeNAS functionality such as shares, users, and notifications?

Yes, you can have all that functionality and more, but not without setup work. I suppose it really depends on your comfort level with traditional (Linux) system administration vs. having a premade GUI. Does the desire for less complexity trump the desire for premade management tools? A possible way to have your cake and eat it too is to have the pools managed by Proxmox and pass digested storage object(s) to the FreeNAS instance as you would to any other VM.

The reason to do pfSense as the router in a VM is 1) I can remotely start/manage the VM even if something happens to the box,
If you mean just because it allows remote access, you don't actually need it at all; you could do that through Proxmox. Incidentally, if there is a problem with the "box", your pfSense appliance is affected as well.

pfSense supports failover etc
Sure, but what's the point of that with a single Proxmox box? It MIGHT be of use in a failover group with an external router via VRRP but, again, I don't see any real value in this.

I don't believe that the EdgeRouter supports OpenVPN.
It does, but the real star of the show is L2TP over IPsec. It's faster and easier to use.

The whole network won't go down if the VM or Proxmox goes down... they just won't be able to get out to the internet.
I don't really understand why this is acceptable. Your hypervisor setup, especially during design and development, is FAR more likely to go down than an external router. Don't get me wrong, I am a fan of pfSense (although I would have you look at OPNsense as well); it's just not a good fit for your use case.

I don't want to blow out my main NAS drives prematurely just because the NVR is running.
This is not a problem if you manage some/all of your pools through Proxmox. Just assign virtual disk(s) to the NVR instance.
 

Thanks for the helpful info!

Given the hardware I have and the things I want it to do, it appears that the best approach is to let Proxmox handle all the ZFS duties and give it access to the majority of server resources not allocated to individual VMs, rather than creating a FreeNAS VM.

I will need to play with Proxmox further to see if I can set up a few things like email notifications for SMART failures, scheduled SMART tests and scrubs, ZFS replication and snapshotting to back up the primary pool and mirror it to the backup pool, and a few other things that FreeNAS does via the GUI.
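From what I've read so far, the snapshot/mirror part should boil down to something like this run from cron (pool names are mine; the snapshot names are made up):

zfs snapshot -r Master@weekly-2016-03-06
zfs send -R -i Master@weekly-2016-02-28 Master@weekly-2016-03-06 | zfs recv -F Backup/Master

with smartmontools/smartd handling the SMART tests and email alerts.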

The 10GbE card does sound like a neat idea. Maybe I can use the 10GbE uplink ports on the switch to handle all of the VMs? Not sure this is possible, but full 10GbE switches are still stupid expensive at the moment, and this would be the only device that needs to support 10Gb right now.

As far as the drive configuration goes, it seems I could run all the VMs from my existing 6x 4TB RAIDz2 pool right now if I wanted to. I was considering using SSDs to hold the VMs, but my main pool tested at over 800MB/s with dd if=/dev/zero of=/mnt/master/testfile bs=1024k count=10000, so I'm not sure a RAID 1 of 500MB/s SSDs would help in terms of speed or reliability?

When I add in the additional HD IP cams, it sounds like I could easily add a dedicated ZFS pool in Proxmox and then attach it as a second drive to the Windows box, which would then have direct access to the storage as a single drive.

I'm still not sure what to run Proxmox on, disk-wise. Should I be running it off of a USB flash drive? A small SSD? A RAID of 2 drives? Is there a best practice for backing up the config or the entire OS somehow?

I'm hoping to figure out the last pieces of this puzzle soon. In the meantime, my plan is to boot Proxmox from a USB flash drive and import the Backup ZFS volume; if I have to, I can just swap USB drives to return to FreeNAS.

I'm still torn on the pfSense VM vs. hardware choice. I will most likely leave it as a VM, or use my i3/16GB Proxmox test box to run pfSense for now, until I can grab some lower-power server-grade hardware.
 
The 10GbE card does sound like a neat idea. Maybe I can use the 10GbE uplink ports on the switch to handle all of the VMs? Not sure this is possible, but full 10GbE switches are still stupid expensive at the moment, and this would be the only device that needs to support 10Gb right now.

Using your 10Gbit connection with the 10Gbit uplink port on your otherwise 1Gbit switch makes sense, but will likely not get you much. (Why not, you ask? Because unless you have 30+ machines on your network all hammering I/O at the same time, you're not likely to use more than 2 links' worth of I/O at any time; 2x Gbit uplinks are usually plenty.) If you already have a switch with a 10Gbit uplink, great; if not, I wouldn't waste any resources on this.

but my main pool tested at over 800MB/s with dd if=/dev/zero of=/mnt/master/testfile bs=1024k count=10000
Writing sequential zeros just tells you how fast your cache is. In any case, once the OS is loaded it doesn't really read or write much, so your boot device speed only affects how fast you boot. Considering you reboot only on failure or kernel update (blue moon stuff), this isn't really all that important. As for reliability, any mirror will do.
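If you want a number that means more, defeat the cache and compression; a rough example (your path, arbitrary sizes):

dd if=/dev/urandom of=/mnt/master/testfile bs=1M count=4096 conv=fdatasync

though /dev/urandom can itself be the bottleneck, so a proper benchmark tool like fio is better still.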

I'm still not sure what to run Proxmox on, disk-wise.
If your system supports it, 2x 16GB SD cards work well for this, as normally you'd want your disk slots for datastore use. If you have internal disk slots, you can use those, as long as you can deal with the downtime to open your chassis and replace a failed drive. I assume this isn't a problem, this being a home system.

I'm hoping to figure out the last pieces of this puzzle soon. In the meantime, my plan is to boot Proxmox from a USB flash drive

That's fine, just be aware that USB thumbdrives tend to be complete shit and fail. Have a duplicate on standby; rebuilding a Proxmox OS is a PITA without a cluster.

I'm still torn on the pfSense VM vs. hardware choice.
What are the factors making you want one over the other? Just as a suggestion: before you say "performance", take some time to assess what your performance needs actually are, and how much time/money/effort each option represents.
 
Using your 10Gbit connection with the 10Gbit uplink port on your otherwise 1Gbit switch makes sense, but will likely not get you much. (Why not, you ask? Because unless you have 30+ machines on your network all hammering I/O at the same time, you're not likely to use more than 2 links' worth of I/O at any time; 2x Gbit uplinks are usually plenty.) If you already have a switch with a 10Gbit uplink, great; if not, I wouldn't waste any resources on this.

I am planning on purchasing a 48-port GbE switch with 2 or 4 SFP+ ports on it. I wasn't sure if those are ONLY used to daisy-chain other switches or could also be used as a normal port for any device. I'm thinking the 8x 2-3MP streams from IP cameras that have to go to the NVR VM will take up some constant bandwidth on the box for sure. Many of my photos/videos are edited remotely via SMB shares, so getting as much speed to each client is important.

It's also my understanding that Proxmox can only do LACP for failover, not speed, so I would have to assign multiple physical NICs to a single VM, and that VM would have to support LACP for multiple clients. Having one 10GbE pipe connected to a switch with a 70Gbps backplane should be able to handle all of the VM traffic.

If your system supports it, 2x 16GB SD cards work well for this, as normally you'd want your disk slots for datastore use. If you have internal disk slots, you can use those, as long as you can deal with the downtime to open your chassis and replace a failed drive. I assume this isn't a problem, this being a home system.
Yes, I can deal with downtime. I have plenty of internal SATA ports on the motherboard, as I use a PCIe SAS expander connected to a backplane in the server to attach up to 24 hot-swap bays. How do you attach the SD cards? USB adapters? I have some high-end 8GB SD cards that are currently useless because they are only 8GB...

That's fine, just be aware that USB thumbdrives tend to be complete shit and fail. Have a duplicate on standby; rebuilding a Proxmox OS is a PITA without a cluster.
What do you mean by a duplicate on standby? Using RAID, or somehow cloning the drive periodically?


What are the factors making you want one over the other? Just as a suggestion: before you say "performance", take some time to assess what your performance needs actually are, and how much time/money/effort each option represents.
Performance is a big factor. I've been using the ALIX boards, which are not upgradable, for years, and as soon as I moved to a VM everything was snappier. pfSense sells dedicated hardware that is super over-priced; you can buy the same hardware they are selling for half the price and load pfSense yourself. As I said earlier, I need to run two separate OpenVPN configurations: one where pfSense is the client and connects to a remote host, and one where it is the server and I connect to it. It's also running Snort, and who knows what else in the future. That said, a single-core processor and 8GB of RAM is plenty. I also like the idea that I could add additional NICs in the future with real hardware.

My top choice is the 8-core Atom box Supermicro sells and pfSense uses for their high-end router. I'm guessing it will be around $600-700 to build one. My i3/16GB test box that was my previous FreeNAS box will work fine for the time being with 3x Intel NICs. Switching to Supermicro hardware lets me monitor temps and remotely reboot the router, which is great.

Fortunately I live in an area where they are currently running Google and AT&T fiber everywhere, so I'm hoping to have hardware that can handle that future transition.
 
It's also my understanding that Proxmox can only do LACP for failover, not speed, so I would have to assign multiple physical NICs to a single VM, and that VM would have to support LACP for multiple clients. Having one 10GbE pipe connected to a switch with a 70Gbps backplane should be able to handle all of the VM traffic.
Not true. All Linux bonding modes are supported, and there are use cases where non-LACP modes are preferable (balance-alb is often more efficient, and LACP cannot be used across multiple switches). Bear in mind that NO bonding mode will make the OTHER side faster; if you're connecting a client with a single gigabit link, it can still only achieve a maximum of 1Gbit throughput. As for your VM traffic, it makes no difference what the switch backplane is capable of; all of the VMs are sharing your one 10Gbit link (or bond, whatever you provision from the hypervisor to the outside world). The operative bit of information is what your client machines are doing, not the server. E.g., for your cameras, have a look at their stream bitrate; my guess is that all of them together account for ~30Mbit TOTAL.

How do you attach the SD cards? USB adapters? I have some high-end 8GB SD cards that are currently useless because they are only 8GB...
Much server-class hardware designed for NAS or hypervisor use has SD card slots built in; I don't know about your particular box. 8GB is enough for Proxmox, incidentally. If you use a single USB device, don't RAID it; periodically clone it.
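Cloning is a one-liner; something like this with the drive quiesced (device names are examples, triple-check them, dd has no undo):

dd if=/dev/sdx of=/dev/sdy bs=4M

and you have a bootable spare on the shelf.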

Performance is a big factor...(snip)
So you've defined one metric by which you're measuring: OpenVPN performance. What was the VPN throughput of the ALIX boards? What are you getting with pfSense on general-purpose hardware (and what hardware)? I'm not too familiar with ALIX products, but from what I understand they should handily trounce any PC in SSL operations. Is it possible they were misconfigured? This dovetails into my next point: generally speaking, purpose-built ASICs will run circles around general-purpose CPUs when used for their intended purpose. If you're expecting SSL performance in the >80Mbit range, you probably want a real router. Have a look at MikroTik, Ubiquiti (not the ERX), etc.

I also like the idea that I could add additional NICs in the future with real hardware.
What would be the use case? If you just want more networks, you can always create VLANs; this is true regardless of what kind of router you have. Remember, you can't create "performance" when you're limited by the slowest part of your bridge; in this case, your internet connection.
 
Thanks for all the replies. I think I have most of the items sorted out enough that I can test the hardware shortly.
I was able to create a single-drive ZFS pool on the FreeNAS box, detach the pool, plug the drive into the Proxmox box, use zpool import to import the pool, and then "Add storage" to get the drive to show up in the Proxmox GUI.

From there I installed Samba directly on Proxmox, created a Samba user, and pointed a share at the mounted ZFS pool I had just attached in Proxmox. Now I can open smb://proxmox_address and access the data that I wrote to the pool back when it was attached to FreeNAS!
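For anyone following along, the share definition in /etc/samba/smb.conf came down to something like this (pool, path, and user names here are placeholders from my test setup):

[stuff]
   path = /testpool/stuff
   valid users = testuser
   read only = no

plus smbpasswd -a testuser to create the Samba user and a restart of the smbd service.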

Based on this, I should be able to export the existing pools from FreeNAS, install Proxmox in its place, then import the pools and relink the SMB shares to the specific datasets on the pools. FreeNAS makes this very easy using the GUI, but it doesn't look like I have any other option here.
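If I understand it right, the move itself is only a few commands (pool name per my setup; the storage ID is arbitrary):

zpool export Master    (on FreeNAS, before the reinstall)
zpool import Master    (on the fresh Proxmox install)
pvesm add zfspool master-vms -pool Master

with the Samba share definitions recreated by hand afterwards.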

I was also able to mount a specific /pool/dataset as a drive in Proxmox, create additional virtual drives in that space, and then attach those drives to existing VMs.

I'm still not sure how I can migrate my existing VMs to the zpool storage, or how to back up the Proxmox install to the zpool?
 
I'm still not sure how I can migrate my existing VMs to the zpool storage, or how to back up the Proxmox install to the zpool?
Provided the storage for the existing VMs and the new zpool storage are both available from within Proxmox, you can use the 'Move disk' option on the 'Hardware' tab. Highlight the desired disk and press 'Move disk'.
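If you prefer the shell, the same move can be done with something like (VMID, disk, and target storage ID are examples):

qm move_disk 101 virtio0 master-vms

As for backing up the Proxmox install itself: /etc/pve holds the cluster/VM configs, so a periodic tarball of it onto the pool covers the important part.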
 
Provided the storage for the existing VMs and the new zpool storage are both available from within Proxmox, you can use the 'Move disk' option on the 'Hardware' tab. Highlight the desired disk and press 'Move disk'.
Thanks! This worked perfectly!

I think this is how my hardware will be configured...

Proxmox Server
  • 2x 8-core Intel Xeon E5-2670 (32 threads total)
  • 64GB RAM
  • 6x 4TB RAIDz2 (Master)
  • 6x 2TB RAIDz2 (Backup)
  • 6x Intel GbE NICs
  • 16GB USB drive for OS

Proxmox Storage

16GB USB
  • Proxmox OS and configs

Master ( 6x4TB RAIDz2)
  • Media * SMB share
    • TV
    • Movies
    • Music
    • Photos
    • Apps
  • PhotoVideo * SMB share
    • Photos
    • Videos
  • Backups
    • PC1
    • PC2
  • NVR
    • Video (VM HDD)
  • Proxmox
    • VMs
      • Router
      • NVR (+NVR/Video)
      • Torrent (*Media)
      • Plex (*Media)
      • Crashplan (+Backups)
      • Nagios
    • ISOs
    • VMbackups

Backup (6x2TB RAIDz2)
  • Identical ZFS copy of Master

- Using this configuration all data including the VMs will be stored on the Master ZFS pool which will be mirrored to the Backup pool in real time.
- Snapshots of the Master pool will also be taken at regular intervals
- The router will stay as a VM for now until I can decide on dedicated hardware


A few more questions about this setup:

- Proxmox will run the Samba server to serve the Media and PhotoVideo shares on the LAN (I assume this is the right way to do it?)

- I'm not sure how things like Plex can access /Master/Media/Movies/somemovie.avi at "faster than LAN speeds"; e.g. I don't want Plex using up LAN bandwidth reading from the SMB Media share and sending video to a client. Plex would be running in a VM, and the SMB share runs directly on Proxmox with the files in the /Media/ dataset. Currently Plex runs in a jail inside FreeNAS, so it has direct access to the files.

This article talks about switching over many of the automated features that FreeNAS offers to Proxmox which seems to cover my other concerns. https://www.reddit.com/r/freenas/comments/3d9o8c/freenas_under_proxmox/
 
- Proxmox will run the Samba server to serve the Media and PhotoVideo shares on the LAN (I assume this is the right way to do it?)
If you were using Proxmox as a dedicated hypervisor, it wouldn't be. For your use case it works just fine.

- I'm not sure how things like Plex can access /Master/Media/Movies/somemovie.avi at "faster than LAN speeds"; e.g. I don't want Plex using up LAN bandwidth reading from the SMB Media share and sending video to a client. Plex would be running in a VM, and the SMB share runs directly on Proxmox with the files in the /Media/ dataset. Currently Plex runs in a jail inside FreeNAS, so it has direct access to the files.
You can do the same thing with Proxmox: run Plex in a container (Linux containers = BSD jails).
 
I'm still not sure how I can migrate my existing VMs to the zpool storage, or how to back up the Proxmox install to the zpool?

Here is a list of YouTube videos on ZFS that may give you some assistance. I appreciate you asking the initial questions, because I have almost the same use case for a home server setup.

https://www.youtube.com/playlist?list=PLJyjAoTcTjm3fm7h7wVb6T4hc5gbOufhd
 
If you were using Proxmox as a dedicated hypervisor, it wouldn't be. For your use case it works just fine.

You can do the same thing with Proxmox: run Plex in a container (Linux containers = BSD jails).
Thanks!
I hadn't looked into LXC much, but now I have.
I was able to set up an LXC container each for Plex, Crashplan, and Samba, and I was able to mount any existing host-mounted directory inside a container by doing the following.

Assuming 105 is the container you created, proxmoxtest is the zpool, and stuff is a dataset in that pool that is currently mounted on the Proxmox host:

Mount the Host directory inside the LXC - https://pve.proxmox.com/wiki/LXC_Bind_Mounts
cd /var/lib/lxc/105/rootfs/
mkdir stuff

nano /etc/pve/lxc/105.conf
and add lines like these (the mp= part is the mount point inside the container):

mp0: /proxmoxtest/stuff,mp=/stuff
mp1: /Some_Other_Mount,mp=/Some_Other_Mount

Reboot the container after the changes are made.
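You can also sanity-check the result with the pct tool, e.g. pct config 105, which should list the mp0/mp1 lines.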

I will most likely rebuild my other servers as LXC containers, with the exception of the Windows 10 VM I need for the NVR software.

After some additional testing, I will be running Proxmox on the Supermicro box with mostly LXC containers, with all data and VMs stored on the RAIDz2 pools along with backups of the Proxmox configs. From there I should be able to adjust cores/memory as needed, add additional servers/services, and truly have an all-in-one home server that's not locked into specific hardware.
 
