RAID/Samba Share HELP!!

JayK

Member
Jun 11, 2022
So my goal here is to have a RAID of some kind set up as a Samba share that VMs and CTs can access, as well as Windows users on my home network (Plex media and general storage). I'm not sure what the best way to do this is. Is a NAS within Proxmox not really a good idea? Currently I have Proxmox installed on a single drive, and I have four larger drives I want to use as a NAS.

I have tried to pass through physical drives to a container with the intent of using mdadm, but I haven't had any success getting the container to recognize the drives. I followed this thread:

https://forum.proxmox.com/threads/lxc-cannot-assign-a-block-device-to-container.23256/

Running the first command "# lxc-device add -n 102 /dev/sdb" in the node shell does pass the drive through, but as mentioned in the thread, it isn't persistent. I tried following what the original poster did, with the updates from this thread (some commands have changed over the years):

https://forum.proxmox.com/threads/container-with-physical-disk.42280/

Still no luck. With all this set up, my container will not start.
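
For reference, the persistent approach those threads describe ends up as a couple of lines in the container config (e.g. /etc/pve/lxc/102.conf). The major/minor numbers below are just what lsblk showed for my drive, so treat them as placeholders:

lxc.cgroup2.devices.allow: b 8:16 rwm
lxc.mount.entry: /dev/sdb dev/sdb none bind,create=file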

So, I went back to playing with ZFS, something I have zero experience with. My initial problem was with using different-sized drives in the array (3x 3TB and 1x 4TB). I did eventually create 3TB partitions on all four drives and created a ZFS pool with the partitions (at the moment I'm not worried about the 1TB loss on the larger drive). My other problem with ZFS is that I can't expand the array down the road; I can only expand the pool with more arrays. I could probably deal with this, but the showstopper is when I mount the pool in multiple containers. When a change is made to the drive in one container, it isn't seen in the other. And if I unmount the pool and remount it, the drive is empty. I clearly don't understand how this file system works. I am also hearing that if I have to change hardware, or move the drives to a different system for whatever reason, I will lose all my data. If that is true, it's a HUGE downside to ZFS that I don't want to risk. My previous setup (mdadm) went through 3 different motherboards with zero data loss.
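
For what it's worth, the pool creation itself was roughly along these lines (the pool name and the raidz1 level are just what I went with, and the partitioning is approximate):

# carve a 3TB partition out of each drive
sgdisk -n 1:0:+3T /dev/sdb    # repeated for sdc, sdd, sde
# build the pool from the four partitions
zpool create tank raidz1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1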

I don't know where to go from here, aside from going back to Ubuntu Server. I'm not set on any particular method. I just need it to be expandable in some way, shared between CTs and VMs (preferably local, but samba or something similar is fine), accessible across the network (again, probably samba), and robust.

I've already backed up and wiped my old mdadm RAID, so there's currently no risk of losing anything important, as long as my backups don't fall apart, lol.

Any help is greatly appreciated.
 
So my goal here is to have a RAID of some kind set up as a Samba share that VMs and CTs can access, as well as Windows users on my home network (Plex media and general storage). I'm not sure what the best way to do this is. Is a NAS within Proxmox not really a good idea? Currently I have Proxmox installed on a single drive, and I have four larger drives I want to use as a NAS.
If you are fine with doing everything via the CLI, you could just install the Samba server on your PVE host. PVE is a full OS based on Debian using a modified Ubuntu LTS kernel, so everything you can do on an Ubuntu/Debian server you can also do directly on your PVE host.
If you want a GUI without ZFS, an OpenMediaVault LXC (with bind-mounts) or VM (with disk passthrough) would be an option.
Or if you want ZFS, a TrueNAS VM with disk passthrough.
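If you go the Samba-on-the-host route, it's just the normal Debian way of doing it. A minimal sketch (share path, share name and user are only placeholders):

apt update && apt install samba
# add a share to /etc/samba/smb.conf, for example:
# [media]
#    path = /tank/media
#    browseable = yes
#    read only = no
#    valid users = youruser
smbpasswd -a youruser      # give an existing Linux user a Samba password
systemctl restart smbd

Windows clients can then reach it as \\your-pve-ip\media.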
My other problem with ZFS is that I can't expand the array down the road. I can only expand the pool with more arrays.
You can expand it. With a raidz1/2/3 (raid5/6) you can add single drives, but this will only add capacity and won't increase performance or lower the parity ratio. Or you could add a whole new raidz1/2/3 vdev by adding another 4x 3TB disks. Then you would get 4x 3TB raidz1 + 4x 3TB raidz1 striped together as a single big pool with double the performance and capacity. With a striped mirror (raid10) you can do the same by adding new disks in pairs of two.
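As a rough sketch (pool name "tank" and device names are just examples), adding a second raidz1 vdev to an existing pool looks like this:

# stripe another 4-disk raidz1 vdev onto the pool
zpool add tank raidz1 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
# the pool now lists two raidz1 vdevs
zpool status tank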
I could probably deal with this, but the show stopper is when I mount the pool in multiple containers. When a change is made to the drive in one container, it isn't seen in the other. And if I unmount the pool and remount it, the drive is empty. I clearly don't understand how this file system works.
You can't mount the same HDD in multiple systems. HDDs are block devices, and block devices should only be accessed by one system at a time; otherwise you corrupt your data when writing to it. See it like this: HDDs aren't meant to be installed in two computers at the same time. What you could do is create a ZFS pool (or even an mdadm array... that isn't officially supported, but it works fine) on your host, mount it on your host (so the host is the only system mounting it), and then use bind-mounts to bring folders from the host into the LXCs: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
The LXCs are then just working at the filesystem level, and there it's fine when multiple LXCs access the same folder in parallel.
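A bind-mount is just one line per container. For example (container ID and paths are placeholders):

pct set 102 -mp0 /tank/media,mp=/mnt/media
# which ends up as this line in /etc/pve/lxc/102.conf:
# mp0: /tank/media,mp=/mnt/media

For unprivileged LXCs you may also need to map the UIDs/GIDs or loosen permissions on the host folder, as described in the wiki link above.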
I am also hearing that if I have to change hardware, or move the drives to a different system for whatever reason, I will lose all my data. If that is true, it's a HUGE downside to ZFS that I don't want to risk.
That isn't true if you do it right (for example, not using ZFS on top of HW RAID and not putting any virtualization/abstraction layer between ZFS and the disks). But keep in mind that a RAID never replaces a backup. You should have a backup of all data anyway, so it wouldn't really matter (except for additional work and downtime) if you lose a complete pool.
 
Running NAS services on Proxmox is quite possible and it's what I do on my home setup. Some people have a preference for keeping the host system 'pure' but I've never seen any official guidance or recommendation against running samba on the host.

In my setup, I have various 'shares' available to my network users, including media files which they can also access via Plex, which I have running as a container on the host. I don't find I have much need for transcoding, so I don't need to run Plex as a VM with GPU passthrough, but that's an option if you need it.

I find it works quite well for me, and as the storage is on the Proxmox host it's very flexible in terms of deploying resources. Containers can access file systems on the host system directly if you need that, but data sharing between VMs is probably best done via network shares.

A ZFS pool can be exported and imported between systems, so don't have any concern about losing data; it's quite robust and reliable. Do set up email on the system so you can be alerted if any of your disks develop faults. Pool expansion is a bit of a pain. As you say, you either need to add vdevs or replace each of your drives one by one with larger models; once all the drives have been replaced, the pool will resize.
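The drive-by-drive route is roughly (pool and device names are examples):

zpool set autoexpand=on tank
# swap one disk for the larger replacement and wait for the resilver to finish
zpool replace tank /dev/sdb /dev/sdf
zpool status tank
# repeat for the remaining drives; after the last resilver the extra space appears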
 
I also thought about getting something like this so I could merge my three home servers (8 SSD/2 HDD + 2 SSD/4 HDD + 10 SSD/4 HDD) into one single server: https://www.alternate.de/Inter-Tech/4F28-MINING-RACK-Server-Gehäuse/html/product/1778205
That gets you 28x 3.5" slots + 6x 2.5" slots for 135€, and it should be quiet after replacing the 6x 120mm stock fans with quieter ones. The missing hot-swap wouldn't be a big problem here, as downtime isn't an issue. I'm more concerned about HDD life expectancy, as nothing is done against vibrations and none of my HDDs are rated for multi-disk usage. And then there is still the problem of finding an ATX PSU that can handle that many drives. Usually they don't have enough power on the 5V rail, and 28x IDE or 62x SATA power plugs (when using 28x '3.5" to dual 2.5" cages' + 6x 2.5") isn't common either...
The best PSU I have found so far is the "FSP1200-50AAG 1200W" with its 30A on the 5V rail (so up to 4.4W @ 5V per disk with 34 disks, or 2.4W @ 5V per disk with 62 disks) and 14x SATA + 16x IDE power plugs. But that again is 220€, plus extra for the SATA cables and Y power cables.
And according to my SSDs' datasheet, each SSD might use 4.8W on the 5V rail. So even that PSU wouldn't allow me to use all of the case's slots. And it looks like no PSU manufactured after the early 2000s by a well-known manufacturer has more than 35A on the 5V rail.
Same problem when looking for replacement PSUs for my retro computers from around the year 2000. Back then the CPU ran off 5V, so the PSUs had a lot of power on the 5V rail, but after CPUs switched from 5V to 12V power in the early 2000s, the PSUs reduced the 5V power. And it doesn't matter whether you buy a 1600W or a 300W PSU; the only thing that changes is the power on the 12V rail, with no difference on the 5V rail.
 
If you are fine with doing everything via the CLI, you could just install the Samba server on your PVE host. PVE is a full OS based on Debian using a modified Ubuntu LTS kernel, so everything you can do on an Ubuntu/Debian server you can also do directly on your PVE host.
Some people have a preference for keeping the host system 'pure' but I've never seen any official guidance or recommendation against running samba on the host.
I don't have a problem with running Samba on the host. I guess I just made the assumption that you wouldn't want to run anything on the host; it seems to defeat the purpose of virtualization. My previous setup was a headless system running Ubuntu Server, so I have no problem using the CLI.
You can't mount the same HDD in multiple systems. HDDs are block devices, and block devices should only be accessed by one system at a time; otherwise you corrupt your data when writing to it. See it like this: HDDs aren't meant to be installed in two computers at the same time. What you could do is create a ZFS pool (or even an mdadm array... that isn't officially supported, but it works fine) on your host, mount it on your host (so the host is the only system mounting it), and then use bind-mounts to bring folders from the host into the LXCs: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
The LXCs are then just working at the filesystem level, and there it's fine when multiple LXCs access the same folder in parallel.
I had not looked at it like that, but now that you say it, it seems obvious.
This definitely pointed me in the right direction. I thought I was mounting the file system, but after doing some more searching, I was obviously mistaken. My problem was not knowing how to properly set up ZFS. Now that it is set up and mounted correctly it works exactly how I wanted. I'm not fully up and running yet (limited time today to mess with it), but this was my big hurdle.

One more thing though. If I do end up needing to move my ZFS pool to another system, is there some type of superblock, or something similar, on the HDDs? Will ZFS somehow recognize that there is a pool set up on the drives and allow me to mount it on the new system? I understand how to reconstruct an mdadm RAID from existing drives. How do I do this with ZFS? I do my best to keep proper backups, but not having to use them is always ideal, and faster.

Thanks so much for the help.
 
One more thing though. If I do end up needing to move my ZFS pool to another system, is there some type of superblock, or something similar, on the HDDs? Will ZFS somehow recognize that there is a pool set up on the drives and allow me to mount it on the new system? I understand how to reconstruct an mdadm RAID from existing drives. How do I do this with ZFS? I do my best to keep proper backups, but not having to use them is always ideal, and faster.

Thanks so much for the help.
All the pool metadata is stored on the disks. You just plug the disks into a new machine that has ZFS of at least the same version. A zpool import should then find and list your pool, and you can import it by running zpool import YourPoolName.
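On the command line that's simply (pool name as an example):

zpool export tank     # on the old machine, if it is still running
zpool import          # on the new machine: scans the disks and lists any pools it finds
zpool import tank     # import it; add -f if the pool wasn't cleanly exported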
 
All the pool metadata is stored on the disks. You just plug the disks into a new machine that has ZFS of at least the same version. A zpool import should then find and list your pool, and you can import it by running zpool import YourPoolName.
This makes me much more comfortable with ZFS. So far everything else is going well. I am currently restoring my old RAID backup onto the new ZFS/SMB share.

Thanks again for all your help.
 
If you are fine with doing everything via the CLI, you could just install the Samba server on your PVE host. PVE is a full OS based on Debian using a modified Ubuntu LTS kernel, so everything you can do on an Ubuntu/Debian server you can also do directly on your PVE host.
If you want a GUI without ZFS, an OpenMediaVault LXC (with bind-mounts) or VM (with disk passthrough) would be an option.
Or if you want ZFS, a TrueNAS VM with disk passthrough.

You can expand it. With a raidz1/2/3 (raid5/6) you can add single drives, but this will only add capacity and won't increase performance or lower the parity ratio. Or you could add a whole new raidz1/2/3 vdev by adding another 4x 3TB disks. Then you would get 4x 3TB raidz1 + 4x 3TB raidz1 striped together as a single big pool with double the performance and capacity. With a striped mirror (raid10) you can do the same by adding new disks in pairs of two.

You can't mount the same HDD in multiple systems. HDDs are block devices, and block devices should only be accessed by one system at a time; otherwise you corrupt your data when writing to it. See it like this: HDDs aren't meant to be installed in two computers at the same time. What you could do is create a ZFS pool (or even an mdadm array... that isn't officially supported, but it works fine) on your host, mount it on your host (so the host is the only system mounting it), and then use bind-mounts to bring folders from the host into the LXCs: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
The LXCs are then just working at the filesystem level, and there it's fine when multiple LXCs access the same folder in parallel.

That isn't true if you do it right (for example, not using ZFS on top of HW RAID and not putting any virtualization/abstraction layer between ZFS and the disks). But keep in mind that a RAID never replaces a backup. You should have a backup of all data anyway, so it wouldn't really matter (except for additional work and downtime) if you lose a complete pool.
I have never thought about OMV in an LXC before; that's a good idea, and much cleaner than installing things on the host or running a weighty VM for a basic task. I might give that a try myself! Thanks!
 
I used to use a dedicated machine with unRaid. I have now moved this into a VM with passthrough for an Nvidia GPU (transcoding) and an 8-port LSI SATA adapter with 40TB of disks. unRaid manages the media shares (Samba/NFS). I have also moved OPNsense, Home Assistant, MQTT, and a few other items to LXCs and VMs.
I will move other Docker containers out from unRaid to Proxmox where it makes sense.
All working really well.
 