So my goal here is to have a RAID array of some kind set up as a Samba share that VMs and CTs can access, as well as Windows users on my home network (Plex media and general storage). I'm not sure what the best way to do this is. Is a NAS within Proxmox not really a good idea? Currently I have Proxmox installed on a single drive, and I have four larger drives I want to use for the NAS.
I have tried to pass through physical drives to a container with the intent of using mdadm, but I haven't had any success getting the container to recognize the drives. I followed this thread:
https://forum.proxmox.com/threads/lxc-cannot-assign-a-block-device-to-container.23256/
Running the first command, "# lxc-device add -n 102 /dev/sdb", in the node shell does pass the drive through, but as was mentioned in the thread, it isn't persistent. I tried following what the original poster did, along with the updates from this thread (some commands have changed over the years):
https://forum.proxmox.com/threads/container-with-physical-disk.42280/
Still no luck. With all this set up, my container will not start.
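In case it helps diagnose why the container won't start, the persistent setup from those threads boils down to a couple of lines in the container config (/etc/pve/lxc/102.conf), something like this (8:16 is the major:minor pair that ls -l /dev/sdb reports on my system, and on older Proxmox the key is lxc.cgroup.devices.allow rather than lxc.cgroup2.devices.allow):

    # allow the container to open the block device (b = block, 8:16 = /dev/sdb)
    lxc.cgroup2.devices.allow: b 8:16 rwm
    # bind-mount the device node so /dev/sdb exists inside the container
    lxc.mount.entry: /dev/sdb dev/sdb none bind,create=file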
So I went back to playing with ZFS, something I have zero experience with. My initial problem was using different-size drives in the array (3 @ 3TB and 1 @ 4TB). I did eventually create 3TB partitions on all four drives and built a ZFS pool from those partitions (for now I'm not worried about the 1TB lost on the larger drive).

My other problem with ZFS is that I can't grow the array by adding drives down the road; I can only expand the pool by adding whole new vdevs. I could probably deal with that, but the showstopper is when I mount the pool in multiple containers: a change made to the drive in one container isn't seen in the other, and if I unmount the pool and remount it, the drive is empty. I clearly don't understand how this file system works.

I'm also hearing that if I have to change hardware, or move the drives to a different system for whatever reason, I will lose all my data. If that's true, it's a HUGE downside to ZFS that I don't want to risk. My previous mdadm setup went through three different motherboards with zero data loss.
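In case the exact commands matter, the pool itself was created roughly like this (pool name and device paths are just what I used on my box), and the pct mount points at the end are how I've been trying to attach it to containers, which may be where I'm going wrong:

    # raidz1 pool built from the 3TB partitions on all four drives
    zpool create tank raidz1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # dataset to hold the shared data
    zfs create tank/media
    # bind-mount the same host directory into two containers at /mnt/media
    pct set 102 -mp0 /tank/media,mp=/mnt/media
    pct set 103 -mp0 /tank/media,mp=/mnt/media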
I don't know where to go from here, aside from going back to Ubuntu Server. I'm not set on any particular method. I just need it to be expandable in some way, shared between CTs and VMs (preferably locally, but Samba or something similar is fine), accessible across the network (again, probably Samba), and robust.
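For the network side, I'm picturing a plain Samba share on top of whatever storage I land on, along these lines (share name, path, and group are placeholders):

    [media]
       # wherever the pool/array ends up mounted
       path = /tank/media
       browseable = yes
       read only = no
       # restrict to my own accounts
       valid users = @smbusers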
I've already backed up and wiped my old mdadm RAID, so there's currently no risk of losing anything important, as long as my backups don't fall apart, lol.
Any help is greatly appreciated.