ZFS vol in a new LXC

mkyb14

Well-Known Member
Trying to understand what to do here; this has been on my plate since COVID started and I've been putting it off for a while.

I have the latest Proxmox installed with a ZFS pool, created some pools, and installed the Turnkey file server... somehow it got jacked up and I don't know why (I literally never touched it, as it's meant to be a big dump-bucket backup). It happened after a random server reboot for something (again, time is my enemy here).

Fast forward, and my thinking was to just do another installation and create a new mount point at the same target, but nothing shows up... what am I overlooking here? I was probably under the wrong impression that a ZFS pool with datasets within it was agnostic to whatever was pointing at it... but I'm here to learn, so what am I overlooking?
 

Attachments

  • Screen Shot 2021-01-10 at 3.01.51 PM.png (128.9 KB)
I don't really get what your problem is.
Did you already reinstall Proxmox?
Are you talking about bind-mounting datasets into LXCs, or about the dataset that the LXC uses as primary storage?
Why don't you just import a backup of that LXC to restore it, including your dataset?
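If you do have a vzdump backup lying around, restoring it is a single command; a minimal sketch, assuming container ID 100 and a made-up archive name (both hypothetical):

    # restore the container, including its mount point volumes, from a vzdump archive
    # --storage picks where the restored volumes land
    pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-2021_01_10-12_00_00.tar.zst --storage local-zfs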
 
So the deeper explanation is that it's the same Proxmox install. What happened is I had to reboot for some reason, and afterwards the Turnkey file server LXC wasn't able to boot (no idea why). Knowing I had created a ZFS file store (mount point) for that LXC, I figured I'd just spin up a new Turnkey LXC with the same settings and re-mount that same mount point.

This is where I'm not sure what to do: when adding a mount point in the new Turnkey file server, I can't select the previous one... "how to mount a mount point from another LXC in a new one" is the best way I can think to describe it.
 
Did you check if your "zfs file store" is working on the host? When my LXCs don't want to start anymore, it's usually because their bind-mounts can't be mounted (Samba shares not mounted yet, and so on).
You can edit "/etc/pve/lxc/IdOfYourLXC.conf" (for example /etc/pve/lxc/100.conf) and look at or change the line with the bind-mount:
"mp0: /mnt/bindmounts/shared,mp=/shared"
 
So maybe I have a complete misunderstanding of ZFS and how I set up my system. I was probably moving too fast and not comprehending the setup. My initial impression was that with Proxmox I could set up a ZFS pool and, from there, sub-volumes (datasets).

The SSD has ISOs and VMs on it.
/Bigdata (10x10TB RAIDZ2+2)
/Apollo (5x4TB RAIDZ2+1)

Bigdata is the main file store (movies, music, etc.): /Bigdata/Movies, /Bigdata/TV_Shows
Apollo was for VM snapshots and backups.

The plan: create a Turnkey file server on the SSD, point it at a mount point on Bigdata, and do snapshots and backups to the secondary ZFS pool, Apollo.
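For reference, this is roughly how I'd expect that plan to look on the host, assuming Movies and TV_Shows are their own datasets under Bigdata (the snapshot name is made up):

    # one dataset per share, so each can be snapshotted and replicated on its own
    zfs create Bigdata/Movies
    zfs create Bigdata/TV_Shows

    # snapshot everything, then replicate one share to the backup pool
    zfs snapshot -r Bigdata@nightly-2021-01-10
    zfs send -R Bigdata/Movies@nightly-2021-01-10 | zfs recv -d Apollo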
Here's where I think my understanding went wrong. In creating the Bigdata share, I was thinking of it like a NAS: just a mountable file store that any LXC/VM could be pointed at. If that's not the case, do I need to rethink my setup?

Really just looking to have it act like a SAN/NAS.
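From what I've pieced together so far, the NAS-like picture should at least work for containers, since several LXCs can bind-mount the same host path; a sketch with hypothetical container IDs 101 and 102:

    # point two different containers at the same dataset
    pct set 101 -mp0 /Bigdata/Movies,mp=/mnt/movies
    pct set 102 -mp0 /Bigdata/Movies,mp=/mnt/movies

VMs seem to be the exception: they can't bind-mount host paths, so for VMs the datasets would have to be exported over NFS/SMB (e.g. from a file-server container). And with unprivileged containers the uid/gid mapping would need sorting out so the files stay writable.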
 
