Migration of RAID1 ZFS to Proxmox

nasach

Aug 14, 2023
Hello everyone,

I've been using TrueNAS Core/Scale (after upgrading) as my NAS solution for the last two years. I also recently built a PVE host on a separate system and migrated all my services/apps to it over the last few days. I'm now at the point where I want to migrate my ZFS pool from my TrueNAS Scale server to a second node in my Proxmox cluster. I'm having a hard time finding any guidelines or tutorials on how to do this, but I've managed so far to mount the pool on my second node and I am able to browse its contents via the shell. Could anyone help me figure out how to make my pool available to other containers (I immediately want to set up containers to host PhotoPrism, Plex and Nextcloud), and perhaps get back the ability to browse my data via Samba from my other devices or my personal computer, like I could with TrueNAS before?

PS: I am not interested in running TrueNAS in a VM or container, even though I've seen a lot of that suggestion floating around.

Any help is greatly appreciated, and pardon my English; it is not my primary language.
 
I would like to keep the data in the zpool but make it accessible/available to my containers (e.g. a container with Samba). I've already managed to add it via Datacenter > Storage > Add, and when I run the commands in the shell I am able to see it:

zpool status
[Screenshot of the zpool status output]

zpool list
[Screenshot of the zpool list output]

zfs list -r horde yields my filesystems just like I had them in TrueNAS.
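
Since the screenshots are not preserved, here is a sketch of those checks with comments on what each one reports (the pool name horde is from above; everything else is generic):

Code:
zpool status horde   # pool topology and health; a RAID1 pool shows up as a two-disk mirror vdev
zpool list horde     # total size, allocated space and overall state of the pool
zfs list -r horde    # recursively lists all datasets in the pool, including their mountpoints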
 
You need to check where the horde pool is mounted, but after that you just need to bind-mount the data into your containers and fix your permissions (they most probably don't match).
 
Thank you @LnxBil - I'm not sure how to find out where it would be mounted. As far as I can tell, I've added the RAID pool to one of my nodes via the GUI on Datacenter > Storage > Add. If I inspect this in the GUI, there is no path/target value:

[Screenshot of the storage configuration in the GUI]
This ZFS storage is listed under my second node pve2:

[Screenshot of the storage listed under node pve2]

It shows as active and reports the correct usage data:

[Screenshot of the storage usage data]


As for the bind-mount, I've not used/done this before, but from the documentation I understand I would have to use this command in the shell?

pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared

I'll take my fileserver container as an example so would it be something like this...?

pct set 103 -mp0 mp=/mnt/horde ,/mnt/horde

I'm not sure if mp=/mnt/horde is correct - how could I confirm this?
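
For reference, the documented syntax puts the host path first and the in-container path after mp=, so the call would look something like the sketch below (this assumes the pool is actually mounted at /mnt/horde on the host, which the zfs get mountpoint check further down confirms or corrects):

Code:
# bind-mount the host path /mnt/horde into container 103 at /mnt/horde
pct set 103 -mp0 /mnt/horde,mp=/mnt/horde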
 

And PVE doesn't come with NAS functionality. If you want SMB/NFS you would need to manually install and set up an NFS/SMB server via the CLI.
You might also want to google ZFS's "sharenfs" and "sharesmb" properties.
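
For example, a minimal sketch of the ZFS-native sharing route (this assumes a Samba server is already installed and configured for usershares on the host; the dataset name horde/media is purely illustrative):

Code:
zfs set sharesmb=on horde/media   # let ZFS publish the dataset as an SMB share
zfs get sharesmb horde/media      # verify the property took effect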

I'm not sure how to find out where it would be mounted.
zfs get mountpoint
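
A sketch of that check against this particular pool (the VALUE shown is only an example of what the output looks like):

Code:
zfs get mountpoint horde
# NAME   PROPERTY    VALUE       SOURCE
# horde  mountpoint  /mnt/horde  local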

As for the bind-mount, I've not used/done this before, but from the documentation I understand I would have to use this command in the shell?
See for unprivileged LXCs: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
 
And PVE doesn't come with NAS functionality. If you want SMB/NFS you would need to manually install and set up an NFS/SMB server via the CLI.
You might also want to google ZFS's "sharenfs" and "sharesmb" properties.


zfs get mountpoint


See for unprivileged LXCs: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers

Thank you!

I do not expect the same functionality from PVE; I just want to avoid having to use a NAS OS to access my data, and I want the ability to access this data as well as generate more of it from my LXCs. I will try my hand at the bind mount.

As for what I have set up to test this - I have a file server with Cockpit / Navigator / File Share. Will post back in here if it works!
 
Thank you so much for your help @Dunuin and @LnxBil! I was able to bind-mount this to one of my containers and confirmed I can access it from within the container via Cockpit Navigator:

Here is the command that I utilized:

[Screenshot of the command used]


Results:

[Screenshot of the data visible in Cockpit Navigator]

Posting this here for anyone in the future who runs into the same situation. Thanks again!
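
Since the screenshot above is not preserved: per the pct syntax discussed earlier, the end result is an mp0 entry in the container's config; a sketch of what that looks like (container ID and paths are illustrative, the actual values were in the screenshot):

Code:
# /etc/pve/lxc/103.conf (excerpt)
mp0: /mnt/horde,mp=/mnt/horde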
 
I am running into another problem now. I was able to get Cockpit Navigator set up and shared via SMB, to access the data from my personal computer. I see all the different dataset "folders" from Windows, but I cannot see any of my data within any of these folders. Any ideas as to why this might be?
 
It looks like I had made a mistake when bind-mounting: I did not mount the actual datasets but rather the pool path (since each dataset is its own filesystem mounted below the pool path, a bind mount of the pool directory alone only shows empty dataset directories). I changed my approach a bit and set up Turnkey File Server within a container to be my NAS, since it seems simple and easy enough to use.

  1. I set up a privileged container with the Turnkey Linux File Server template.
  2. I made changes via the shell to the container config file (nano /etc/pve/lxc/<your container ID>.conf) to add in my datasets, after figuring out each mountpoint via zfs list.
  3. I added three mount points, mp0, mp1 and mp2, one for each dataset I wanted to mount (see the sketch after this list).
  4. I restarted the container, went to Webmin > Tools > File Manager > /mnt, and was able to locate all my datasets and see all my data intact therein!
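
A sketch of what those three entries look like in the container's config file (dataset names and in-container paths are illustrative; the host-side paths come from the zfs list output):

Code:
# /etc/pve/lxc/<container ID>.conf (excerpt)
mp0: /mnt/horde/photos,mp=/mnt/photos
mp1: /mnt/horde/media,mp=/mnt/media
mp2: /mnt/horde/documents,mp=/mnt/documents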

Now my only question is... will I be able to do this from multiple containers to the same dataset without causing issues? I'd like to set up other containers that can access the same datasets as the file manager.
 
Now my only question is... will I be able to do this from multiple containers to the same dataset without causing issues? I'd like to set up other containers that can access the same datasets as the file manager.
If you don't run into ownership problems, yes. The files then need to be owned by the same UIDs/GIDs, unless you want to chmod 777 everything so that everyone has the rights to do everything, which wouldn't be great for security.
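
A quick way to check this is to compare the numeric owners on the host and inside each container that mounts the data (paths and container ID are illustrative):

Code:
ls -ln /mnt/horde/media             # on the PVE host: numeric UID/GID of the files
pct exec 103 -- ls -ln /mnt/media   # the same files as seen from inside container 103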
 
If you don't run into ownership problems, yes. The files then need to be owned by the same UIDs/GIDs, unless you want to chmod 777 everything so that everyone has the rights to do everything, which wouldn't be great for security.

I managed to figure out the ownership setup just using Turnkey's users and groups, for Linux and for Samba. I got access via Samba and was also able to mount this with just read access for a separate privileged container. I was able to get Plex up in that container, reading from the mounted dataset.
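
A sketch of such a read-only mount point for the Plex container, reusing the illustrative paths from above (the ro=1 option makes the bind mount read-only inside the container):

Code:
pct set 104 -mp0 /mnt/horde/media,mp=/mnt/media,ro=1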

Thank you all for helping!!!
 
If you don't run into ownership problems, yes. The files then need to be owned by the same UIDs/GIDs, unless you want to chmod 777 everything so that everyone has the rights to do everything, which wouldn't be great for security.
Well... I'm having ownership problems, but not with the previous container. I wanted to make an unprivileged container and I am not able to get write access to the mounted paths. I was reading through the doc at https://pve.proxmox.com/wiki/Unprivileged_LXC_containers and I made the changes suggested by adding these lines to my conf file for the container:

Code:
# uid map: from uid 0 map 1005 uids (in the ct) to the range starting 100000 (on the host), so 0..1004 (ct) → 100000..101004 (host)
lxc.idmap = u 0 100000 1005
lxc.idmap = g 0 100000 1005
# we map 1 uid starting from uid 1005 onto 1005, so 1005 → 1005
lxc.idmap = u 1005 1005 1
lxc.idmap = g 1005 1005 1
# we map the rest of 65535 from 1006 upto 101006, so 1006..65535 → 101006..165535
lxc.idmap = u 1006 101006 64530
lxc.idmap = g 1006 101006 64530

After I run my command to start the LXC, I get the following error:

Code:
lxc-start: 106: ../src/lxc/conf.c: lxc_map_ids: 3701 newuidmap failed to write mapping "newuidmap: uid range [0-1005) -> [100000-101005) not allowed": newuidmap 2575384 0 100000 1005
lxc-start: 106: ../src/lxc/start.c: lxc_spawn: 1788 Failed to set up id mapping.
lxc-start: 106: ../src/lxc/start.c: __lxc_start: 2107 Failed to spawn container "106"
lxc-start: 106: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: 106: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

I've been scratching my head all night trying to figure out all these permissions. I would not want to just chmod 777 everything, as I've read it's a huge security risk.

Any ideas as to what I could do?
 
Any ideas as to what I could do?
If you only have to bind-mount the data into one container, you can just do the mapping manually, once, by looping over all UIDs with a command like this (please test on some test files before destroying your permission structure):

Code:
find <path> -uid 1000 -exec chown 101000 {} \;

You will want to use more careful, whitespace-aware commands in practice, but this is only to show the general principle.
This has to be done manually for all group IDs as well.
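
A sketch of such a loop, assuming the default unprivileged offset of 100000 and a small, illustrative set of UIDs (test on a copy of some files first):

Code:
# re-own files so that container uid N (mapped to host uid N+100000) owns them
for uid in 1000 1001 1002; do
    find /mnt/horde -uid "$uid" -exec chown "$((uid + 100000))" {} \;
done
# repeat with -gid and chgrp for the group IDs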
 
If you only have to bind-mount the data into one container, you can just do the mapping manually, once, by looping over all UIDs with a command like this (please test on some test files before destroying your permission structure).

In this case I have it bind-mounted to one unprivileged LXC and to one privileged LXC, and I'm having issues with the latter. I went through the doc at https://pve.proxmox.com/wiki/Unprivileged_LXC_containers in detail and ended up with the error I posted.

You will want to use more careful, whitespace-aware commands in practice, but this is only to show the general principle.
This has to be done manually for all group IDs as well.

I just managed to get past it by fixing my /etc/subuid and /etc/subgid (they were missing some lines). I am able to start the unprivileged LXC, and I am able to create a directory in the mounted path.
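
For reference, with the 1005 mapping from the wiki used above, the host-side entries end up looking like this in both files (the first line is the PVE default range, the second allows mapping ID 1005 through):

Code:
# /etc/subuid and /etc/subgid on the PVE host
root:100000:65536
root:1005:1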


Looks like my previous issue is now solved, but now I'm having issues with the containers inside not being able to access the mounted path (radarr via docker-compose). I'm guessing I now have to figure out the container permissions therein..?
 
I've resolved all my issues. Here is what I've done (pardon any language mistakes):

I scrapped my whole approach and decided to create unprivileged LXCs following the guide at https://pve.proxmox.com/wiki/Unprivileged_LXC_containers. Each container that requires access to any location of my ZFS pool gets an mp0: /location/to/data,mp=/mnt/target entry along with the ID mapping provided in the guide. I created one LXC per service that requires any form of read/write access to my ZFS pool. This just helps me keep everything tidy without surprises.
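
A sketch of what one such container config ends up containing, combining the pieces from earlier in the thread (container ID, dataset and target paths are illustrative):

Code:
# /etc/pve/lxc/110.conf (excerpt)
unprivileged: 1
mp0: /mnt/horde/media,mp=/mnt/media
# ...plus the lxc.idmap lines from the wiki guide, as posted earlier in the thread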

On any container that touches the same data as another container with different users and permissions, I have a cronjob that applies chmod 775, just to make sure the files stay accessible across them.
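
A sketch of such a cronjob inside one of those containers (schedule and path are illustrative):

Code:
# /etc/crontab entry: relax permissions on the shared mount every night at 03:00
0 3 * * * root chmod -R 775 /mnt/media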

I still kept one container on one of my nodes to host Docker for any service that does not require permissions outside of the LXC or access to the storage (e.g. Heimdall).

Thank you for all the help @LnxBil and @Dunuin !!
 
