[SOLVED] Accessing existing ZFS pool from VM

piddy

New Member
Feb 13, 2021
Apologies if this is a duplicate of another post - I couldn't find an answer on the forum or in the docs...

On the host machine (running proxmox), I've imported a zfs pool (let's call it 'TANK') that I moved from a previous build (FreeNAS) using the following command in proxmox:
zpool import -f TANK

It's made up of ten 2.7 TB disks in a RAIDZ2 configuration. It contains about 7 TB of media files that I want to be able to access from a guest running Ubuntu Server 20.04 LTS.
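As a rough sanity check on the numbers (a sketch only, ignoring ZFS metadata overhead and TB/TiB rounding): RAIDZ2 spends two disks' worth of space on parity, so:

```shell
# RAIDZ2 usable capacity ~= (disks - 2 parity) * per-disk size.
# Work in tenths of a TB so the arithmetic stays in integers.
DISKS=10
PARITY=2
DISK_SIZE_DTB=27   # 2.7 TB per disk, in tenths of a TB
USABLE_DTB=$(( (DISKS - PARITY) * DISK_SIZE_DTB ))
echo "approx ${USABLE_DTB%?}.${USABLE_DTB#??} TB usable"
```

That gives roughly 21.6 TB of usable space, so the 7 TB of media fits comfortably.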

My question is, what's the 'best' way to do that? By 'best', I'm aiming for a solution that:
1) Doesn't lose any of the information already on TANK
2) Makes the existing files and directories on TANK available to the Ubuntu guest so it can create, read, update and delete them
3) Doesn't compromise the data integrity delivered by zfs
4) Maximises read and write speeds

I've successfully added this pool as storage within the GUI...
'Datacenter'>'Storage'>'Add'>'ZFS', with options
  • 'ID': TANK
  • 'ZFS Pool': TANK
  • 'Content': Disk image
  • 'Nodes'=>All (No restrictions)
  • 'Enable': selected (checked)
  • 'Thin provision': selected (checked)
  • 'Block Size': 4k
I've added a hard disk to the virtual machine...
'VM 100'>'Hardware'>'Add'>'Hard disk', with options
  • Bus/Device: VirtIO Block
  • Storage: TANK:vm-100-disk-2
  • Disk size (GiB): 20000
  • Cache: Default (No cache)
  • Discard: not selected (unchecked)
  • SSD emulation: not selected (unchecked)
  • IO thread: not selected (unchecked)
  • Read limit (MB/s): unlimited
  • Write limit (MB/s): unlimited
  • Read limit (ops/s): unlimited
  • Write limit (ops/s): unlimited
  • Backup: not selected (unchecked)
  • Skip replication: not selected (unchecked)
  • Read max burst (MB): default
  • Write max burst (MB): default
  • Read max burst (ops): default
  • Write max burst (ops): default
Within the VM, fdisk -l provides the following output:
...
Disk /dev/vda: 19.54 TiB, 21474836480000 bytes, 41943040000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
...

And the relevant /etc/fstab entry in the guest (ubuntu server OS) is:
'/dev/vda /TANK ext4 defaults 0 0'

When I run 'sudo mount -a', I get the error:
mount: /TANK: wrong fs type, bad option, bad superblock on /dev/vda, missing codepage or helper program, or other error.

In case it's relevant, I don't use a ZIL, L2ARC etc. The pool was created with 'zpool create TANK raidz2 /dev/sda /dev/sdb ... /dev/sdj'
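For context on that mount error: the newly added 20000 GiB disk is a fresh zvol, so /dev/vda contains no filesystem yet, and there is no ext4 superblock for mount to find; it would need to be formatted first. A sketch of the idea, demonstrated on a scratch file image rather than a real (and destructive) disk:

```shell
# A brand-new virtual disk has no filesystem; formatting creates the
# ext4 superblock that 'mount' was unable to find.
IMG=$(mktemp)                   # scratch file standing in for /dev/vda
truncate -s 64M "$IMG"
mkfs.ext4 -q -F "$IMG"          # on the real guest: mkfs.ext4 /dev/vda
FSTYPE=$(blkid -o value -s TYPE "$IMG")
echo "filesystem on image: $FSTYPE"
rm -f "$IMG"
```

Note that formatting the real /dev/vda would erase whatever is on it, and it would only give the guest an empty ext4 volume, not the existing files on TANK.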

Help please.
 

Ramalama

Member
Dec 26, 2020
You don't use fstab to import a ZFS pool; do this instead:

systemctl enable zfs-import@XXXXXX
XXXXX = your zpool name, TANK or whatever.
Then it should import your pool at every reboot. Or you can do it manually with systemctl start/restart/status etc...

Cheers :)
 

Ramalama

Member
Dec 26, 2020
Oh, I misread, you don't have that service at all xD
Because you're doing this in some sort of VM, and that's not Proxmox xD

however, do this:

1. create a service:
/etc/systemd/system/zfs-import@.service
Code:
[Unit]
Description=Import ZFS pool %i
Documentation=man:zpool(8)
DefaultDependencies=no
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -d /dev/disk/by-id -o cachefile=none %I

[Install]
WantedBy=multi-user.target

Check that /sbin/zpool exists...
or find it with
which zpool

then you do a
systemctl daemon-reload

and afterwards again the "systemctl enable zfs-import@XXXXXX" command...

If you don't have zpool, well, that's another story.
Dunno why you want that pool in a VM, but try the service first xD
 

piddy

New Member
Feb 13, 2021
Thanks for the speedy reply.

'Dunno why you want that pool in a VM' ... I'd like to run Plex in the VM, so I'm trying to set up access to the files on the zpool. Whether or not the guest system sees the files as stored on a zpool is unimportant. Hope that clarifies the request.
 

Ramalama

Member
Dec 26, 2020
I do practically the same with Plex, but a bit differently.

My pools are all on the Proxmox host.
And I have 3 unprivileged LXC containers:
1. Ubuntu 20.04 LXC as Samba server
2. Ubuntu 20.04 LXC as Plex server
3. Ubuntu 20.04 LXC as Jellyfin server

And then I simply mount the folders with the movies/series etc. that are stored in the ZFS pool inside those 3 containers...

Afterwards you can modify the user/group ID of the jellyfin/plex/samba user so that they all have the same UID/GID, because the ownership of the movies/series in the ZFS pool is stored as ID numbers anyway...

And well, you will save at least the emulation overhead. If you need to pass through the GPU for decoding, that works the same way: you just "mount" the device into the container instead of passing it through.

If you are interested in this, I can gather the useful links and write a mini howto.

But this isn't what you are asking for, it's just the way "I would do it" :D

Did that service work?

Cheers
 

piddy

New Member
Feb 13, 2021
I installed zfs-utils in the guest and created that service on the guest, then rebooted. After the reboot, nothing seemed different: 'zpool list' returned no pools. If it had worked, where would I find the zpool TANK mounted?

I really only want plex media server, so using a container would be better. I'd really appreciate a mini howto. Thanks for offering.
 

Ramalama

Member
Dec 26, 2020
zpool status
is your friend. And if the zpool mounts, it's usually in root, like
/TANK/....

I will write a small howto this evening.
 

Ramalama

Member
Dec 26, 2020
Then it won't work.
You need a kernel that supports ZFS etc...
Like any distro that has ZFS built in, Proxmox for example.... But Proxmox comes with all that ZFS overhead. I'll write you a small howto now.
 

Ramalama

Member
Dec 26, 2020
Okay, first you mount your pool on the Proxmox host itself.
You don't need any of the services from my previous posts for that, because the Proxmox host already has them.

Preword:
My example zpool name: ZFSSharePool

Just import it and for automatic import at boot time later, you can activate the service simply with:
systemctl enable zfs-import@ZFSSharePool

Even if you just attached the disks to the Proxmox host, you can mount it directly with
systemctl enable zfs-import@ZFSSharePool
systemctl start zfs-import@ZFSSharePool
But this may not work; it depends on your board/system whether it is hot-pluggable. If not, the system simply won't see the disks and you need a reboot after you attach them physically.

After the successful import, your storage is located at:
/ZFSSharePool

From here you can already change the permissions on that storage, for example:
chown -R 250:250 /ZFSSharePool/
250:250 (user:group) is an example; you can use whatever you want, even any existing Proxmox user/group.
Just don't use "root"! And I would recommend using a number higher than 200.

Next you create your container; let's say you make the Plex container first...
Create CT -> remove the checkmark next to "Unprivileged container" -> the rest you can set as you want.
Typically 1 GB RAM / 16 GB HDD / 2 cores is fine!
I would prefer an Ubuntu 20.04 container over Debian 10, simply because it's newer, but that's up to you.
After you have created the container, I would additionally enable "nesting=1" under Options. (Gives it better performance.)

After you have created the container, SSH into your Proxmox host and edit:
/etc/pve/lxc/XXX.conf (your LXC container ID)
And you need to add this line at the bottom:
mp0: /Folder_in_the_Host,mp=/folder_inside_container/whatever_you_want
Example: mp0: /ZFSSharePool/Movies,mp=/STORAGE/Movies
You can share the whole ZFS volume if you want, but I would prefer to share only folders inside it if you use that storage for other things too...
You can mount as many volumes as you want: mp0, mp1, mp2, mp3, .....
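Put together, a container config with such mount points might look like this (a sketch; the container ID 101 and everything except the mp lines are illustrative values, not something from this thread):

```
# /etc/pve/lxc/101.conf  (101 is an example container ID)
arch: amd64
cores: 2
hostname: plex
memory: 1024
ostype: ubuntu
rootfs: local-lvm:vm-101-disk-0,size=16G
features: nesting=1
# host folders from the ZFS pool, bind-mounted into the container
mp0: /ZFSSharePool/Movies,mp=/STORAGE/Movies
mp1: /ZFSSharePool/Series,mp=/STORAGE/Series
```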

Then you boot up your LXC container and install Plex.
After installing, stop Plex: systemctl stop plexmediaserver
The important thing here is to change the UID/GID of the plex user, because the service runs as that user.
1. id plex
Write down the uid and gid numbers, because you need them later.
Let's say, for example, my plex had uid=116 and gid=108; you'll see why in a moment.
2. usermod -u 250 plex && groupmod -g 250 plex
As you remember from above, I used 250:250; use whatever number you chose.
3. find /etc/ -user 116 -exec chown -h plex {} \; && find /etc/ -group 108 -exec chgrp -h plex {} \;
Here we fix Plex's files from the old 116:108 to the new 250:250 uid/gid.
But /etc alone isn't enough; we need to do the same for /var and /lib:
find /var/ -user 116 -exec chown -h plex {} \; && find /var/ -group 108 -exec chgrp -h plex {} \;
find /lib/ -user 116 -exec chown -h plex {} \; && find /lib/ -group 108 -exec chgrp -h plex {} \;
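Those find commands need root and touch real system directories; as a safe way to see what the pattern does, here is a sketch against a throwaway directory, with the current user's uid standing in for Plex's old uid 116 (the paths are made up for the demo):

```shell
# Build a small throwaway tree to stand in for /etc, /var, /lib.
TMP=$(mktemp -d)
touch "$TMP/a.conf" "$TMP/b.conf"
mkdir "$TMP/sub" && touch "$TMP/sub/c.conf"

# Same shape as 'find /etc/ -user 116 -exec chown ...', but only
# counting matches, with the current uid in place of the old 116.
COUNT=$(find "$TMP" -user "$(id -u)" -type f | wc -l)
COUNT=$((COUNT))   # normalize any whitespace padding from wc
echo "$COUNT files would be re-owned"

rm -rf "$TMP"
```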

You are done now & you can start Plex again:
systemctl start plexmediaserver
And setup how you want, add your movies and so on.

Next container:
- If you want something else, for example Jellyfin or literally anything, you do exactly the same as with Plex.

Samba container:
- You create the container with the mount points like the Plex container. Here you can mount the whole storage if you want.
- But for Samba you don't need to change any permissions at all; for Samba you can simply create any user with the right ID.
For example, I normally create a group for all Samba users:
groupadd -g 250 sambagroup
Then I create myself as the privileged user (I have the 250 ID):
adduser --system --no-create-home --uid 250 --ingroup sambagroup ramalama
You can create more users if you need them, like: adduser --system --no-create-home --ingroup sambagroup nerywife
Then add them to Samba (give them a Samba password): smbpasswd -a ramalama, and the same for the wife if you need it xD

And you are done.

Here is a good samba config, just edit it as you need:
Code:
[global]
   workgroup = WORKGROUP
   log file = /var/log/samba/log.%m
   max log size = 1000
   logging = file
   panic action = /usr/share/samba/panic-action %d
   server role = standalone server
   obey pam restrictions = yes
   unix password sync = no
   passwd program = /usr/bin/passwd %u
   passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
   pam password change = yes
   map to guest = bad user
   server min protocol = SMB2_10
   client min protocol = SMB2

# Allow users who've been granted usershare privileges to create
# public shares, not just authenticated ones
   usershare allow guests = yes

# Printer
   printing = cups
   printcap name = cups
   rpc_server:spoolss = external
   rpc_daemon:spoolssd = fork
  
#======================= Share Definitions =======================

[printers]
   comment = All Printers
   browseable = no
   path = /var/spool/samba
   printable = yes
   guest ok = no
   read only = yes
   create mask = 0700

[print$]
   comment = Printer Drivers
   path = /STORAGE/print_drivers
   browseable = yes
   read only = yes
   guest ok = no
   write list = root

[Data]
   comment = Server Data
   path = /STORAGE/Data
   guest ok = no
   browseable = yes
   valid users = root, ramalama, nerywife
   write list = root, ramalama, nerywife
   create mask = 0775
   force create mode = 0775
   force user = ramalama
   force group = sambagroup
   include = /etc/samba/recycle.conf

[Papierkorb]
   comment = Deleted Files
   path = /STORAGE/Trashbin
   guest ok = no
   browseable = yes
   valid users = root, ramalama
   write list = root, ramalama
   create mask = 0775
   force create mode = 0775
   force user = ramalama
   force group = sambagroup

[Movies]
   comment = Movies
   path = /STORAGE/Movies
   guest ok = no
   valid users = root, ramalama
   write list = root, ramalama
   browseable = yes
   create mask = 0775
   force create mode = 0775
   force user = ramalama
   force group = sambagroup

If you don't use a CUPS server, you can remove all the printer parts etc...

Additionally, I would recommend installing wsdd on the Samba server, so that you can see it in your Windows Explorer:
https://github.com/christgau/wsdd

Cheers xD
 
