ecotechie

Member
Nov 11, 2023
Hi all,

I've been wrangling with how to set up storage on my two-node cluster for a while. The main node has two 4 TB SSDs (currently btrfs RAID1) that I can't pass through to a VM, since they are on the same bus as one of the two 1 TB SSDs (btrfs RAID1) used for Proxmox. The secondary node has a 1 TB SSD for Proxmox and a 4 TB SSD for storage (not really configured yet).

I'd like this system to be as versatile as possible. Ideally I'll have a drive shared between nodes, VMs, and LXCs; currently I'm thinking the 4 TB SSD RAID. I plan to use the main node as a media storage/server, leaning as much as possible on LXCs in order to share resources better. It will also eventually be opened to the internet as a web server, NextCloud, and other possible uses... That part I can deal with, but the initial storage configuration is not something I've been able to grasp.

The main node could hold about six SATA drives, but I'm trying to keep expenditures low. The secondary node is tapped out in terms of drives.

Some thoughts I've had:

1. Set up ZFS over iSCSI on the main node. However, I am confused as to how it really works. I thought I could do it all on the host, but it seems I need to set the drive up on a remote (virtualized?) machine?

2. Set up NFS on the host and share it with all the VMs, LXCs, and the other node. This may be the way I'll do it... though I think it may complicate things with regard to snapshots. Not sure where I'd store them then. Maybe the 1 TB RAID drive?

3. An OpenMediaVault LXC. I've seen how to do this and it should work, after some configuration... I'm trying to avoid setting up TrueNAS in a VM and having it eat up resources.

Currently the network is running at 1 Gbit and I don't plan on upgrading to 10 Gbit unless I see a need or until the setup becomes more robust.

Any recommendations on how to set up a shared drive for files would be appreciated.
 
Thanks @Pifouney, I did have a look at both those solutions but only have two nodes. My understanding is that I would need at least three and ideally an odd number.
 
You are correct. For two nodes, such a small capacity, and a 1 Gbit network, your easiest solution would be a consumer-grade external NAS.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17, I'd like to avoid getting yet another thing at the moment. I think I should be able to make do with what I have, considering the versatility of Proxmox and the currently available hardware/storage.
 
If you install a NAS head on one node, you will have "shared" storage that is localized to that one node. Both nodes can access it until node1 is down; then you won't have any access.

You can create two NAS heads, whether TrueNAS or similar, in VMs, and use their synchronization technology to mirror the data between the nodes. That seems to conflict with your desire to minimize resource usage.

You can create a not-recommended Ceph config with two nodes; whether it's worth the trouble, only you can decide.

There are many "hacks" you can implement. However, Proxmox is a hypervisor, not a NAS solution, so I am not sure PVE's versatility is going to help you build a plane out of sticks.

You may need to define your exact requirements (shared, HA, recovery, access, resources, difficulty of implementation and troubleshooting, etc) and then remove those that conflict with your infrastructure limitations.

P.S. I hope those disks are in TB, not GB


 
For minimal resource usage, install an NFS server on the "main" PVE node directly, format the disks with LVM and your favorite filesystem (I'd probably avoid ZFS, as the data stored will be qcow), mount it permanently, and feed it back to PVE as an NFS mount.
You can't store snapshots outside of the primary backing filesystem.
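A minimal sketch of that recipe, assuming the two 4 TB disks show up as /dev/sdb and /dev/sdc (the device names, paths, node IP, and storage ID here are all assumptions; adapt to your layout):

```shell
# On the main PVE node: install the kernel NFS server
apt install nfs-kernel-server

# Pool the two 4 TB disks with LVM (use --type raid1 on the
# lvcreate if you want mirroring instead of a spanned volume)
pvcreate /dev/sdb /dev/sdc
vgcreate vault /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n data vault
mkfs.ext4 /dev/vault/data

# Mount it permanently (add a matching line to /etc/fstab)
mkdir -p /mnt/vault
mount /dev/vault/data /mnt/vault

# Export it via /etc/exports, then feed it back to PVE as NFS
# storage so both nodes see the same share
exportfs -ra
pvesm add nfs vault-nfs --server 192.168.0.10 \
    --export /mnt/vault --content images,rootdir,backup
```

Adding the storage with pvesm (rather than mounting it by hand on each node) lets PVE manage availability and content types at the datacenter level.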

Call it a day.


 
Yeah, I think that may be the way I do it for now unless I hear of something I've missed or better options. And yes, I mean TB not GB (original post updated). :D
 
Okay, I ended up setting up the NFS server on the main node. I have it shared at the datacenter level and will start configuring containers and VMs soon. One question, though, regarding permissions and security: should no_root_squash really be used? Is there a "better" way? What do/would you all do here?

This is the contents of my /etc/exports file:
/mnt/btr-vault/vault 192.168.0.0/16(rw,subtree_check,no_root_squash)

The mount is on a subdirectory, since I'm using the base mount for other Proxmox storage (templates, images, volumes, etc.). The idea is that I can sub-divide this partition into NFS shares that serve different needs, such as media only, files, etc. Though I guess I could just set this one share to cover them all and mount it at /mnt/btr-vault/ (not as versatile?)
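The sub-divided layout could look something like this in /etc/exports (the share names, options, and uid/gid are assumptions for illustration; only the first line is from the actual setup):

```shell
# /etc/exports — one export per use case
/mnt/btr-vault/vault  192.168.0.0/16(rw,subtree_check,no_root_squash)
/mnt/btr-vault/media  192.168.0.0/16(rw,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
/mnt/btr-vault/files  192.168.0.0/16(rw,no_subtree_check,root_squash)

# Apply changes without restarting the NFS server
exportfs -ra
```

Per-subdirectory exports let each share carry its own squash and access options, which a single export at /mnt/btr-vault/ could not.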

Any thoughts, advice? Thanks!
 
Should no_root_squash really be used? Is there a "better" way? What do/would you all do here?
There is always a better way. In a high-security enterprise environment, nobody but super-admins should have unrestricted access to root. A rogue host on the network should not be able to mount an NFS export and get root access, which is what no_root_squash allows.
The NFS export should be IP-restricted, all files should have appropriate permissions, and no one should have root access.
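A tightened sketch of that advice (host IPs and path are assumptions), restricting the export to specific clients and relying on root_squash, which is the default:

```shell
# /etc/exports — list only the hosts that need access, instead of a /16;
# root_squash maps a client's root to the unprivileged "nobody" user
/mnt/btr-vault/vault  192.168.0.10(rw,no_subtree_check,root_squash) 192.168.0.11(rw,no_subtree_check,root_squash)
```

Caveat: PVE itself mounts its storage as root, so an export consumed by Proxmox storage still generally needs root write access; this pattern fits the non-PVE shares.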

That's the theory; how it works in real life is often different, and how it works in a home/closet setup is almost always different. With Proxmox you are also limited to working around the root-initiated mount; you can't change the client ID.
NFSv3 is very limited in its security options; your primary control with v3 is the IP ACL. NFSv4 is more advanced.
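For context, the usual NFSv4 layout on Linux bind-mounts shares under a single pseudo-root exported with fsid=0, so clients mount paths relative to that root (all paths and the subnet here are assumptions):

```shell
# Build a pseudo-root and bind-mount the real share under it
mkdir -p /srv/nfs4/vault
mount --bind /mnt/btr-vault/vault /srv/nfs4/vault

# /etc/exports for NFSv4:
# /srv/nfs4        192.168.0.0/24(ro,fsid=0,no_subtree_check)
# /srv/nfs4/vault  192.168.0.0/24(rw,no_subtree_check,root_squash)

# A client then mounts relative to the pseudo-root:
# mount -t nfs4 192.168.0.10:/vault /mnt/vault
```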

How to configure and experiment with any of this is outside the scope of this forum. There are many NFS guides on the internet.

Good luck.


 
Thanks for the information and help. I hear that you're not going to share more on this subject; understandable. However, I'm guessing there may be others who have done just what I'm attempting in their homelabs and would be willing to share how they've done it.

Currently I have it working as stated above, but I'm trying to read up on best practices within the Proxmox limitations. That's not easy, since:
1. Most people don't have Proxmox directly serving NFS.
2. Those who aren't using Proxmox suggest not using root as the user.
 