Structuring shared storage

StanM
Mar 5, 2021
My goal is to use shared storage for VMs, including Windows and Ubuntu machines. However, I'm new to storage concepts and haven't been able to make it work. I have a new server running Proxmox 7.4-3. (It is stand-alone at the moment but will eventually be joined into a cluster with two small servers.) I've installed TrueNAS Scale as a VM on the server and created a Zvol in TrueNAS. Storage in TrueNAS is configured as a ZFS RAIDZ1 pool.

I've added the Zvol to the Proxmox server as iSCSI storage. Following a post in the forums, I've tried to add an LVM on top of the iSCSI storage, but that fails. I haven't added a LUN for the Zvol in TrueNAS, as described in a post or video I saw some time back. My questions are: what is the best structure for making the TrueNAS block storage available for VMs in Proxmox, and how can I implement it? Should I add a LUN in TrueNAS or in Proxmox? Will VMs recognize a LUN, or do I need to add LVM on top of the LUN? I prefer to use the GUI but am able to use the CLI when needed.
 
So, if I understand correctly, you will have one PVE node with storage and maybe two other ones without storage, but all of them should use the storage from the first node?
 
I've added the Zvol to the Proxmox server as iSCSI storage.
What exactly does Proxmox see? I.e., the outputs of:
- lsblk
- pvesm status
- cat /etc/pve/storage.cfg

I've tried to add an LVM on top of the iSCSI storage, but that fails.
Can you describe exactly what you did and what exact error you received?
what is the best structure for making the TrueNAS block storage available for VMs in Proxmox
The whole setup appears to be a home lab. If you just want to set this up and forget it, use NFS. If your goal is to experiment and learn, there are many guides/videos/blogs that walk you through the setup.
Should I add a LUN in TrueNAS or in Proxmox?
Connecting iSCSI to PVE is the act of presenting a LUN to Proxmox, so you may have already done it.
Will VMs recognize a LUN
They could, if you pass through the entire LUN to a VM. That's probably not what you want to do.
do I need to add LVM on top of the LUN
If you stick to iSCSI, then yes, that's the easiest way to do it (a rough sketch follows below).
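For reference, a rough CLI sketch of both options. Storage names, the NFS export path, the IQN/WWID and the volume group name below are placeholders, not anything that already exists on your systems:

# Option 1: NFS share exported from TrueNAS (simplest to manage)
pvesm add nfs vm-nfs --server <truenas-ip> --export /mnt/<pool>/<dataset> --content images

# Option 2: iSCSI LUN with LVM on top
# Register the iSCSI target; "content none" because the LUN is only a base for LVM
pvesm add iscsi truenas-iscsi --portal <truenas-ip> --target <target-iqn> --content none

# Create a volume group on the presented LUN (find the device under /dev/disk/by-id/)
pvcreate /dev/disk/by-id/scsi-<wwid>
vgcreate vg_truenas /dev/disk/by-id/scsi-<wwid>

# Add the VG as LVM storage; "--shared 1" matters once the node is clustered
pvesm add lvm vm-lvm --vgname vg_truenas --shared 1 --content images

The Datacenter > Storage > Add dialogs in the GUI do the same thing; for LVM you select the iSCSI storage as "Base storage" and the LUN as "Base volume", and PVE creates the volume group for you.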


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
floh8: The first two nodes have been running for a long time with their own storage. My intent is to join the new node into a cluster with the other two and make the shared storage available to all three nodes. But not until I get the new node set up to my satisfaction.

bbgeek17: Here are the outputs. First for lsblk:

NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
├─sda1     8:1    0  1007K  0 part
├─sda2     8:2    0     1G  0 part
└─sda3     8:3    0 930.5G  0 part
sdb        8:16   0 931.5G  0 disk
├─sdb1     8:17   0  1007K  0 part
├─sdb2     8:18   0     1G  0 part
└─sdb3     8:19   0 930.5G  0 part
sdc        8:32   0   400G  0 disk
sdd        8:48   0   400G  0 disk
zd0      230:0    0    32G  0 disk
├─zd0p1  230:1    0     1M  0 part
├─zd0p2  230:2    0   512M  0 part
└─zd0p3  230:3    0  31.5G  0 part

sda and sdb are my Proxmox SATA SSDs, which are the boot drives and basic storage for the node. sdc and sdd are the size of the Zvol I've connected to Proxmox; I don't know why there are two of them. Maybe that's a problem. I don't know what zd0 is. I've set the two SATA SSDs in a ZFS mirrored configuration, so maybe it's related to the mirrored ZFS.

For pvesm status:

Name          Type      Status        Total        Used    Available       %
ISO_NAS       nfs       active    468008960   359017472    108450816  76.71%
SJMstorage    iscsi     active            0           0            0   0.00%
local         dir       active    931601792     1870976    929730816   0.20%
local-zfs     zfspool   active    941037540    11306680    929730860   1.20%

The SJMstorage entry is the Zvol that I've connected to Proxmox.

For cat /etc/pve/storage.cfg:

dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

nfs: ISO_NAS
        export /VMs
        path /mnt/pve/ISO_NAS
        server 192.168.10.4
        content iso
        prune-backups keep-all=1

iscsi: SJMstorage
        portal 192.168.10.42
        target iqn.sjm.storage:sjm.storage
        content none

In Datacenter, when I try to add LVM on top of the iSCSI storage (with the base volume shown as CH 00 ID 0 LUN 0), I get this error message:

create storage failed: pvcreate '/dev/disk/by-id/scsi-36589cfc000000f063bf533009d6d1636' error: Cannot use device /dev/sdc with duplicates. (500)

This suggests that the sdc/sdd duplication may be the problem. I've removed the storage via the web interface, but sdc and sdd still show up in lsblk. I'll try to figure out how to remove those, but if you have a quick answer, it would be welcome.
 
bbgeek17: Thanks for the reference to iSCSI multipath, but I'm not sure that I need that. I assume that once I succeed in establishing the iSCSI block storage in Proxmox, it will be shared with the other nodes when I cluster them.
In the meantime, I've deleted the iSCSI storage from the node. It doesn't show up in storage.cfg now, but the sdc and sdd devices still show up in lsblk. I assume that if I try to attach the TrueNAS Zvol again, I'll just get another device, sde, and the attempt to build an LVM on top will fail for the same reason. I've been groping around to find out how to remove sdc and sdd, but haven't found where they are. (And if I find them, I'm uncertain about whether to just nano in and delete the references.) Can you suggest how to remove sdc and sdd? I'm obviously not very fluent in Linux.
 
Thanks for the reference to iSCSI multipath, but I'm not sure that I need that. I assume that once I succeed in establishing the iSCSI block storage in Proxmox, it will be shared with the other nodes when I cluster them.
You established an iSCSI connection between TrueNAS and PVE successfully. Your output indicates that you in fact had at least two viable network paths between the SAN and PVE, so you inadvertently established multiple paths. The error you received when trying to place LVM on top told you that you have multiple paths and should do something about it.
Implementing multipath is the right solution in general.

Multipath has nothing to do with the storage being shared. For that, the other nodes need to have access to the storage via the same method, and you need to tick the "shared" attribute in the PVE config. If the other nodes will also have multiple paths to the SAN, you will need to configure multipath there too.
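For completeness, a minimal sketch of what that looks like on the PVE side. The WWID is the one from your pvcreate error; the config follows the common "blacklist everything, whitelist the SAN LUN" pattern and the VG name is just an example:

apt install multipath-tools

# /etc/multipath.conf - minimal example; verify the WWID with
#   /lib/udev/scsi_id -g -u -d /dev/sdc
defaults {
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "36589cfc000000f063bf533009d6d1636"
}

systemctl restart multipathd
multipath -ll        # should show a single mpath device with two paths (your sdc and sdd)

# create the LVM physical volume / volume group on the multipath device,
# not on /dev/sdc or /dev/sdd directly
pvcreate /dev/mapper/mpatha
vgcreate vg_truenas /dev/mapper/mpatha

Once the VG exists on top of the multipath device, it can be added as (shared) LVM storage in the GUI as before.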

A reboot of your host should take care of removing non-persistent iSCSI devices. You can also use "iscsiadm -m node --logout" and "iscsiadm -m node -o delete"; see the sketch below.
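Spelled out with the target name from your storage.cfg (a sketch using standard open-iscsi commands):

iscsiadm -m session                                        # list active sessions
iscsiadm -m node -T iqn.sjm.storage:sjm.storage -u         # log out of the target
iscsiadm -m node -T iqn.sjm.storage:sjm.storage -o delete  # remove the stored node record

After that, sdc and sdd should disappear from lsblk without a reboot.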


 
I'm obviously not very fluent in Linux.
That's not good when you want to build such an environment. I suggest you not use the TrueNAS solution, because of its overhead. If you need a GUI, then I would suggest OMV (openmediavault) with the extra plugin for ZFS support. I think it is simpler to use neither of them: you can create a ZFS mirror with the Proxmox GUI, and do the rest of the configuration from the console. You create a new dataset and export it via NFS (a sketch follows below), so you will have to read up on ZFS and NFS.

The other thing is that you use 2x 1 TB disks for your Proxmox installation and 2x 400 GB for VMs. It would make more sense to swap those.
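A rough sketch of that route, assuming the mirror ends up as a pool called "tank" (the pool name, dataset name, subnet and storage name are all examples to adjust):

# on the node that holds the ZFS pool
zfs create tank/vmstore
apt install nfs-kernel-server
echo '/tank/vmstore 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# on each PVE node (including this one), add it as shared NFS storage
pvesm add nfs vm-nfs --server <ip-of-storage-node> --export /tank/vmstore --content images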
 
bbgeek17: I can't get past one final obstacle; maybe you can help. In Proxmox, I've added an LVM on top of the LUN that's imported from TrueNAS. I'm trying to install a Windows VM on that LVM. I get as far in the Windows installation as the installer finding the LVM-backed disk, but after it begins to install Windows on it, it gives me error codes. Online searches of the codes generally advise deleting some drives (although I only have this one showing) and give other advice that doesn't seem applicable. There is no option to format the drive.

I've read that VMs under Proxmox, including Windows, are installed on NAS iSCSI shares, so there must be a way. I haven't found a way to format an LVM volume with NTFS or something else that Windows would use, but I haven't spent much time on that yet. I would have thought that the Windows installer could format it. Maybe it doesn't recognize the drive as bootable, but I'd have thought the installer could add a boot partition.

I suspect that this is a conceptual issue that I just don't understand. If you have any thoughts, I'd value them.
 
I suspect that this is a conceptual issue that I just don't understand. If you have any thoughts, I'd value them.
My suspicion: bad NAS or PVE (as iSCSI client) performance, either due to being underpowered or due to network issues (i.e. retransmits, etc.).

You may want to look at https://kb.blockbridge.com/technote/proxmox-tuning-low-latency-storage/. Not everything there will be directly applicable, but many things will.

Check the PVE logs for I/O errors and test the baseline performance of your iSCSI storage; a sketch of both is below.
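For example (a sketch; "vg_truenas" stands in for whatever your iSCSI-backed volume group is actually called):

# kernel log: look for I/O errors or iSCSI connection problems
journalctl -k | grep -iE 'i/o error|iscsi|sd[cd]'

# rough baseline of the iSCSI-backed storage using a throwaway test LV
apt install fio
lvcreate -L 10G -n fiotest vg_truenas
fio --name=seqwrite --filename=/dev/vg_truenas/fiotest --rw=write --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based
fio --name=randread --filename=/dev/vg_truenas/fiotest --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based
lvremove -y vg_truenas/fiotest

If throughput on the LV is far below what the NAS can do locally, the problem is in the network/iSCSI path rather than in Windows.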

