Proxmox To TrueNAS via iSCSI

abulhallaj
Hi there,
I'm coming from ESXi and vCenter, so my question may be dumb! This is my first Proxmox setup.
Is there any best practice for connecting Proxmox to TrueNAS via iSCSI?

If not, let me ask some questions and answer them:

- What's the best LUN structure?
1- One Big LUN, One Big LVM
2- One Big LUN, Multiple LVMs
3- Multiple LUNs, One Big LVM
4- Multiple LUNs, One LVM over every LUN

- What's the best filesystem setup in TrueNAS for thin-provisioned VM disks via Proxmox?
Currently I am on scenario 4, but it doesn't support qcow2 (QEMU) disk images, so I must use raw and the VM disks are thick-provisioned.
 
Best would be ZFS over iSCSI, if you do not have a TrueNAS cluster. Another way would be option 4.
 
Hi @abulhallaj ,
The answer to your question depends on a few things:
a) the type of disks in your TrueNAS: HDD vs NVMe
b) The structure of your backend (TrueNAS) pool
c) The intended use case in PVE
d) Home vs Business use
likely a few more I am forgetting now.

1- One Big LUN, One Big LVM
This would be the most common and simplest approach (see the setup sketch after this list).
2- One Big LUN, Multiple LVMs
If your storage appliance runs on HDDs and/or has them all in a single RAID of some sort, then it becomes inefficient to parallel-write data over the network to this backend pool with multiple queues.
*If by multiple LVMs you mean many PVE storage pools - that also becomes harder to manage with growth.
3- Multiple LUNs, One Big LVM
If your storage appliance already has all the disks in RAID (which it very likely does), you would just be splitting it into pieces and gluing it back into one piece again.
4- Multiple LUNs, One LVM over every LUN
It depends on what you mean by "one LVM per LUN". If you meant a PVE storage pool - again, that's inefficient and hard to manage day to day.
If you meant LUN > VM image (disk), this could be your desired approach given the constraints. You can, but don't need to, use an LVM layer.
This option does require extensive manual management for daily disk administration tasks. It would also be the closest approach to the ZFS/iSCSI scheme, which is directly supported by PVE. If you manage to get it working, it would be the most "seamless" integration, well suited to a homelab.
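
For reference, a minimal sketch of option 1 on the PVE side, assuming a hypothetical portal IP and target IQN (adjust to your environment):

```
# On each PVE node: discover and log in to the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.168.10.50
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:pve -p 192.168.10.50 --login

# On one node only: put LVM on the new LUN (assuming it appears as /dev/sdb)
pvcreate /dev/sdb
vgcreate vg_tnas /dev/sdb

# Register the VG as shared storage so every cluster node can use it;
# PVE then allocates one raw LV per VM disk out of this VG
pvesm add lvm tn-lvm --vgname vg_tnas --shared 1 --content images,rootdir
```

Note that PVE's (non-thin) LVM storage stores raw volumes only, which is why qcow2 and thin provisioning are unavailable in this scheme.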


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I must correct my second sentence. Another way is to use multiple LUNs with one VG, but multiple LVs. That's your option 3, I think.
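
A minimal sketch of that layout, assuming the two LUNs show up as /dev/sdb and /dev/sdc after iSCSI login (hypothetical device names):

```
# One volume group spanning both LUNs; PVE then carves one LV
# per VM disk out of the combined VG
pvcreate /dev/sdb /dev/sdc
vgcreate vg_tnas /dev/sdb /dev/sdc
pvesm add lvm tn-lvm --vgname vg_tnas --shared 1 --content images,rootdir
```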
 
Just posting to make sure you know this: https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_types

Utilizing iSCSI seems to be... difficult, as there are some restrictions, feature-wise.

Disclaimer: I do not use that.
I need shared storage for cluster flexibility and the ability to migrate VMs between hosts.


The answer to your question depends on a few things:
a) the type of disks in your TrueNAS: HDD vs NVMe
b) The structure of your backend (TrueNAS) pool
c) The intended use case in PVE
d) Home vs Business use
likely a few more I am forgetting now.
My TrueNAS is an HP DL380 G9 with 28 drive bays:
a) SAS 10K and SSD drives
b) RaidZ1 over SAS with spare and log/cache over SSD
c) VPS with automated provisioning and HA
d) Business use

Out of curiosity, why limit yourself to iSCSI? Filestore (NFS) is an option; it's simpler to use and performs similarly under many circumstances.
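
For comparison, a minimal sketch of the NFS route, assuming a hypothetical export /mnt/tank/pve on the TrueNAS box:

```
# NFS storage supports qcow2, so thin provisioning and snapshots work
pvesm add nfs tn-nfs \
    --server 192.168.10.50 \
    --export /mnt/tank/pve \
    --content images,rootdir
```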
I read somewhere that running VMs over NFS has performance issues, so I decided to use iSCSI.

I also read somewhere that ZFS over iSCSI is hard to implement and resource-intensive.

As I said in the first post, this is a production environment based on ESXi: until now, every LUN was simply connected to every ESXi server as needed, formatted with VMFS, and used. Now, because of licensing, we are migrating to Proxmox, so I can't recreate the LUN structure, and my first approach was LVM over iSCSI.

Recently I found https://github.com/TheGrandWazoo/freenas-proxmox, which seems to reduce the pain of implementing ZFS over iSCSI, but it still needs some more research.
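
For reference, ZFS over iSCSI is configured as an entry in /etc/pve/storage.cfg; a minimal sketch with a hypothetical pool, portal, and IQN (the freenas-proxmox plugin adds the "freenas" provider):

```
zfs: tn-zfs
    pool tank/pve
    portal 192.168.10.50
    target iqn.2005-10.org.freenas.ctl:pve
    iscsiprovider freenas
    content images
    sparse 1
    blocksize 8k
```

With this scheme PVE creates one zvol per VM disk on the TrueNAS side, so disks stay thin-provisioned (sparse) and snapshots work.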
Anyway, I'm here to hear about your experience.
 
RaidZ1 over SAS with spare and log/cache over SSD
Was that the configuration you had for your datastore when using vSphere? This is the worst possible configuration, both in terms of performance and fault tolerance.

c) VPS with automated provisioning and HA
If you're serious, you need a lot more legwork. If you don't tune your backing store to your IOPS requirements, you're going to have a lot of unhappy customers.
 
Was that the configuration you had for your datastore when using vSphere? This is the worst possible configuration, both in terms of performance and fault tolerance.

If you're serious, you need a lot more legwork. If you don't tune your backing store to your IOPS requirements, you're going to have a lot of unhappy customers.
Any advice? I am going to reconstruct the storage and the connection between Proxmox and TrueNAS.
 
Any advice?
For starters:

Is there any best practice for connecting Proxmox to TrueNAS via iSCSI?

This is backwards. Don't pick the solution before defining the problem. To do that:

1. Is all your workload of equal importance? For the important stuff, what are the consequences of an outage or storage loss?
2. For the important stuff, do you have minimum accepted performance criteria - a value below which the business will experience adverse effects?

Depending on the answers, you can design a solution more conducive to success. TrueNAS is a fine product, but it is single-headed, which makes it a liability; if you're talking about TrueNAS Enterprise, I would pose the question to their support, since that's what you'd be paying for. In any case, understand that spinning disks (even 10K) are VERY POOR in terms of IOPS capability, and putting them in a single parity stripeset only makes that worse; this, by FAR, will impact your performance more than iSCSI vs NFS. If you insist on using a legacy filer such as TrueNAS, consider using all SSDs, in a striped mirror set.
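
A minimal sketch of such a striped mirror (RAID10-style) pool, with hypothetical disk names - six SSDs as three mirrored pairs:

```
# IOPS scale with the number of vdevs; each pair tolerates one disk failure
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5
```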
 
After reading all your advice and some other material, and doing some tests, I changed the pool configuration.

Here are my test results:
[screenshot: dd benchmark results for the pool, NFS, and iSCSI]
* pool results come from the dd command in the TrueNAS shell
* NFS and iSCSI results come from the dd command in the Proxmox shell
* servers are connected via 10G links through a Cisco Nexus 3064 (jumbo frames enabled)
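
The exact dd invocations were not given; a plausible sketch (GNU dd syntax, i.e. TrueNAS SCALE / Linux, with hypothetical paths):

```
# Against the pool, on TrueNAS (oflag=direct bypasses the page cache)
dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=10240 oflag=direct

# Against the NFS mount (or an iSCSI-backed LV), on Proxmox
dd if=/dev/zero of=/mnt/pve/tn-nfs/ddtest bs=1M count=10240 oflag=direct

# Note: /dev/zero compresses extremely well on ZFS, which can inflate
# apparent throughput if compression is enabled on the dataset
```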

So I decided to create a pool with this configuration (see the zpool sketch after this list):
3 * RAIDZ2 vdevs (each containing 6 disks)
2 * log
2 * cache
2 * spare
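
That layout as a zpool command, with hypothetical disk names:

```
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5 \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    log mirror nvd0 nvd1 \
    cache nvd2 nvd3 \
    spare da18 da19
```

Mirroring the log devices protects in-flight sync writes; cache and spare devices need no redundancy.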


But the problem is that the results in Proxmox are awful :(
 