Add physical storage to cluster

waryongc

New Member
Feb 18, 2025
Hi, a Korean newbie here.

Just a question: could I add physical storage as a node in a cluster?
As far as I know it's impossible, because clustering requires the PVE OS.

It's a shame, but I've only been working as a hardware engineer for less than a year.
I hope someone can tell me the truth.

Thx.
 
Hi waryongc,

what kind of storage do you have?

You are right: if you want to join a computer as a PVE cluster node, it will need to run PVE, as this functionality depends on the PVE toolchain.

But you can certainly attach your storage to your existing PVE cluster, as long as it offers a supported protocol -- see the corresponding documentation for an overview of what you can connect to a PVE cluster [0].

Best regards,
Daniel

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_storage
 
First of all, thanks for your reply.
Sorry for the late response.

You mean
=========
Datacenter
ㄴ Proxmox1
ㄴ Proxmox2
ㄴ NewPhysicalStorage
=========
is possible,

if I buy physical storage from the list you provided?
Is that right?

Thx always
 
Can you be specific about what you mean by "physical storage"?
For example, you can't just add a JBOD. But you can add a NAS (i.e. NFS) so that it is visible to the entire cluster.
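As a concrete sketch (the server address, export path, and storage ID below are invented placeholders), attaching an NFS share to the whole cluster is a single datacenter-level command:

```shell
# Register an NFS export as cluster-wide storage; every PVE node
# mounts it automatically, and VMs on any node can use it.
pvesm add nfs nas-share --server 192.0.2.20 --export /export/pve --content images,backup
```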


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Oh, I mean large physical storage like a server, not a small disk or external hard drive.
If you say JBOD, do you mean something like DAS?

Thanks a lot for the explanation.
 
It's not about the server; it's about the protocol with which your PVE cluster will access the storage.
There are network protocols: NFS, CIFS, iSCSI, NVMe/TCP, etc.
There are also legacy, older ways, i.e. direct SCSI and FC.

Storage in PVE is a pretty broad topic; perhaps the best way to start is to plug "proxmox storage types" into a YouTube search.


 
JBOD normally means one physical disk, not in RAID, even if it is attached to a RAID controller. One disk would not be shareable to the other nodes in the cluster.
JBOD literally stands for "just a bunch of disks" (or "just a bunch of drives").
A JBOD enclosure with multiple SAS ports can be connected to multiple hosts.

Once the disks are visible to multiple hosts, they can be assembled into an LVM PV/VG -- a standard, PVE-supported shared-LVM architecture.

https://www.celestica.com/uploadedF...list/cls_datasheet_titan_G2_final3_040122.pdf
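A rough sketch of that shared-LVM setup (the device path, VG name, and storage ID are placeholders): once the JBOD's disks are visible on every host, you initialize them on one node and register the volume group cluster-wide:

```shell
# On ONE node only: initialize the shared disk and build a volume group.
# /dev/sdX stands in for a disk the JBOD exposes to all hosts.
pvcreate /dev/sdX
vgcreate shared-vg /dev/sdX

# Register the VG as PVE storage; --shared 1 tells the cluster that
# all nodes see the same devices, so PVE coordinates access itself.
pvesm add lvm shared-lvm --vgname shared-vg --content images,rootdir --shared 1
```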


 
I'm having a similar problem. My scenario is:
3 hosts, with HDDs only for the OS, in a cluster
1 HP storage array with 9 TB of SSD over iSCSI, and I want the 9 TB datastore to be shared with the entire cluster for use by VMs
 
Hi @luciano-shartech , welcome to the forum.

The OP of this thread, technically, did not have a problem. The OP asked a theoretical question, which I hope has been answered satisfactorily.

You described your environment, but, technically, did not describe any problems.
The environment you described sounds suitable for basic PVE iSCSI/LVM shared storage usage.

There are many guides, both textual and visual, have you tried them yet? https://www.youtube.com/watch?v=k9o2AHoC36k
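For a scenario like yours, the usual sequence (the portal IP, target IQN, device path, and storage IDs below are placeholders) is to add the iSCSI target at the datacenter level, then layer shared LVM on top of the LUN:

```shell
# 1. Add the iSCSI target; "content none" means the LUNs are not used
#    directly -- they only provide the block device for the next step.
pvesm add iscsi hp-iscsi --portal 192.0.2.10 --target iqn.2001-04.com.example:hp-san --content none

# 2. On ONE node, create a volume group on the LUN (/dev/sdX is a
#    placeholder for the device the target exposes).
pvcreate /dev/sdX
vgcreate hp-vg /dev/sdX

# 3. Register the VG as shared storage, usable by VMs on every host.
pvesm add lvm hp-lvm --vgname hp-vg --content images --shared 1
```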


 
When I add the iSCSI volume and put a checkmark on the LUN option:
[screenshot: 1742416785836.png]

[screenshot: 1742416873241.png]

However, I can't create VMs on those volumes, and if I create an LVM on one of the hosts it works, but it isn't accessible from the other hosts.
 
I would recommend that you open your own thread, as your question has nothing to do with the current one.

That said, it appears that you are checking the "Direct LUN" option, which is a special iSCSI use case whereby you pass the entire LUN through to a particular VM. It is unlikely that this is what you actually want to do.
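For context, a sketch of what the direct-LUN route looks like (the storage ID, portal, IQN, VMID, and especially the volume name are invented for illustration):

```shell
# iSCSI storage whose LUNs are exposed directly as VM disks
pvesm add iscsi san-direct --portal 192.0.2.10 --target iqn.2001-04.com.example:san --content images

# List the LUN volumes the target exposes, then attach one whole LUN
# to a single VM (the volume name below is a made-up example).
pvesm list san-direct
qm set 100 --scsi1 san-direct:0.0.0.scsi-36001405aabbccdd
```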

You also would need to clarify what "create an LVM" means in your particular case.

Here is an article that can help in your understanding of the layers involved: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/


 
which is a special iSCSI use case whereby you pass-through the entire LUN to a particular VM. It is unlikely that this is what you actually want to do.
In my implementations of similar setups, I use this approach almost exclusively, because it allows me to use the native snapshot facilities of the storage in a way that is actually usable: I can remap snapshots as targets and replace the target at the VM level. Not ideal orchestration, but far preferable to the alternative.

Also, I don't have objective benchmarks, but I believe it performs better too.
 
Certainly, there definitely are valid use cases for direct LUNs. My statement was based on my understanding of @luciano-shartech's familiarity with PVE storage technologies, and on the fact that a direct LUN is less "user-friendly" than the LVM option.
I would recommend trying both approaches!


 
Either of @bbgeek17's suggestions would result in storage accessible to the entire cluster. In your example above, you can only map the LUNs you created to a single VM at a time, but that VM would be accessible from any cluster member that has connectivity to your storage provider, so the LUN would show up as an available disk to map on the VM creation screen. Were you to add it as a store instead, you'd need to add it to a volume group first (vgcreate).