Presenting storage volume as secondary drive to VM: I may be overthinking this.

HardyBiscuit

Member
Oct 10, 2022
I am new to Proxmox, but have a lot of experience with ESXi, and most concepts carry over. That being said, I think I may be overthinking what I am trying to do here, since there are so many great options.

I have internal storage set up on ZFS RAID pools, which is where I house all of my primary VM disks. No problem here. I also have an NFS share presented to the host to store ISOs; also, no problem.

Now, I want to create storage that I can use to present a second (third, etc.) volume as a separate disk to the VMs. In ESXi, I usually stick to presenting iSCSI LUNs to the host, then carving those out by adding VMDKs.

I want to do something similar here. I know I could do something similar at the VM level, by attaching whatever tech I choose directly to the VM OS, but I don't like that. I prefer having all of my storage visible and manageable at the hypervisor level.

From what I see, if I present the iSCSI (for instance) to the host, it is meant to be used and formatted at the host level, not the VM level. Am I looking at this wrong or misunderstanding?

I would prefer block storage. It is off-box, on a QNAP NAS (or a variety of other options that support multiple storage models), and I do not intend, nor need, to share it between VMs.

Again, I may be overthinking this, but even after reading the wiki on storage, I am still lost as to what my options are. Thanks!
 
You are right on track.
I know I could do something similar at the VM level, by attaching whatever tech I choose directly to the VM OS
Correct, that's always an option for any NAS-type system, including iSCSI. The benefits: one less layer of management for the hypervisor, the data is not exposed to the hypervisor, and, with on-wire encryption, additional isolation from other guests. All of which may not be important for a home/lab setup.
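For completeness, the in-guest route looks roughly like this with open-iscsi. A minimal sketch; the portal IP and target IQN are placeholders, not values from this thread, and the package name assumes a Debian/Ubuntu guest:

# Inside the guest OS
apt install open-iscsi

# Discover targets offered by the NAS, then log in to one
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2004-04.com.qnap:nas:guestdata -p 192.168.1.50 --login

# The LUN now appears as a plain block device in the guest (check lsblk),
# ready to partition, format, and mount like any local disk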

if I present the iSCSI (for instance) to the host, it is meant to be used and formatted at the host level, not the VM level.
You can utilize the PVE iSCSI storage type to synchronize the raw storage attachment to the hypervisor with other PVE services that might be looking for storage. You would then have the option to layer either LVM or Directory type storage on top, depending on how you want to present your disks to VMs.
Keep in mind that in a PVE cluster, i.e. with shared storage, native iSCSI must be used in conjunction with thick LVM, and thick LVM lacks snapshot support.
If you are not using a cluster, or do not need shared storage, you could use thin LVM.
Another option is to use a filesystem, so you can store VM disks as files (qcow2 or raw). However, none of the natively supported PVE filesystems are cluster-aware.
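As a rough sketch of that host-side path (the storage IDs, portal IP, target IQN, and device path below are all placeholders; verify the actual LUN device with lsblk before touching it):

# 1) Register the iSCSI target with PVE so the node(s) log in automatically
#    (content "none": the LUN is only a base device, not used directly)
pvesm add iscsi qnap-iscsi --portal 192.168.1.50 --target iqn.2004-04.com.qnap:nas:vmstore --content none

# 2) Initialize the LUN for LVM and create a volume group on it
pvcreate /dev/sdb
vgcreate vg_qnap /dev/sdb

# 3) Register thick LVM storage on top of that VG; --shared 1 matters once you cluster
pvesm add lvm qnap-lvm --vgname vg_qnap --shared 1 --content images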

So you really need to answer:
- Are you running a PVE cluster, and do you need shared storage between cluster members for VM HA?

If yes: thick LVM is your only out-of-the-box supported option.
If no: thin LVM, thick LVM, or a filesystem (ext4, etc.)
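For the single-node case, the thin-LVM variant is just a thin pool inside that same volume group. A sketch, with the size and names as placeholders:

# Carve a thin pool out of the VG (size to taste; leave headroom for metadata)
lvcreate --type thin-pool -L 500G -n thinpool vg_qnap

# Register it as thin-LVM storage (do NOT mark thin LVM as shared in a cluster)
pvesm add lvmthin qnap-thin --vgname vg_qnap --thinpool thinpool --content images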


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

Thanks for the detailed answer. So, I am not currently on a cluster, but plan on being in one (for HA, primarily) sometime at the beginning of next year, so it's definitely something to think about.

Also, posting this and reading through your answer made me realize that just because I have "always" done something a particular way in the past doesn't mean that is the best thing going forward. In fact, I only do it that way because, well, that is what some architect decided years before I started at my company and that's just how it is done.

Would I be able to pull an NFS share into PVE as disk-image storage and then use it (with the supported file types, of course) to back disk images for the VMs? Is there any benefit to doing this over direct-to-VM NFS attachment? How would it affect snapshotting? Actually, I just read that snapshots aren't available with NFS, which sort of makes sense. Oh, wait... went back again and realized that they are supported if I use qcow2, which I have no problem doing. So I guess back to my question: is hypervisor-attached NFS with a qcow2 image file used as a VM disk better than VM-attached NFS?
 
So I guess back to my question: is hypervisor-attached NFS with a qcow2 image file used as a VM disk better than VM-attached NFS?
An NFS share attached inside the VM cannot be the location of the VM's root disk, and hence won't give you HA VM movement. As a secondary data disk, it's completely fine.

If you plan on using NFS as part of the PVE HA story, then you need hypervisor-attached NFS, same as with VMware. With a qcow2-type disk located on NFS, you will get snapshots.
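A minimal sketch of that flow (the storage ID, server address, export path, and VM ID are placeholders):

# Register the NFS export with PVE as image storage
pvesm add nfs qnap-nfs --server 192.168.1.50 --export /share/vmdata --content images

# Attach a new 32 GB qcow2 volume from that storage to VM 100 as a second disk
qm set 100 --scsi1 qnap-nfs:32,format=qcow2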


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
