Shared Storage with FC-SAN

How do you write a qcow-format image directly to a raw disk and keep the QCOW addressing?
I should have been clearer: you would have a filesystem (ext4, probably). The challenge would be to retain the filesystem journal on migration; a process would be needed to force an umount/remount on the target node during migration before bringing up the VM, and chances are this can't be done live.
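For illustration only, a rough sketch of that layering with placeholder VG/LV and mount-point names; not a supported workflow:

Code:
# One-time setup on the node currently running the VM (placeholder names):
mkfs.ext4 /dev/shared_vg/vm-100-fs
mkdir -p /mnt/vm-100
mount /dev/shared_vg/vm-100-fs /mnt/vm-100
qemu-img create -f qcow2 /mnt/vm-100/vm-100-disk-0.qcow2 64G

# On migration the filesystem (and its journal) has to follow the VM:
umount /mnt/vm-100                             # on the source node, after the VM stops
mount /dev/shared_vg/vm-100-fs /mnt/vm-100     # on the target node, before the VM starts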

This has to be carefully fenced.
The presumption here is that only PVE hosts have access to the storage; only the host actively accessing the vdisk would normally attempt to use it, and the rest naturally ignore it. Proper LUN mapping/masking needs to be followed just as with any other environment; I don't *think* you'd need any further controls.

The alternative to this would be to write your own snapshotting mechanism, assuming the back-end store has an API, similar to how PVE does ZFS over iSCSI, and then submit it for inclusion into PVE ;)
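For context, the ZFS-over-iSCSI plugin essentially drives the storage box over SSH with plain zfs commands; a SAN-specific plugin would have to do the equivalent against the array's API. Hypothetical host and dataset names:

Code:
# What snapshot operations boil down to for ZFS over iSCSI (hypothetical names);
# a SAN plugin would issue the equivalent calls against the array's API instead:
ssh root@storage-host zfs snapshot tank/vm-100-disk-0@pre-upgrade
ssh root@storage-host zfs rollback tank/vm-100-disk-0@pre-upgrade
ssh root@storage-host zfs destroy tank/vm-100-disk-0@pre-upgrade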
 
I should have been clearer: you would have a filesystem (ext4, probably). The challenge would be to retain the filesystem journal on migration; a process would be needed to force an umount/remount on the target node during migration before bringing up the VM, and chances are this can't be done live.
Why try to fit a square peg into a round hole? If a filesystem layer is introduced, might as well use something that was designed for this use case, i.e. OCFS.
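A rough outline of that route, assuming the FC LUN shows up as /dev/mapper/san-lun on every node and the O2CB cluster stack is already configured; all names are placeholders:

Code:
# Format the shared LUN once, with enough node slots for the cluster (run on one node):
mkfs.ocfs2 -L pve-shared -N 4 /dev/mapper/san-lun

# Mount it at the same path on every PVE node:
mkdir -p /mnt/ocfs2
mount -t ocfs2 /dev/mapper/san-lun /mnt/ocfs2

# Register it as a shared directory storage that can hold qcow2 images:
pvesm add dir san-ocfs2 --path /mnt/ocfs2 --shared 1 --content images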
The alternative to this would be to write your own snapshotting mechanism, assuming the back-end store has an API, similar to how PVE does ZFS over iSCSI, and then submit it for inclusion into PVE
Yep, inevitably, someone needs to do the work. We did it for Blockbridge. But we designed our storage to be API-driven when it was still on a napkin. :)


 
and then submit it for inclusion into PVE
You can have your own custom storage plugins. Documentation is currently a bit lacking (something we will hopefully improve soon). But taking a look at the code of existing storage plugins can get you started.
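For anyone who wants to go digging, on a standard PVE install the built-in plugins are plain Perl modules, and third-party plugins are loaded from a separate Custom directory:

Code:
# Built-in storage plugins to use as reference:
ls /usr/share/perl5/PVE/Storage/
less /usr/share/perl5/PVE/Storage/LVMPlugin.pm

# Third-party plugins live in the Custom namespace, e.g.
# /usr/share/perl5/PVE/Storage/Custom/MyStoragePlugin.pm (hypothetical name),
# then restart the API services so it gets picked up:
systemctl restart pvedaemon pveproxy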
 
This has to be carefully fenced. In standard iSCSI/FC setups the underlying raw disk is attached to all servers in the cluster simultaneously. Keeping LVM fenced off is hard enough; now you also have open file locks above it that need to be released.
IMHO the Proxmox cluster (FS) already does the fencing of a guest LV / raw disk (from a shared VG), so that only >one< PVE host can access it at a given time, doesn't it? It should then be agnostic of the VM filesystem.
I also see no difference in "formatting" an LV with raw or qcow within this setup, besides the additional features of qcow.
 
IMHO the Proxmox cluster (FS) already does the fencing of a guest LV / raw disk (from a shared VG), so that only >one< PVE host can access it at a given time, doesn't it? It should then be agnostic of the VM filesystem.
The issue is pretty simple: a raw device is consistent across initiators; a filesystem is not.
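To make that concrete, fencing at the block layer amounts to activating the LV only where the VM runs (placeholder names); the raw device underneath stays consistent across initiators, while a mounted filesystem keeps its journal and page cache in the RAM of whichever host mounted it:

Code:
# Conceptual LV-level fencing on a shared VG (placeholder names):
lvchange -ay /dev/shared_vg/vm-100-disk-0    # activate on the node running the VM
lvchange -an /dev/shared_vg/vm-100-disk-0    # keep it deactivated on all other nodes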

I also see no difference in "formatting" an LV with raw or qcow within this setup, besides the additional features of qcow.
Not sure what you're saying here. You see no difference besides the additional features? Isn't that a "difference"?

Raw can be written at the block level; qcow cannot.
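A quick illustration of the difference, with placeholder device and image names: a raw LV can be touched by any block-level tool, while a qcow2 image needs something that understands the format:

Code:
# Raw: guest data sits directly on the block device.
dd if=/dev/shared_vg/vm-100-disk-0 of=/tmp/bootsector.bin bs=512 count=1

# qcow2: the data is wrapped in the qcow2 container, so you need qemu tooling,
# e.g. exposing it as a block device via NBD (offline use only):
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /mnt/vm-100/vm-100-disk-0.qcow2
# ...inspect /dev/nbd0 and its partitions...
qemu-nbd --disconnect /dev/nbd0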
 
How do you expand the disk? It kind of defeats the purpose of thin provisioning; @bbgeek17, maybe you know?
I gave this a quick try and very quickly ran into I/O read/write errors on a perfectly good disk. I suspect there are quite a few edge cases here. There are discussions on the forum suggesting that as the number of NBD devices scales up, memory management gets dicey and processes get OOM'ed.

As usual, it doesn't just need to work with one device - it needs to work under load at scale. The subset of customers who are running SAN is usually in a business segment and expects a very high degree of stability and reliability.

Good luck

@alexskysilk, to answer your question: since we don't need any of this goop, I don't know without spending time testing it.


 
I gave this a quick try and very quickly ran into I/O read/write errors on a perfectly good disk. I suspect there are quite a few edge cases here. There are discussions on the forum suggesting that as the number of NBD devices scales up, memory management gets dicey and processes get OOM'ed.
I would never use this with NBD devices - this was only an offline example to get quick access to the qcow disk. In a production environment it should only be used with KVM/QEMU/Proxmox PVE and the corresponding storage stack: QCOW2 -> LVs -> iSCSI/FC -> SAN LUN.
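In other words, the layering being described puts qcow2 straight onto an LV carved out of the SAN LUN, with no filesystem in between; a minimal sketch with placeholder names:

Code:
# The stack described above: QCOW2 -> LV -> iSCSI/FC -> SAN LUN (placeholder names):
pvcreate /dev/mapper/san-lun                  # the FC/iSCSI LUN becomes a PV
vgcreate shared_vg /dev/mapper/san-lun        # shared VG visible to all PVE nodes
lvcreate -L 64G -n vm-100-disk-0 shared_vg    # one LV per virtual disk
qemu-img create -f qcow2 /dev/shared_vg/vm-100-disk-0 64G   # qcow2 written directly onto the LV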
 
IMHO the Proxmox cluster (FS) already does the fencing of a guest LV / raw disk (from a shared VG), so that only >one< PVE host can access it at a given time, doesn't it? It should then be agnostic of the VM filesystem.
Most of the time yes, but not always: a disk needs to be read from two nodes in the case of a switchover / live migration. The destination has to open the disk before it is closed on the source. Reading is normally not a problem.


Then I would like to know what this is: a file, a special file, or a filesystem?
First, in Unix(-like) systems, everything is a file. QCOW2 is here another layer on top of the logical layer (LVM), which does similar things and presents another block device to applications able to read it. The file can be read concurrently by multiple processes, yet not written concurrently without the restrictions imposed by the format itself.
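One way to see that layering, assuming a qcow2 image written onto an LV with the placeholder names used earlier in the thread:

Code:
lvs shared_vg                                # the LV is an ordinary block device
qemu-img info /dev/shared_vg/vm-100-disk-0   # qemu recognizes the qcow2 container inside it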
 
So if I use LVM and then convert it to qcow2, how can I use the LVM like before? (The disk was raw.)
 
