which fs on shared storage

Do you have a source or documentation for this (just my curiosity)? My (short) search only unearthed this:

"VMware does not support snapshots of raw disks,"

from here: https://pubs.vmware.com/vsphere-65/...UID-53F65726-A23B-4CF0-A7D5-48E584B88613.html

Just to clarify: that's true, but only if you attach iSCSI LUNs as raw disks directly to the VM (e.g. for performance reasons).

If you have VMFS (Virtual Machine File System) partitions over iSCSI, you can create snapshots. VMFS supports snapshots and concurrent access from all hosts, and it works over normal iSCSI targets.
 
I am using an NFS share from my Synology (2 Proxmox servers, 1 switch and a Synology NAS). Everything works fine except snapshots. When I make a snapshot, the disks (vda1, etc. inside the VM) are corrupted and my VM doesn't start anymore. Sometimes it gives an error like the screenshot, sometimes it gives no error after the snapshot, but the disks are damaged anyway!

What can I do to make this work?

Also, above I read that ZFS over iSCSI will not work with Synology?

I haven't tried it yet with NFS, but maybe you haven't set the right NFS options on the Synology side?
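For reference, on DSM you normally set these options via Control Panel > Shared Folder > Edit > NFS Permissions, which ends up generating an `/etc/exports` entry on the NAS. A hypothetical example (the folder path and subnet below are assumptions, adjust them to your network):

```shell
# Hypothetical /etc/exports entry as DSM would generate it for a shared
# folder "proxmox" exported to the 192.168.1.0/24 subnet:
#   rw              : read-write access
#   sync            : commit writes to disk before replying (safer for VM disks)
#   no_root_squash  : let the Proxmox root user own the files it creates
/volume1/proxmox 192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
```

In particular, missing `no_root_squash` is a common cause of permission trouble when Proxmox writes disk images as root.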
 
If you are trying to set this up, I would suggest:

(a) either you have the expertise to manage it, and therefore set it up;

or, if not,

(b) go with something simpler to set up that you can manage more painlessly. The added complexity is only marginally better in certain cases, maybe, and even then I am not sure it is better. (Why are iSCSI / LVM of interest? Maybe you had issues with snapshots during VM backups, if I read the thread correctly?)

Pure NFS shared storage targets with Proxmox are a very easy and reliable option, supported by most NAS devices (Synology, QNAP, etc.) with the same or less effort than an iSCSI target setup.

Performance will be as good as or better than with iSCSI or LVM / ZFS layers involved, and management will be simpler. You can have the NFS datastore/target used by multiple Proxmox hosts concurrently in a Proxmox cluster.
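As a sketch, registering such an NFS datastore from the Proxmox CLI uses `pvesm` (the storage ID, server IP and export path below are examples, not your actual values):

```shell
# Register an NFS share as a Proxmox storage pool.
# "synology-nfs" is an arbitrary storage ID; the IP and export path
# are placeholders for your Synology's address and shared folder.
pvesm add nfs synology-nfs \
    --server 192.168.1.10 \
    --export /volume1/proxmox \
    --content images,rootdir

# Verify that the new pool shows up and is active
pvesm status
```

The same storage definition is shared cluster-wide, so you only need to add it once.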

Obviously you cannot have realtime concurrent access to the same VM disk images from multiple Proxmox hosts; that would be a different sphere of operation entirely and is unrelated to Proxmox shared-storage back-end targets.

But with NFS storage pools, you can set up an HA cluster failover config if you want (i.e. if a VM dies on ProxNode1 because that node crashes, your VM can auto-restart on ProxNode2, for example).
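A minimal HA example, assuming an existing cluster and a VM with ID 100 (the VMID is an assumption):

```shell
# Put VM 100 under HA management; if its node fails, the cluster
# manager will restart it on another node with access to the
# shared storage.
ha-manager add vm:100

# Optionally limit how many restart/relocate attempts are made
ha-manager set vm:100 --max_restart 2 --max_relocate 2

# Check the HA resource state
ha-manager status
```

Note that HA failover only works because the disk image lives on shared storage that the surviving node can reach.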

So it would be the recommended path, IMHO. There is a lot to be said for "simple, reliable, it works, no drama".

Just my 2 cents! :)

--Tim
 
either you have expertise to manage it, and therefore set it up.
I have no expertise, and that is exactly the reason for my question. Why do you think I am asking for help?
joblack said:
In the Wiki documentation it says:

It is possible to use LVM on top of an iSCSI storage. That way you get a shared LVM storage.

So it should work?
Can someone please advise a newbie on how to set this up?

1st Step = ?
 
Sorry, not to be too pointed, but for clarity, my point is that:

if you are a newbie, you don't want a system that is too complex to manage yourself,

so you would be better off building a solution you can manage.

For a similar reason, the Proxmox authors do not provide an easy way to install Proxmox on software RAID. They basically say: "If you are good at software RAID, you already know how to make this work. And if you don't know how to do software RAID, then the last thing you want is production Proxmox running on software RAID." So it is a self-correcting problem to some extent ;-)

Anyhow. That being said:

NFS will give you a more painless experience, with less hair-pulling and so forth. Hence me saying this again.

But if you really want to press forward with LVM-Thin on top of iSCSI:

I believe the path is hinted at very clearly.

First, looking at Storage - Proxmox VE, you see the links with more info on:
iSCSI >> Storage: iSCSI - Proxmox VE
LVM-Thin >> Storage: LVM Thin - Proxmox VE

Your basic path will be, I think:

Ensure you have an iSCSI target/'export' set up on the device with the bulk block storage: CHAP auth or not, discovery portal/IP, etc.
Presumably, for simplicity, this will be one large blob of disk space you export as a single large block from the iSCSI "server" to your Proxmox node(s).

Then, on the Proxmox nodes:
install the open-iscsi package, which is not present by default to save space; hints for installing it are in the iSCSI link above.
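Concretely, on a standard Proxmox (Debian-based) node this is one package install:

```shell
# Install the open-iscsi initiator on each Proxmox node
apt-get update
apt-get install -y open-iscsi
```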

Once you have the iSCSI component, attach the iSCSI block target, again as per the hints in the iSCSI setup link. Note that you are NOT doing this to give the Proxmox host "a storage pool into which you will put VMs". You are doing it to attach a block device to Linux that happens to be iSCSI-based. If you do this step properly, Proxmox won't know or care that an iSCSI layer is present. All you want this block device for is to give LVM-Thin a place to call home.
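A sketch of the attach step using `iscsiadm` (the portal IP and target IQN below are placeholders for whatever your NAS actually advertises):

```shell
# Ask the iSCSI "server" which targets it offers (portal IP is an example)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the discovered target; the IQN here is a placeholder,
# use the one printed by the discovery command
iscsiadm -m node \
    -T iqn.2000-01.com.synology:nas.target-1 \
    -p 192.168.1.10 --login

# The LUN should now appear as a plain block device, e.g. /dev/sdb
lsblk
```

To survive reboots you would also set the node's startup to automatic (`iscsiadm -m node ... --op update -n node.startup -v automatic`).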

Once you have the block device attached, you can create an LVM-Thin pool on top of it. The target happens to be iSCSI, but LVM-Thin couldn't care less whether it is iSCSI, a local SATA disk, a local SSD, whatever. Follow the hints on the LVM-Thin storage page to proceed with this step.
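The LVM side of that, assuming the iSCSI LUN showed up as `/dev/sdb` (check with `lsblk` first; device name and VG/pool names are arbitrary examples):

```shell
# Mark the iSCSI block device as an LVM physical volume
pvcreate /dev/sdb

# Create a volume group on it
vgcreate vg_iscsi /dev/sdb

# Create a thin pool using most of the VG, leaving a little
# headroom for thin-pool metadata
lvcreate -l 95%VG --thinpool thinpool vg_iscsi
```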

Finally, after doing this, you will have a new LVM-Thin storage pool, which should be sized the same as the underlying iSCSI block if you set up LVM-Thin to use the entire underlying raw block device.

Now you can get your LVM-Thin features in Proxmox (snapshots, etc.) and happy days ensue.
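The last wiring step is telling Proxmox about the thin pool, e.g. with `pvesm` (the storage ID is arbitrary; the VG and pool names assume the earlier LVM example):

```shell
# Register the thin pool as a Proxmox storage; "iscsi-thin" is an
# arbitrary storage ID of your choosing
pvesm add lvmthin iscsi-thin \
    --vgname vg_iscsi \
    --thinpool thinpool \
    --content images,rootdir

# Confirm it appears alongside your other storages
pvesm status
```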

You may repeat this same setup (precisely) on other nodes in a Proxmox cluster, and then they will be able to use the shared storage and do fun things like live migration. In theory. Or maybe not, if there is a bug, a glitch or a misconfiguration, in which case you lose everything in your test VM and start over :)

As a result, I would recommend not doing this on a production system where you have any significant concern for, or attachment to, your VMs / Proxmox host config. Do it with the expectation that it won't go perfectly smoothly until you break things a few times, so plan on destroy-and-redo at least once; that way it is less frustrating if you accidentally delete something. (It is not an accident if it is an expected, highly probable outcome of the learning process.)


Good luck!

Tim
 
