FC SAN Storage & Proxmox

ctsde_markus

Member
Nov 22, 2019
Bavaria
Hi!

We intend to evaluate Proxmox in a company where there is already a V3700 FC SAN storage as shared storage for another virtualisation cluster.

I wonder which shared filesystem I should choose for a LUN on that storage to get all the nifty features like snapshots and incremental backups with the Proxmox Backup Server. According to https://pve.proxmox.com/wiki/Storage it would be ZFS over iSCSI (correct? as FC is merely iSCSI with less overhead). Is it production-ready in 6.4 (I'd prefer to start from there at the moment)?

Is anyone here running something similar in production?

Thanks!
Markus
 
Hi Markus, you've raised a few questions. I'll answer them at a high level; for the details, please refer to another thread from a few days ago:

- The incremental backup functionality of Proxmox Backup Server is completely independent of the file system choice in the Virtual Environment.
- Snapshots in PVE can be implemented in two ways: storage-controlled (Ceph, LVM, Blockbridge) or QEMU-controlled with the qcow2 image format.
- You cannot use ZFS/iSCSI with your IBM SAN. ZFS/iSCSI is a very specific storage implementation that is only applicable to situations where you run a full Linux/FreeBSD OS on your storage "appliance". It must allow SSH, it must run a ZFS toolset that you can control, and it must run a very specific iSCSI daemon that you can control. None of that applies to your SAN.
- The right path for you is to set up and use a clustered filesystem outside of Proxmox control. You can then use the qcow2 image format for most of the benefits (see the sketch after this list). It may have performance implications for you.
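To make that last point concrete, here is a minimal sketch of the shared-directory approach. It assumes a clustered filesystem (e.g. OCFS2 or GFS2) on a SAN LUN is already set up and mounted on every node; the storage name, mount path, VM ID and disk size below are just placeholders:

```
# The clustered filesystem on the SAN LUN is assumed to be mounted
# at /mnt/san-cluster on every PVE node (placeholder path).

# Register it cluster-wide as a shared directory storage (run once on any node):
pvesm add dir san-cluster --path /mnt/san-cluster --content images,rootdir --shared 1

# Create a VM disk on it in qcow2 format so QEMU-controlled snapshots work
# (placeholder: 32 GiB disk for VM 100):
qm set 100 --scsi1 san-cluster:32,format=qcow2

# Take a QEMU-controlled snapshot of that VM:
qm snapshot 100 before-upgrade
```

The qcow2 format is what provides the snapshot capability here, since the SAN itself has no PVE-integrated snapshot support; that is also where the performance trade-off mentioned above comes from.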

For more detailed information please take a look here:
https://forum.proxmox.com/threads/fc-san-with-proxmox-cluster.96372/

edited: EMC>IBM

Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
@bbgeek17 thanks for your competent answer!

A colleague pointed me to the other thread too (I am probably the worst when it comes to searching the internet). That's very unfortunate, as I don't want to lean too far out of the window, as we say here ("go out on a limb" is probably the English equivalent).

From all I have learnt so far, it would currently be best to have:

* Ceph cluster with at least 3, better 5 nodes
* Proxmox cluster with 4 nodes using the Ceph cluster as storage

OR

* Proxmox cluster with 4 or 5 nodes, all serving as the Ceph cluster simultaneously

Then I MAY be somewhat limited by the 10 Gbit network that is necessary between the nodes, which also has more overhead than the FC protocol.
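Just for my own reference, this is roughly what the hyperconverged variant would look like on PVE 6.4, if I read the docs correctly; the cluster network, device name and pool name are only placeholders:

```
# On every node that should run Ceph (PVE 6.4):
pveceph install                     # install the Ceph packages

# Once, on the first node (placeholder cluster network for Ceph traffic):
pveceph init --network 10.10.10.0/24

# On each node: create a monitor and turn a spare disk into an OSD
pveceph mon create
pveceph osd create /dev/sdb         # placeholder device

# Create a replicated pool and add it to PVE as RBD storage:
pveceph pool create vm-pool
pvesm add rbd vm-pool --pool vm-pool --content images,rootdir
```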

So I guess we will stick with the already implemented solution (from the most prominent Proxmox competitor) this time, keep the current storage, and first upgrade the compute capacity for now.

Your insights are highly appreciated anyway! Thanks again!

Bye
Markus
 
You are welcome Markus!

Another option is to take your existing SAN, create one or two large LUNs, and attach them to two FC- or SAS-connected hosts running a software-defined storage product such as Blockbridge (https://www.blockbridge.com/architectures/).

Since we provide a Proxmox storage plugin that integrates with all of the Proxmox storage functionality (snapshots, clones, volume creation, etc.), we could front-end your SAN and let Proxmox flexibly carve up the one large pool from the SAN.

You would be able to use your existing hardware investment in the SAN and you would only need 2 physical servers for this setup as opposed to 3-5 with Ceph.


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox