Fibre Channel, shared storage, how?

Why bother with FC at all then? The only reason to even know that FCoE exists is if you're trying to bridge your old FC stuff into the 21st century. If you're building a new cluster and you're using Ethernet, avoid FC altogether and use iSCSI.

Isn't FC(oE) faster than iSCSI over 10GE? Also, wouldn't it have the advantage of not requiring an "extra" software layer? This is all new to me and I'm just theorizing right now; I'm waiting for the hardware to be delivered.
 
At least the FCoE I know needs some software in user space to configure the kernel stuff - same as the iSCSI stuff - so no difference at all.
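For comparison, this is roughly what the userspace side looks like on both; a minimal sketch assuming fcoe-utils and open-iscsi are installed, and the interface name and portal IP are made up:

```
# FCoE (fcoe-utils): copy the sample per-interface config and start the fcoe service
# (eth2 is just an example name for the CNA-facing NIC)
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
systemctl enable --now fcoe
fcoeadm -i                                        # list FCoE interfaces / discovered fabrics

# iSCSI (open-iscsi): discover the target and log in (192.168.10.10 is an example portal)
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
iscsiadm -m node --login
```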

Using multipath with iSCSI is very simple: just use e.g. two or four cards and put different IPs (from the same net) on each card, or bond two of them, and use both IPs on the initiator as portals. Multipath works out of the box after this. Multipath should always use multiple physical links and switches. If you do not have those, you don't need it :-D
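A minimal initiator-side sketch with open-iscsi and multipath-tools, assuming two portal IPs on the target (the addresses are placeholders):

```
# Discover the same target through both portal IPs (one per physical path)
iscsiadm -m discovery -t sendtargets -p 10.0.0.11
iscsiadm -m discovery -t sendtargets -p 10.0.0.12

# Log in on all discovered paths
iscsiadm -m node --login

# multipath-tools folds the duplicate /dev/sdX devices into a single map
multipath -ll
```

Then you point your storage (LVM, ZFS, whatever) at the /dev/mapper map instead of the individual disks.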
 
Isn't FC(oE) faster than iSCSI over 10GE?
Maybe, but not likely. On the flip side, iSCSI is supported, developed, and maintained by everyone. FCoE... isn't. I'll take stability and supportability over "performance" any day (and I put performance in quotes because any performance differences are theoretical until you benchmark your application.)

Also, wouldn't it have the advantage of not requiring an "extra" software layer?
Not sure what you mean by this. Both methods are hardware-assisted encapsulations.
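On the benchmarking point: once the hardware is there, something like fio against the attached LUN gives you real numbers instead of theory. The device path and job parameters below are just placeholders:

```
# Non-destructive random-read test against the multipath device
fio --name=randread --filename=/dev/mapper/mpatha --direct=1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
```

Run the same job against both setups (FCoE LUN vs. iSCSI LUN) and compare; write tests are destructive, so only run those against an empty LUN.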

Yes, I am planning to have HA
Then you want Ceph.
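On Proxmox that's mostly the pveceph tooling; a rough sketch, assuming three or more nodes with a spare disk each for OSDs (the network and device names are examples):

```
# On every node: install the Ceph packages
pveceph install

# On the first node: initialise Ceph with the dedicated cluster network
pveceph init --network 10.10.10.0/24

# On (at least) three nodes: create monitors, then one OSD per spare disk
pveceph mon create
pveceph osd create /dev/sdb

# Finally, a pool that VM disks can live on
pveceph pool create vmstore
```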
 
I was under the (probably wrong) impression that FCoE processing is offloaded to the actual card, so the CPU / kernel (?) doesn't do much extra work to move the data back and forth.
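For when the hardware arrives, whether the card really offloads FCoE is easy enough to check; the interface name below is made up:

```
# Full CNAs usually show up as Fibre Channel HBAs; software FCoE stays a plain NIC
lspci | grep -i 'fibre channel'

# ethtool shows which FCoE-related offload features the driver exposes
ethtool -k eth2 | grep -i fcoe
```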

I'll keep that in mind.

I would LOVE to do Ceph, but the nodes we have ordered only have 3 drive bays each, so I can't get much in the way of storage, and we already ordered the drives.
 
Just a single SuperMicro server with 12 bays + 2 internal for storage (I don't have the exact model on hand right now). Unfortunately the client didn't have enough budget for more, so he chose to cheap out on the storage part, against my advice... I explained the issue(s), so that's gonna be on him.

Planning on having (rough zpool sketch after the list):
  • 1 pool with 6 x 3 TB HDDs (in a 2-vdev raidz2 or mirror configuration)
  • 1 pool with 4 x 480 GB SSDs (raidz1)
  • There will be 2 smaller SSDs (120 or 180 GB) for the OS root (/), with roughly 40 GB for the actual OS and the rest partitioned separately as L2ARC for the HDD pool.
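For reference, a sketch of how that could look as zpool commands; the device names, pool names, and partition numbers are placeholders, and I've read the mirror variant as three mirrored pairs:

```
# HDD pool: 6 x 3 TB as three mirror vdevs (alternative: one 6-disk raidz2 vdev)
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

# SSD pool: 4 x 480 GB in raidz1
zpool create fast raidz1 sdg sdh sdi sdj

# Leftover partitions on the two boot SSDs as L2ARC for the HDD pool
zpool add tank cache sdk2 sdl2
```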
 
Oh, so no HA component then, but yes, you (or your customer) get what you pay for :-D
It's so sad that they always save money on the storage side; that's possibly the worst thing you can do to performance.
 
