WANTED - Proxmox / Ceph consultancy

leovinci81

New Member
Jun 7, 2017
Hi,

I'm urgently looking for some consultancy for the design of a 3-node cluster with Proxmox.

Main goal: low power / high availability
VMs: 4 Windows virtual machines / 3 Linux machines (limited usage, no heavy users)
Current hardware: 3 x Supermicro X10SDV-12C-TLN4F motherboards.
Disks: capacity +/- 8 TB over the entire 3-node cluster, preferably a mix of SSD & HDD in order to keep costs low
Storage: local Ceph environment
Ceph backbone connection: 2 x Cisco WS-C2960X-48LPD-L switches available, interconnected via FlexStack; 4 SFP+ ports available to build the 10G Ceph backbone.

I expect a couple of hours of consultancy to validate the bill of materials, plus some meetings.
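
A rough sketch of what that backbone could look like per node, assuming a dedicated Ceph subnet of 10.10.10.0/24 and an interface name of eno3 (both are placeholders, not taken from the post above):

# /etc/network/interfaces (excerpt) -- one 10G port per node reserved for Ceph
auto eno3
iface eno3 inet static
        address 10.10.10.11        # node 1; use .12 and .13 on the other nodes
        netmask 255.255.255.0

In /etc/pve/ceph.conf the same subnet would then be set as "public network" and "cluster network", so that OSD and monitor traffic stays on the 10G links instead of the 1G management network.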
 
You can go for a Proxmox VE support agreement (e.g. the "Basic" level); then you can discuss your setup with our support team in detail.

As a good alternative, you can just ask your questions here in the forum.
 
Hi Tom,

Thanks for your feedback.
Are you part of this support team? ;-))

A while ago you gave me some good recommendations on this forum already, but as I don't just want to exploit your brain, I was wondering if you could provide some private consultancy...?

Basically, I'm just looking for some confirmations...

* Can I use an NVMe drive in each node, running Proxmox VE on it plus the journal for Ceph?
* Can I then use regular WD Red 2 TB drives for my Ceph OSDs? (4 x 2 TB)
* Any previous experience with this kind of setup regarding performance for Windows virtual machines? (How does it compare to ESXi?)
* My 10 Gig backbone, how crucial is this? Can regular 1 Gig also be used?
 
A while ago you gave me some good recommendations on this forum already, but as I don't just want to exploit your brain, I was wondering if you could provide some private consultancy...?

A private consultancy is possible as soon as you go for the mentioned subscriptions. If not, we use the forum.

Basically, I'm just looking for some confirmations...
* Can I use an NVMe drive in each node, running Proxmox VE on it plus the journal for Ceph?

I do not recommend installing Proxmox VE and the journal on the same NVMe.

* Can I then use regular WD Red 2 TB drives for my Ceph OSDs? (4 x 2 TB)

Yes, but these are very slow drives, and you get the performance you pay for.

* Any previous experience with this kind of setup regarding performance for Windows virtual machines? (How does it compare to ESXi?)

Windows runs great on Proxmox VE, so this question is not really related to the storage.

* My 10 Gig backbone, how crucial is this? Can regular 1 Gig also be used?

You can use 1 Gbit for Ceph, but it will be quite slow and the network will be your bottleneck. Basically, use fast enterprise-class SSDs and a 10 Gbit network and you will get decent performance for a good price.
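
For illustration only, a minimal sketch of how an OSD is created with its journal/DB on a separate NVMe in Proxmox VE. The device paths are placeholders, and the exact option depends on the release (recent BlueStore-based versions use --db_dev; the older FileStore-era tooling used -journal_dev):

# one OSD per spinning disk, with the RocksDB/journal on a dedicated NVMe
# (the device paths /dev/sdb and /dev/nvme1n1 are placeholders)
pveceph osd create /dev/sdb --db_dev /dev/nvme1n1

# rough equivalent on older FileStore-based releases:
# pveceph createosd /dev/sdb -journal_dev /dev/nvme1n1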
 
Thx,

I will go for the subscription.
Is a combination of SSD and HDD OSD drives possible?

So that I can dedicate a VM that needs high performance to SSD?

Thx
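
A rough sketch of how that SSD/HDD split is usually done with Ceph device classes (available since Ceph Luminous); the pool names and placement-group counts below are placeholders:

# CRUSH rules that only pick OSDs of one device class
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd

# separate pools bound to those rules
ceph osd pool create vm-ssd 64 64 replicated replicated-ssd
ceph osd pool create vm-hdd 128 128 replicated replicated-hdd

Each pool can then be added as its own RBD storage in Proxmox VE, and the disks of a VM that needs high performance are simply placed on the SSD-backed storage.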

 
Honestly, any type of modern network-based storage will require something greater than 1 Gbps. Even though Ceph/NFS/iSCSI works over 1 Gbps, it works much better over >=10 Gbps with any workload heavier than light usage.

If 10 Gbps Ethernet gear is too expensive, consider Infiniband utilizing IPoIB. If you are not adamant about buying new gear, 40 Gbps Infiniband can be had pretty inexpensively these days, and even low-end Infiniband gear easily pushes 20 Gbps in real-world scenarios at a reasonable cost. Even new 56 Gbps Infiniband gear is generally 50% less expensive than equivalent 10 Gbps Ethernet.

I recently switched my NFS datastores and backup storage hosts from 1 Gbps Ethernet to 40 Gbps IPoIB Infiniband. While Infiniband is not as plug-and-play as Ethernet and takes a bit of knob turning, the bandwidth difference from 1 Gbps is amazing and everything runs much better. 10 Gbps Ethernet is definitely easier to set up initially, but once operational the speed of both media is equally impressive.

Just my $.02. YMMV.
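
As a hedged example of the "knob turning" mentioned above: on Debian/Proxmox VE, IPoIB is typically brought up in connected mode with a large MTU. The interface name ib0 is the usual default; the address below simply reuses the subnet seen in the iperf output later in this thread:

# /etc/network/interfaces (excerpt) -- IPoIB in connected mode
# requires the ib_ipoib module and a subnet manager (e.g. opensm) somewhere on the fabric
auto ib0
iface ib0 inet static
        address 192.168.168.98
        netmask 255.255.255.0
        mtu 65520
        pre-up echo connected > /sys/class/net/ib0/mode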
 
Wishful thinking, unfortunately. IPoIB can only use one channel for a maximum of 10 Gbps. To get more you must use RDMA. Still, the advice is sound, as it's completely feasible to use IPoIB and achieve 10 GbE speeds.

Seems to work okay for me:
# iperf -c 192.168.168.100 -P4 -w8M
------------------------------------------------------------
Client connecting to 192.168.168.100, TCP port 5001
TCP window size: 8.00 MByte
------------------------------------------------------------
[ 6] local 192.168.168.98 port 64676 connected with 192.168.168.100 port 5001
[ 3] local 192.168.168.98 port 64670 connected with 192.168.168.100 port 5001
[ 4] local 192.168.168.98 port 64672 connected with 192.168.168.100 port 5001
[ 5] local 192.168.168.98 port 64674 connected with 192.168.168.100 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 6.46 GBytes 5.55 Gbits/sec
[ 3] 0.0-10.0 sec 5.44 GBytes 4.67 Gbits/sec
[ 4] 0.0-10.0 sec 5.91 GBytes 5.08 Gbits/sec
[ 5] 0.0-10.0 sec 6.55 GBytes 5.63 Gbits/sec
[SUM] 0.0-10.0 sec 24.4 GBytes 20.9 Gbits/sec
 
I see what you mean :)

That's 4 connections, each reaching 5-6 Gbit/s, not 1 connection reaching 20... I completely agree that helps, but your latency is still no faster (and likely slower) than a single 10 Gbit connection, so the benefits are marginal.
 

Just for reference, here is 10Gbps Ethernet latency during iperf testing:

~$ ping 10.12.23.27
PING 10.12.23.27 (10.12.23.27) 56(84) bytes of data.
64 bytes from 10.12.23.27: icmp_seq=1 ttl=64 time=0.154 ms
64 bytes from 10.12.23.27: icmp_seq=2 ttl=64 time=0.146 ms
...
--- 10.12.23.27 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9123ms
rtt min/avg/max/mdev = 0.144/0.165/0.258/0.035 ms


Here's 40Gbps IPoIB latency during iperf testing:

~# ping 192.168.168.100
PING 192.168.168.100 (192.168.168.100) 56(84) bytes of data.
64 bytes from 192.168.168.100: icmp_seq=1 ttl=64 time=0.195 ms
64 bytes from 192.168.168.100: icmp_seq=2 ttl=64 time=0.184 ms
...
--- 192.168.168.100 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9217ms
rtt min/avg/max/mdev = 0.124/0.169/0.197/0.027 ms


As you can see, the numbers are just about equal, especially when running Ceph/NFS/iSCSI network storage. I wouldn't say 3x the bandwidth at 50% of the cost with similar latency numbers is a marginal benefit. I would say there are tangible alternatives to 10 Gbps Ethernet depending upon your environment, budget, and skill set. Ethernet is unquestionably easier to deploy. Infiniband is *usually* cheaper and *usually* faster in my experience. It all depends on your requirements.
 
No argument. Just pointing out that you're still limited to the latency of a single 10 Gbps link.
 
Hi Tom,

Just FYI, I submitted a ticket & purchased support; your insights/help will be highly appreciated.

Thx!
 
Hi Tom,

I submitted ticket UDS-962-52109. Any chance you could give it a quick look? It would be nice, as you were able to consult me on previous forum threads.

Thx,

Thomas

