Request Support to Proxmox team for GlusterFS

It seems they still don't have snapshots :-(

Hi Dietmar and all

Pardon my ignorance, but can anyone explain the importance of snapshots for GlusterFS in easy terms?
I want to know why this feature is so important.

Best regards
Cesar
 
Hi to all

Since my question fits the context of this thread, I think I can ask it here:

In terms of networking, is it possible to apply the configuration shown in this diagram and have it work well?
(The goal is to gain network speed and get HA on the network for communication with these storage servers.)

(Attached diagram: Storages-PVEs-Bond0.JPG)

Best regards
Cesar

Re-Edit: In theory I know that I cannot use "bond balance-rr" if there is a single unmanaged switch in the middle,
because packets can arrive out of order due to delays generated by the switch itself, and with 2 switches it may be even worse or riskier.
 

Well, it's not specific to Gluster, but all modern storages have snapshot features, which are great for doing tests, rollbacks, etc...
Also, snapshots are used to create base images and to do linked clones.
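
In case a concrete example helps, here is a rough sketch of those two uses with qemu-img on a qcow2 image. The file and snapshot names are invented for illustration, not taken from this thread:

    # snapshot / rollback: take an internal snapshot before a risky change
    qemu-img snapshot -c before-upgrade vm-100-disk-1.qcow2
    qemu-img snapshot -l vm-100-disk-1.qcow2                   # list snapshots
    qemu-img snapshot -a before-upgrade vm-100-disk-1.qcow2    # roll back to it

    # linked clone: a new image that only stores the differences from a base image
    qemu-img create -f qcow2 -b base-template.qcow2 linked-clone.qcow2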
 

Yep, this topology is the way you'd want to go. If possible go for 10gb ethernet or infiniband on your storage backend.
 

Thanks, Spirit, for dispelling my doubt.

And if it is not too much trouble, could you clear up another question that I have?
(This question is more important to me; it has to do with PVE and the speed with Ceph or GlusterFS.)
Please see this link:
http://forum.proxmox.com/threads/11...-Proxmox-team-for-GlusterFS?p=77982#post77982

A hug
Cesar
 

Thanks Kyc

I also think that 10gb ethernet or infiniband would be much better, but my question is based on ethernet because in this country I can buy Intel 10gb ethernet cards.

Please let me ask a couple of questions:

1- Have you tested this scenario in production environments? Or,
2- How do you know the network packets will not get mixed up incorrectly on the same bond?
Note- Because in theory I know that I cannot use "bond balance-rr" if there is a single unmanaged switch in the middle,
because packets can arrive out of order due to delays generated by the switch itself, and with 2 switches it may be even worse or riskier.

Best regards
Cesar
 
This is pretty much what I'm running in production.

In my environment everything is on 1gb ethernet, the storage servers (Gluster) have 8x 1gb bonded connections in LACP (server and switch need configuration), the Proxmox servers have 2x 1gb bonded in XOR and things run pretty well.

If I were to change anything I would put the storage network on 2x 10gb ethernet in active/passive. There's nothing really wrong with the 1gb network, it's fast enough for right now, but with future growth and from a cabling perspective the 10gb would have been a better plan. I'm not too concerned with my rollout as we have a 2-year lease program with our hardware providers, so I don't have to live with procurement mistakes for very long.

Edit:

Sorry, I didn't address your second point.

From what I recall, in balance-rr only your outgoing packets will be balanced; incoming packets are returned to the same interface they were sent from. I tested with balance-rr and got mixed results, but balance-xor was pretty positive. If you can, I'd strongly suggest getting a switch that can do LACP; it's far easier to set up and you get expected load distribution as a result. The only reason I'm using balance-xor is that I've run out of port-channels on my switch (it only has 6).
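
For reference, here is a rough sketch of what a 2x 1gb balance-xor bond like the one Kyc describes could look like in /etc/network/interfaces on a Debian-based Proxmox node. The interface names and addresses are invented examples, not taken from this thread:

    # example NICs eth0/eth1 bonded in balance-xor (no switch configuration needed)
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode balance-xor
        bond-miimon 100

    # Proxmox bridge on top of the bond (the address is an example)
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

    # with a managed switch, LACP would be configured instead with:
    #   bond-mode 802.3ad
    #   bond-xmit-hash-policy layer3+4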
 

An LACP switch will be necessary;
I will study more about this.

Thanks Kyc !!! :D

- Just one question about your strategy:
If you are using balance-xor, and the Gluster client has to transmit each write to the Gluster servers in order, don't you end up hitting your Gluster servers by transmitting the data twice?

Best regards
Cesar
 

I'm pretty sure that in XOR the bonding module does MAC address matching, so that only one NIC is used when communicating to any specific destination MAC. When I did my testing with multiple streams I didn't get full transfer speeds with XOR, I got around 140MB/s, but I was topping out at 60MB/s using balance-rr and I was getting packet loss.

Regardless of what you're trying to do, on 1GB links you'll never exceed 120MB/s on a single transfer. Even with LACP you can't exceed the transfer speed of a single link with TCP traffic.

Your mileage may vary, I'm using Intel I350 quad port adapters for each server so you might get better results from another vendor. I'm also using the stock kernel modules that Proxmox provides, might get better results if I compile directly from Intel.
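
For reference, the Linux bonding driver reports what a bond is actually doing in /proc/net/bonding; the commands below assume a bond named bond0 and are just a sketch for checking the behaviour Kyc describes:

    cat /proc/net/bonding/bond0    # shows bonding mode, transmit hash policy and slave state

    # balance-xor and 802.3ad hash on layer2 (MAC addresses) by default;
    # layer3+4 hashes on IP address and port, so separate TCP streams to the
    # same peer can be spread across different links:
    #   bond-xmit-hash-policy layer3+4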
 

Thanks for sharing your experiences, Kyc.
After reading my docs and hearing from you, I can see that bonding is a big topic.

Best regards
Cesar

Re-Edit: Mr. Tom of the PVE team told me that PVE has the latest driver versions for the Intel NICs; you can corroborate it by running "lshw" from the CLI and comparing against the Intel web portal. I did that after buying several Intel PRO/1000 PT Dual Port Server Adapters, and I saw that he was right.
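
For anyone who wants to do the same check, these are common commands for verifying which driver and version a NIC is using (eth0 and e1000e are only example names; substitute your own interface and module):

    lshw -class network                    # lists each NIC with driver= and driverversion=
    ethtool -i eth0                        # driver, version and firmware of one interface
    modinfo e1000e | grep -i "^version"    # version of the kernel module itself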
 

Good to know that my laziness hasn't cost me any speed ;)
 

Hi Kyc

On the official Gluster website they say that balance-alb is the best option.
Please read this link:
http://www.gluster.org/community/documentation/index.php/Network_Bonding

Have you tested with this option?
And please, if you do any bonding tests in the future, tell me about it.

And if you want to see other good practices, see this link:
http://www.gluster.org/community/documentation/index.php/HowTo

Best regards
Cesar
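
For completeness, here is a rough sketch of what a balance-alb bond could look like in /etc/network/interfaces, following the mode that Gluster page recommends. The interface names and address are invented examples; balance-alb needs no switch configuration, but its receive balancing works through ARP negotiation, so it only applies to IPv4 traffic:

    # example NICs eth2/eth3 in adaptive load balancing (balance-alb)
    auto bond1
    iface bond1 inet static
        address 192.168.20.11
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode balance-alb
        bond-miimon 100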
 
It is possible that ALB could be a better protocol to use for bonding, but unfortunately I can't test this on my production servers.
 
Cesar,

Looking at our documentation, we can't use ALB. Our servers share NIC Port-1 with our IMM module. When using ALB we aren't able to access the IMM module at all, so we went with XOR instead.
 
