Seems they still don't have snapshots :-(
Hi Dietmar and all
Pardon my ignorance, can anyone explain the importance of snapshots for GlusterFS in easy terms?
I want to know why this feature is so important.
Best regards
Cesar
Hi to all
Considering that this is within the context of this thread, I think I can ask the following question:
In terms of networking, is it possible to apply the configuration shown in this diagram and have it work well?
(the goal is to gain network speed and get HA on the network for the communication with these storages)
View attachment 1547
Best regards
Cesar
Well, it's not specific to Gluster, but all modern storage systems have snapshot features, which are great for doing tests, rollbacks, etc...
Also, snapshots are used to create base images and to do linked clones.
Yep, this topology is the way you'd want to go. If possible, go for 10Gb Ethernet or InfiniBand on your storage backend.
This is pretty much what I'm running in production.
In my environment everything is on 1Gb Ethernet: the storage servers (Gluster) have 8x 1Gb connections bonded in LACP (both server and switch need configuration), the Proxmox servers have 2x 1Gb bonded in XOR, and things run pretty well.
If I were to change anything I would put the storage network on 2x 10Gb Ethernet in active/passive. There's nothing really wrong with the 1Gb network, it's fast enough for right now, but with future growth and from a cabling perspective the 10Gb would have been a better plan. I'm not too concerned with my rollout as we have a 2-year lease program with our hardware providers, so I don't have to live with procurement mistakes for very long.
Edit:
Sorry, I didn't address your second point.
From what I recall, in balance-rr only your outgoing packets will be balanced, and incoming packets come back on the same interface they were sent from. I tested with balance-rr and got mixed results, but balance-xor was pretty positive. If you can, I'd strongly suggest getting a switch that can do LACP; it's far easier to set up and you get expected load distribution as a result. The only reason I'm using balance-xor is that I've run out of port-channels on my switch (it only has 6).
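In case a concrete example helps, here is a rough sketch of what an LACP (802.3ad) bond can look like in /etc/network/interfaces on a Proxmox/Debian host. The interface names (eth0/eth1), the bond name and the address are placeholders, so adjust them to your own hardware, and the switch ports also have to be configured as a matching LACP port-channel:

    # storage network bond; requires a matching LACP port-channel on the switch
    auto bond0
    iface bond0 inet static
        address 192.168.100.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2

If the switch can't do LACP, setting bond-mode to balance-xor works without any port-channel on the switch, but the load distribution then depends purely on the hash (layer2 hashes on MAC addresses, so traffic to a single destination always lands on the same NIC).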
An LACP switch will be necessary,
I will study more about this.
Thanks Kyc !!!
- Just a question about your strategy:
If you are using balance-xor, and the Gluster client needs to transmit to the Gluster servers for each write in an orderly fashion, don't you hit your Gluster servers by transmitting the data twice?
Best regards
Cesar
I'm pretty sure that in XOR the bonding module does MAC address matching, so that only one NIC is used when communicating to any specific destination MAC. When I did my testing with multiple streams I didn't get full transfer speeds with XOR, I got around 140MB/s, but I was topping out at 60MB/s using balance-rr and I was getting packet loss.
Regardless of what you're trying to do, on 1Gb links you'll never exceed 120MB/s on a single transfer. Even with LACP you can't exceed the transfer speed of a single link for a single TCP connection.
Your mileage may vary; I'm using Intel I350 quad-port adapters in each server, so you might get better results from another vendor. I'm also using the stock kernel modules that Proxmox provides; I might get better results if I compile the drivers directly from Intel.
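For anyone who wants to reproduce this kind of test: the kernel exposes the bond state (mode, hash policy, per-slave link status) under /proc/net/bonding/, and iperf with parallel streams shows whether traffic actually spreads across the links. The bond name bond0 and the address 192.168.100.10 below are placeholders for your own setup:

    # show the bonding mode, transmit hash policy and slave status
    cat /proc/net/bonding/bond0

    # a single stream is limited to one link's speed...
    iperf -c 192.168.100.10 -t 30
    # ...while several parallel streams can spread across more than one link
    iperf -c 192.168.100.10 -t 30 -P 4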
Re-Edit: Mr. Tom of the PVE team told me that PVE has the latest driver versions for the Intel NICs, and you can corroborate that by running "lshw" on the CLI and comparing the result with the Intel web portal. I did that after buying several Intel PRO/1000 PT Dual Port Server Adapters, and I saw that it was right.
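If anyone wants to check the driver on their own host, lshw and ethtool both report it; eth0 below is just an example interface name:

    # list network adapters and the driver each one uses
    lshw -class network

    # show driver name, version and firmware for a single interface
    ethtool -i eth0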
Good to know that my laziness hasn't cost me any speed
Cesar,
Looking at our documentation, we can't use ALB. Our servers share NIC Port-1 with our IMM module. When using ALB we aren't able to access the IMM module at all, so we went with XOR instead.