Sheepdog Questions

Jerome Haynes

New Member
Jan 20, 2017
Hey guys,

A couple of questions regarding using sheepdog for storage:

1. Sheepdog requires 3x nodes. Do all three of those nodes need to be equally powerful, or can just two be, with the third functioning as a witness (a stand-in)?

For the above, the documentation just wasn't very clear on whether this is required for this specific storage type.

2. Should I be using a hardware RAID card with Sheepdog?

3. What's your experience of performance with Sheepdog?

Kind Regards,
Jerome Haynes
 
Sheepdog is shared storage, so it doesn't care about the hardware being equal. You can run Sheepdog on top of a RAID card and it will work. I don't have experience with Sheepdog myself; I chose GlusterFS instead (Sheepdog is still not recommended by Proxmox for production environments).
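For reference, going the GlusterFS route in Proxmox is only a couple of commands. This is a sketch under assumptions: the volume name `gv0`, node names, and brick paths are placeholders for your own setup.

```
# 1) On the Gluster nodes: create and start a replicated volume
#    ("gv0", node names, and brick paths are placeholders).
gluster volume create gv0 replica 3 \
    node1:/data/brick node2:/data/brick node3:/data/brick
gluster volume start gv0

# 2) On a Proxmox node: register the volume as shared VM storage.
pvesm add glusterfs gluster-vm \
    --server node1 --server2 node2 --volume gv0 --content images
```

With the storage registered on all cluster nodes, VM disks placed on it are visible everywhere, which is what makes live migration possible.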
 
1. Sheepdog requires 3x nodes. Do all three of those nodes need to be equally powerful or can just two be with the third functioning as a witness?

The software will happily accept inhomogeneous hardware. The downside: the slowest sheep limits the guaranteed IOPS and transfer speed for the whole cluster.

2. Should I be using a hardware RAID card with Sheepdog?

No, not required. You can (and should) have more than one single disk per sheep. Redundancy is then built up between different servers.
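To illustrate the multi-disk-per-sheep idea, here is a sketch of how it can look on the command line. The paths and copy count are assumptions, not a recommendation for your hardware:

```
# On each node: start one sheep daemon backed by several plain local
# disks (first path holds metadata; all paths are example mount points).
sheep /var/lib/sheepdog/meta,/mnt/disk1,/mnt/disk2

# Once, on any node: format the cluster so every object is stored in
# 3 copies -- redundancy lives across servers, not on a RAID card.
dog cluster format --copies=3
```

This is why a hardware RAID card is unnecessary: Sheepdog already replicates objects across nodes, so a local RAID layer mostly adds cost and another failure mode.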

3. What's your experience of performance with Sheepdog?

I did a short test just the other day with Proxmox 4.4 on
  • a NUC6I5SYH
  • an Acer DeskMini with an Intel i5
  • an APU-2C4 with an AMD GX-412TC SoC (primarily as a third node for quorum)
My results:
  • Sheepdog felt faster than a two-node mirroring GlusterFS setup on the first two machines, which I had tested beforehand. I have no comparison to Ceph, though.
  • Sheepdog got faster when I disabled the underpowered APU sheep.
This test was done without any optimization, just to check the behavior for my private use case.

There are other problems with Sheepdog, as documented in other threads in this forum. Proxmox states it is not stable enough to use, so I stepped back from actually using it...

(My "cluster" is completely turned off for now. Not for any technical reason.)
 

How good is the performance with GlusterFS? To my understanding, you use it as shared storage and the data partly sits on all the nodes? I've only ever used servers with direct-attached storage before, so my knowledge of shared storage is a bit limited. Would this require 10GbE connectivity? Does this work 100% with live migration?


Hi UdoB,

Thank you for your answer. It does sound like an option I can go with if I can't find anything else.

To give you an idea of what I want/need to achieve:

I guess the question I'm really asking is: what is suitable for my needs?

I have 3 nodes that I'll be upgrading all the hardware on. They will be keeping the SATA drives they have in them.

I'd actually only like to upgrade hardware on *2* of these nodes if possible but have no issue paying the extra costs if three is a requirement.

I want the nodes to have HA and live migration, so that if a VM goes down on one node it can come back up on another.

If it's possible to use 3 nodes but have two of them high-spec and the third acting as a "witness", that would be ideal.

This will be using 1GbE networking, but if needed I can upgrade to 10GbE, though that is a major cost jump across 3 nodes as it would require a 10GbE switch.

Kind Regards,
Jerome Haynes
 

The performance of GlusterFS is good, and it depends on what kind of service you will run on the shared storage. Generally, databases are not recommended on this kind of shared storage (DRBD being the exception). 10GbE connectivity is not required, but you will see a huge difference, especially in live migration. I use a 1GbE network, since 10GbE requires a huge investment. So yes, live migration works fine, 100% :)
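As a rough back-of-the-envelope for why 10GbE helps live migration so much, here is a small sketch. The VM size and link efficiency are illustrative assumptions, not benchmarks:

```python
# Rough estimate of how long a live migration's RAM transfer takes
# on 1 GbE vs 10 GbE. The 90% link efficiency is an assumption.

def transfer_seconds(ram_gib, link_gbit, efficiency=0.9):
    """Seconds to push ram_gib GiB over a link_gbit Gbit/s link,
    assuming `efficiency` of the raw line rate is usable."""
    total_bytes = ram_gib * 1024**3
    usable_bytes_per_s = link_gbit * 1e9 / 8 * efficiency
    return total_bytes / usable_bytes_per_s

for link in (1, 10):
    print(f"8 GiB VM over {link} GbE: ~{transfer_seconds(8, link):.0f} s")
```

On these assumptions an 8 GiB VM takes over a minute to move on 1GbE versus well under ten seconds on 10GbE, which is the "huge difference" in practice.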
 

I'm really on the fence between DRBD and GlusterFS. I can tell you that it will be a mixture of virtual machines, but one of them is a cPanel server.

Kind Regards,
Jerome Haynes
 
cPanel means you'll have a LAMP stack, I guess, so stick with DRBD9; it's a good choice for HA. Don't go for GlusterFS.
 
Yeah, it would be that. I'd be hosting other virtual machines, though, that do other tasks. Does DRBD have the option to replicate only certain virtual machines? I don't need everything to be HA, and for any clients needing specific virtual machines, I'd obviously want them to pay extra for that option.
 
Well, in that case you don't have to use the whole disk for DRBD; sync only a specific partition. You can size that partition as you like, and the remaining partitions you can use for other VMs.
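A minimal DRBD9 resource that replicates a single partition rather than a whole disk could look roughly like this. The node names, IP addresses, and backing partition are all placeholders for your setup:

```
# /etc/drbd.d/r0.res -- sketch only; hostnames, addresses, and the
# backing partition (/dev/sdb1) must match your environment.
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;      # only this partition is replicated
    meta-disk internal;

    on pve1 {
        address 10.0.0.1:7789;
        node-id 0;
    }
    on pve2 {
        address 10.0.0.2:7789;
        node-id 1;
    }
    connection-mesh {
        hosts pve1 pve2;
    }
}
```

VMs whose disks live on `/dev/drbd0` get replication and can fail over; everything else stays on local, unreplicated partitions.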
 
Now my only question is: are there any specific tutorials for DRBD9 on Proxmox 4 around? They seem to have been taken off the wiki. Or do you just follow the DRBD install instructions, and it interacts with Proxmox via a plugin?

Kind Regards,
Jerome Haynes
 
Actually, it's pretty straightforward: you just need to create the shared storage, and it doesn't matter whether the host is Proxmox or any other distro.
You can use any DRBD9 guide to create the shared storage.
I didn't use it for my VMs, since I don't have a 10GbE network, so I use it only for data (MySQL/Apache) to keep the impact on the network minimal.
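The generic DRBD9 bring-up from any guide boils down to a few commands. The resource name `r0` is a placeholder for whatever you defined in your `.res` file:

```
# Run on every node (resource name "r0" is a placeholder):
drbdadm create-md r0     # write DRBD metadata onto the backing partition
drbdadm up r0            # attach the device and connect to the peers

# On ONE node only: declare it the initial sync source.
drbdadm primary --force r0

# Watch replication state and sync progress.
drbdadm status r0
```

Once the initial sync finishes, the `/dev/drbd0` device can be formatted and mounted like any other block device, independent of the host distro.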
 
