How to Set Up a Cluster with additional hard drives on each node (best practice).

  • Thread starter: adavies01 (Guest)
Hi,

I'm new to Proxmox, having moved over from ESX, and I like the product. I've done several searches but haven't found the right answer(s). I'm hoping someone here can either point me to the right article (or the right search terms) or assist me directly. Here are the details and what I'm trying to do.

A cluster containing 4 nodes (master + 3), possibly expanding the cluster later.
Each node has the following hardware: a small 32GB SSD as the primary drive, two additional 500GB SATA drives, 16GB RAM, and a quad-core Intel CPU (no RAID).
The intention is to run four Linux server VMs on each node: two VMs installed on each SATA drive, or spanning an extended VG (across the two 500GB SATA drives).

I've installed one node and promoted it to master, and I've installed two others and added them as slaves. On the master I was able to create disks from the two 500GB SATA drives and share them out. However, I'm having a particularly difficult time adding HDDs on the slaves, either from the master or directly on the node. I have created a VG (vgcreate). However, when I try to add the VG through the browser on the node where the HDDs are directly attached, I'm told I do not have permission. And from the master node, I am not able to see the VG to add it.

What am I after ---> What is the recommended way to set up a cluster like this? Is my approach sound, or is there a better approach? Am I better off having standalone nodes?

Thanks in advance,
adavies01
 
Hi,
if you have a cluster, you normally also want shared storage. Shared storage is storage that is accessible from all nodes in the cluster - like a SAN (FC or iSCSI), NFS, or DRBD.
If you use DRBD (network RAID 1) you can only use two nodes in the cluster, because both (and only those two) nodes use the DRBD devices as primary (use two different DRBD resources to avoid trouble). With a trick you can use all 4 nodes as a cluster and migrate only between pairs - but this is not supported or recommended.
DRBD is only fun if you have a fast connection between the hosts (for me, 1 Gbit/s isn't enough) and a fast I/O subsystem (e.g. a RAID controller with fast disks). A minimal resource definition is sketched below.
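
For illustration, a minimal sketch of such a DRBD resource file (DRBD 8.x syntax; the node names node1/node2, the replication addresses, and /dev/sdb1 as the backing disk are assumptions - adjust to your setup):

resource r0 {
  protocol C;
  net {
    allow-two-primaries;        # both nodes run the device as primary, as described above
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;        # local backing partition (hypothetical)
    address   10.0.0.1:7788;    # dedicated replication link
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

A second resource (e.g. r1 on the other disk) would keep each node's VMs on a separate DRBD device, as suggested above.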

If your VMs are OpenVZ-based, things look different. In that case only local storage is supported (normally on /var/lib/vz), so a RAID controller makes doubly good sense there!

Udo
 
What am I after ---> What is the recommended way to set up a cluster like this? Is my approach sound, or is there a better approach?

Simply use the same name for the VG on all nodes. Then, you only add one LVM storage on the master with that name.
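
In practice that means running the same commands on every node (a sketch - the VG name "vmstorage" and the device names are just examples; any name works as long as it is identical on all nodes):

pvcreate /dev/sdb1              # initialize the partition as an LVM physical volume
vgcreate vmstorage /dev/sdb1    # create the VG; use the same name on every node
vgextend vmstorage /dev/sdc1    # optionally add the second disk to the same VG

The single LVM storage added on the master then resolves to the local VG of whichever node a VM runs on.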
 
Ah, OK, so on all the nodes, use vgextend vgname /dev/sdb1 (e.g.). First, I take it that I need to set up the volume with fdisk /dev/sdb, setting the type (t) to 8e (Linux LVM)? Then extend the volume using vgextend vgname /dev/sdb1? Does that sound about right?
 
Just tested on a node - this seems to work.

fdisk /dev/sdX - create a new primary partition and set its type to 8e (Linux LVM).
Then pvcreate /dev/sdX1
Then vgextend pve /dev/sdX1 (or whatever the volume group name is)
Go to the web interface, and you can then add the volume under storage as an LVM, giving it a name.
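
Before adding the storage in the web interface, it is worth sanity-checking the result on each node with the standard LVM reporting commands:

pvs    # the new /dev/sdX1 should appear as a physical volume
vgs    # the volume group should show the added capacity under VFree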
 
Simply use the same name for the VG on all nodes. Then, you only add one LVM storage on the master with that name.

So, I've created a new master and node. I used fdisk /dev/sdb, setting the partition type to LVM, then pvcreate /dev/sdb1. I then did vgcreate vmstorage /dev/sdb1. I did the same with /dev/sdc, only I used vgextend vmstorage /dev/sdc1. I then built and added a node and added its disks in the same fashion, first creating the VG and then extending it. From the master's web interface, I then created a new LVM storage, naming it vmstorage. However, I'm not seeing this replicated to the node. What am I doing wrong?
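
(One thing that may help when debugging this: the storage definition created in the web interface is just an entry in /etc/pve/storage.cfg on the master that names the VG - nothing is replicated to the nodes; each node only needs a local VG with exactly that name. A sketch of what such an entry might look like, with the caveat that the exact fields can differ between Proxmox versions:

lvm: vmstorage
        vgname vmstorage
        content images
)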
 
This appears to work. I'm now testing whether I can add more nodes, and whether I should create the VG before or after adding the node to the master.