I am using unprivileged CTs for the first time.
Build: Proxmox VE 5
2x Dell R720 with 6x SAS drives each in hardware RAID10, running the default local LVM-thin storage. Dual E5-2640 CPUs, 64GB RAM, etc.
Findings:
Downloaded the CentOS 6 template (v20161207) via the Proxmox VE GUI. Did a yum update. Everything works...
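For reference, the CLI route should be something like this via pveam (the exact template file name is my guess, take the real one from the pveam available output):
# pveam update
# pveam available | grep centos-6
# pveam download local centos-6-default_20161207_amd64.tar.xz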
Thanks Udo. I will read the wiki regarding lvm.conf. I followed this howto initially and learned from it:
https://www.youtube.com/watch?v=OjORUwDY63U&list=PLNSYULi38glvExordFRiEX9a6B09872sm&index=1
Thanks for the help.
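From what I have read so far, the relevant bit for DRBD seems to be the filter line in the devices section of /etc/lvm/lvm.conf, so that LVM ignores the raw backing disk and only sees the PV on the DRBD device. On my boxes that would presumably be something like the line below (not verified on my setup yet, so corrections welcome; /dev/sdb1 is my DRBD backing partition):

filter = [ "r|/dev/sdb1|", "a|.*|" ]

i.e. reject the backing partition and accept everything else, then re-run pvscan to check what LVM sees.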
Hi Udo
This is the first I've heard of lvm.conf. I wasn't aware that I need to configure anything in that file. Should I? Do you have an example perhaps?
root@jt1:~# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda3  pve  lvm2 a--  558.25g 16.00g
  /dev/sdb1       lvm2 a--    1.09t   1.09t
Hi All
Sorry to bother you again gents, but I have an issue that I have been trying to resolve for the last 3 days by searching the forums. I had a power failure and all my servers went down at the same time. I ended up with a DRBD setup like this:
root@jt1:~# /etc/init.d/drbd status
drbd driver...
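In case it is relevant: from what I have read so far, if a pair comes back split-brained after a failure like this (e.g. StandAlone / WFConnection after both sides were Primary), the usual manual recovery is roughly the commands below, run on the node whose changes you are willing to discard (r0 used as the example resource). I have not actually run this yet, so please correct me if it is wrong for a Primary/Primary setup:

# drbdadm disconnect r0
# drbdadm secondary r0
# drbdadm connect --discard-my-data r0

and on the other node, if it is also sitting in StandAlone:

# drbdadm connect r0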
Thanks Dietmar. I tried what you said beforehand, but I got a 'files in use' error. After I halted the 3 other nodes I could do it. Output:
root@jt1:~# pmxcfs -l
[main] notice: unable to acquire pmxcfs lock - trying again
[main] crit: unable to acquire pmxcfs lock: Resource temporarily...
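For anyone finding this thread later, the general local-mode procedure (as I understand it from the wiki, so double check before relying on it) is roughly:

# /etc/init.d/pve-cluster stop
# pmxcfs -l
(edit /etc/pve/cluster.conf while /etc/pve is mounted locally and writable)
# killall pmxcfs
# /etc/init.d/pve-cluster start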
Thanks Dietmar. I tried that already:
root@jt3:~# pvecm expected 1
cman_tool: Cannot open connection to cman, is it running ?
root@jt3:~# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup...
So I configured fencing for the first time last night. I configured cluster.conf incorrectly, which I only found out when we had a power failure.
So the servers came back online with this broken file, and now the cluster won't start because cman has an issue with the file.
I can't fix the file...
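Side question: is there a way to validate the file before cman tries to load it? I was thinking of something like the two commands below, assuming those tools are even present on the nodes:

# xmllint --noout /etc/pve/cluster.conf
# ccs_config_validate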
Sorry, it's been a very busy day and I forgot to post that. Here you go:
Set 1: jt1 and jt2 sharing drbdr0:
root@jt1:~# drbd-overview
0:r0 Connected Primary/Primary UpToDate/UpToDate C r----- lvm-pv: drbdr0 1116.71g 0g
root@jt1:~#
Set 2: jt3 and jt4 sharing drbdr1:
root@jt3:~# drbd-overview...
By 'the same name' I mean that when you select the DRBD device to create the LVM on in the GUI, they both came up with the same name, e.g. drbdr0. So then I gave set1 the LVM name drbdr0 on DRBD drbdr0, and I gave set2 the LVM name drbdr1 on DRBD drbdr0 (which should have been drbdr1).
After creation in the gui under the...
Thanks, I already checked those. I even deleted all the VG and DRBD stuff, also in the Proxmox GUI, and re-added everything, but still the same. I think it is a bug somewhere. Either way both my DRBDs are working, they are just both called drbdr0 lol.
So I now have my cluster set up and it is working well. I...
Small issue. I set up DRBD on the 4 Dells. 2x2 setup with drbdr0 on set1 and drbdr1 on set2. I made a typo with this command on set2:
# vgcreate drbdr0 /dev/drbd1
should have been
# vgcreate drbdr1 /dev/drbd1
I deleted the VG and added it again, so vgscan is now correct:
Reading all physical volumes...
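(In hindsight I assume a plain rename would also have worked instead of deleting and re-creating, assuming nothing on that VG was in use at the time:

# vgrename drbdr0 drbdr1

but the delete and re-create route got me to the same place.)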