Sorry, but I had no time to check and dig deeper into the MySQL problem, so I restored from backup.
I'll try to run more tests tomorrow on the test cluster.
OK, people, let's keep the peace. Nobody here is cleverer than anyone else in the universe; we simply have some trouble with the move-disk function,
and I am trying to understand whether it is safe to use.
Does anybody know how it interacts with the caches involved (RAID controller, disk, host system, VM)? And why does it affect only MySQL?
Still in progress. I don't want to run tests on the production cluster, so I'm installing another cluster for testing.
Yes
One by one; no parallel moves.
Yes to all; there are also hardware controllers with BBU and cache.
No network errors.
It's strange that all the other VMs, the ones without MySQL, moved fine.
May I join your conversation?
We have a PVE cluster with 3 nodes (all of them on 4.3-3) and use an NFS share based on FreeNAS.
Not long ago we discovered the "Move disk" feature. It is very convenient and gives us the ability to migrate a VM from one storage to another without a backup/restore cycle.
For a...
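For reference, the same move can be scripted from the CLI. A minimal sketch, assuming a hypothetical VM id 100, disk virtio0, and a target storage named "freenas-nfs" (all placeholders, not from the post); on PVE 4.x the GUI button corresponds to qm move_disk:

```shell
#!/bin/sh
# Build the move-disk command; --delete removes the source image after the copy.
# VM id, disk slot, and storage name below are illustrative placeholders.
VMID=100
DISK=virtio0
TARGET=freenas-nfs
CMD="qm move_disk $VMID $DISK $TARGET --delete"
echo "$CMD"   # on a real node, execute the command instead of echoing it
```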
Hi! I think I've hit the same bug.
Will the fix require rebooting or reinstalling the node?
I have 3 nodes; 2 of them were installed from the 4.0 ISO and updated to 4.3-3. Here is the lsblk output:
root@pve3:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 820.2G 0 disk
├─sda1 8:1 0...
Well, I've run the tests myself.
a) Failure of an HDD (in my case a RAID0 of 2×300 GB 15K SAS drives) on pve5, hosting VM id 550.
On a working node:
1. drbdmanage unassign vm-550-disk-1 pve5
2. drbdmanage delete-node -f pve5
On the failed node:
1. replace the failed drives
2. parted /dev/sda mktable gpt
3. parted...
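The recovery steps above can be sketched as a script. This is a dry-run sketch, not a tested procedure: node name pve5 and VM id 550 come from the post, while /dev/sda as the rebuilt device is a placeholder, and the parted step is shown only as far as the post goes. Set DRY_RUN=0 only on a real cluster, since these commands are destructive:

```shell
#!/bin/sh
# Dry-run wrapper: echoes each command instead of executing it unless DRY_RUN=0.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# On a surviving node: release the resource and drop the dead peer.
run drbdmanage unassign vm-550-disk-1 pve5
run drbdmanage delete-node -f pve5

# On the repaired node, after swapping the failed drives:
run parted /dev/sda mktable gpt
```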
I have seen this warning, BUT: it works without drbdmanage.cfg! I don't know how.
There was some trouble after changing the LV names. Try the default naming scheme.
Well.
I had success with:
1. install version 0.97-1
2. run init on all nodes (3 nodes already set up)
3. run add-node from the master
4. update to the latest version
root@pve4:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 279.4G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 127M 0 part...
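The four steps above can be sketched as follows. This is a dry-run sketch under assumptions: the peer name pve5 and IPs 100.100.0.4/100.100.0.5 are placeholders (the .4 address appears elsewhere in the thread), and each drbdmanage init must be run on the node that owns that IP. Set DRY_RUN=0 only on real nodes:

```shell
#!/bin/sh
# Dry-run wrapper: echoes each command instead of executing it unless DRY_RUN=0.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# 1. pin the older release (on every node)
run apt-get install -y drbdmanage=0.97-1
# 2. initialise drbdmanage on each node with its own IP (repeat per node)
run drbdmanage init -q 100.100.0.4
# 3. from the master, register each peer by name and IP
run drbdmanage add-node pve5 100.100.0.5
# 4. upgrade back to the latest packaged version
run apt-get install -y drbdmanage
```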
Hi.
It looks like I found 2 ways to solve this problem:
1. Install a different version of drbdmanage:
apt-get install drbdmanage=0.97-1 -y
and continue following the manual.
2. Install the latest version and run init on BOTH nodes (drbdmanage init -q IP),
and after that run add-node, overwriting the control volumes.
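Workaround 2 can be sketched like this (dry-run guarded; the IPs and the node name pve5 are placeholders, and each init must run on the node owning that IP). Set DRY_RUN=0 only on real nodes:

```shell
#!/bin/sh
# Dry-run wrapper: echoes each command instead of executing it unless DRY_RUN=0.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run drbdmanage init -q 100.100.0.4   # on node A, with its own IP
run drbdmanage init -q 100.100.0.5   # on node B, with its own IP
# From one node, add the peer; confirm overwriting its control volume when asked.
run drbdmanage add-node pve5 100.100.0.5
```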
Hi!
Sorry for my English.
First of all, I understand that DRBD is only a tech preview, but it's very interesting.
Our task is to build a 3-node cluster with full data redundancy.
Right now I'm testing a 2-node cluster and have run into some trouble.
Can anybody post a step-by-step guide for what to do if:
1...
Hi.
Some months ago I installed PVE 4.2 with DRBD9 without any problems, following the wiki.
But today I have a problem.
After drbdmanage init -q I received the following output:
root@pve4:~# drbdmanage init -q 100.100.0.4
WARNING:root:Could not read configuration file '/etc/drbdmanaged.cfg'
Empty drbdmanage...
Hello.
And what should I do if PVE did not see the adapter?
I tried to install the driver manually, but got an error:
make
Makefile:20: *** Aborting the build: Linux kernel /lib/modules/2.6.32-32-pve/build source not found. Stop.
UPDATE: problem solved.
1. changed the repository to the free one.
2. install...
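The second step above is truncated. A common cause of the "kernel build source not found" error when building a module is missing kernel headers, so a plausible completion (an assumption on my part, not necessarily the author's exact steps) is installing the pve-headers package matching the running kernel, e.g. pve-headers-2.6.32-32-pve here, then re-running the build:

```shell
#!/bin/sh
# Dry-run wrapper: echoes each command instead of executing it unless DRY_RUN=0.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run apt-get update
run apt-get install -y "pve-headers-$(uname -r)"  # headers for the running kernel
run make                                          # re-run the driver build
```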