Thank you very much for your reply Wasim.
I have some more newbie questions about this procedure. I did not realize I would have to make such "big" changes to add new nodes and a pool once this storage is in production, so I would like to be sure this way is correct.
1) I will add this line...
Hi,
now I have a running three-node Ceph cluster with two SSD storage nodes and one monitor.
What I would like to achieve is to add another two storage nodes with spinning drives, create a new pool, and keep these two pools separated.
I suppose I need to edit the CRUSH map, something like here...
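Roughly what I have in mind is the usual decompile/edit/recompile cycle (just my sketch - the pool name, PG counts and rule id below are only placeholders I made up, not something from the wiki):
<code>
# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt: add a new root/bucket for the spinning-drive hosts
# and a new rule that takes from that root, then compile and load it back
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# create the new pool and point it at the new rule
ceph osd pool create hdd 512 512
ceph osd pool set hdd crush_ruleset 1
</code>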
I experienced more errors on one node and finally decided to restart all the pve* services, including rgmanager, on that node. After restarting everything it seems to be solved. :)
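For the record, this is roughly what I restarted (service names as on PVE 3.x, I may have done them in a slightly different order):
<code>
service pvestatd restart
service pvedaemon restart
service pveproxy restart
service rgmanager restart
</code>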
NDK73: I understand the concept of a BBU, but I have no idea why it would be useful only for some storages, as Spirit has written.
I have problems booting my migrated VMs without setting the cache to writeback on the GlusterFS storage - I found this solution somewhere. When I create a new VM on top of Gluster...
Spirit: Why would writeback be useful only for some storages? So without a BBU it is not recommended? I can run my VMs on Gluster only when using writeback; without it, my VMs cannot start due to IO errors.
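For reference, this is how I set it at the moment (the VM id and storage/volume names are just examples from my setup; it can also be done in the GUI under the disk options):
<code>
qm set 154 --virtio0 glusterstore:vm-154-disk-1,cache=writeback
</code>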
Hi,
I am experiencing another problem with the GUI. I ran an unscheduled backup of one of my VMs during the night and it shows "TASK OK", but the task still appears as active in the task list.
Moreover, my cluster seems to be disconnected in the GUI. The other servers are red and all VMs are shown just like a...
You can try installing Debian first and then installing PVE - https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy
I tried it with 3.3 a month ago and it was working.
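If it helps, the key part of that wiki page is adding the Proxmox repository and key and then installing the proxmox-ve metapackage. I am writing this from memory, so please double-check the exact lines on the wiki:
<code>
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get install proxmox-ve-2.6.32
</code>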
Re: Can there be such configuration: 6 machines in the one cluster, but the HA uses o
Yes, you can. Search for failoverdomains - I think this is what you are looking for. Some description can be found here...
I used local ZFS shared via NFS for years; the performance reduction was just a few percent compared to iSCSI. But these days I am moving my VMs to Ceph and Gluster on ZFS because of high availability.
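In case somebody wants to try the same, the NFS export itself was just one ZFS property (the pool/dataset name is only an example; export options can be given instead of "on"):
<code>
zfs set sharenfs=on tank/vmstore
</code>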
Thanks a lot for your answer!
So in my case, if I want to have the Ceph nodes just for a worst-case scenario, I would set up just one failover domain like this?
<code>
<failoverdomain name="A-B" restricted="1" ordered="1" nofailback="1">
<failoverdomainnode name="A" priority="1"/>...
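The full block I have in mind for cluster.conf looks like this (node names A and B are placeholders for my real node names, vmid 154 is the VM from above, and I am not 100% sure whether the domain attribute goes directly on the pvevm element, so please correct me):
<code>
<rm>
  <failoverdomains>
    <failoverdomain name="A-B" restricted="1" ordered="1" nofailback="1">
      <failoverdomainnode name="A" priority="1"/>
      <failoverdomainnode name="B" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <pvevm autostart="1" vmid="154" domain="A-B"/>
</rm>
</code>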
Hi,
I want to run HA VMs only on selected nodes and not on the Ceph nodes, or just have those nodes as a "disaster" backup.
I found the following sentence on this page: http://pve.proxmox.com/wiki/Ceph_Server
If you do not want to run virtual machines and Ceph on the same host, you can just...
OK, I restarted the node where the HA VM was actually running and the VM was not started again. In syslog I can see:
TASK ERROR: command 'clusvcadm -e pvevm:154 -m cl1' failed: exit code 254
It was solved by running
clusvcadm -d pvevm:154
And now it is possible to start that VM and migrate as...
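In other words, the recovery boiled down to disabling the stuck service and enabling it again (I started it from the GUI, which as far as I understand is equivalent to the enable command from the error above):
<code>
clusvcadm -d pvevm:154        # disable the failed/stuck service
clusvcadm -e pvevm:154 -m cl1 # enable it again on the wanted node
</code>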
Hi,
I have configured a three-node cluster with IPMI fencing. I did some tests, ran into some problems, and could not find answers to my newbie questions here, on the wiki, or via Google.
I added one running KVM guest to the HA list and it got started again - in fact restarted. Is this the correct behavior...
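For completeness, the fencing setup is the usual fence_ipmilan style; the relevant cluster.conf fragments look roughly like this (all IPs, credentials and names below are placeholders, not my real config):
<code>
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi-node1" ipaddr="192.168.1.101" login="ADMIN" passwd="secret" lanplus="1" power_wait="5"/>
</fencedevices>

<clusternode name="node1" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="ipmi-node1"/>
    </method>
  </fence>
</clusternode>
</code>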
Hi,
try to find some logs in your MTA, such as Exim or Postfix.
Anyway, I just put this line (found on the internet) into smartd.conf; it also runs some automatic tests on all drives at the weekend:
DEVICESCAN -d removable -n standby -m desired@email.com -M exec /usr/share/smartmontools/smartd-runner -s...
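For anyone copying this: the -s schedule is a regex over type/month/day-of-month/day-of-week/hour. As an example (not necessarily my exact schedule), a short test every night at 02:00 and a long test on Saturdays at 03:00 would look like this:
<code>
DEVICESCAN -d removable -n standby -m desired@email.com -M exec /usr/share/smartmontools/smartd-runner -s (S/../.././02|L/../../6/03)
</code>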
Hi,
I created a striped ZFS pool on two nodes and then put a replicated GlusterFS volume on top. This is just a testing cluster, so there is no spare drive or RAID-Z.
I tried pulling out one drive; the pool went into an unavailable state and all VMs on top of GlusterFS froze.
I discovered that I need to set failmode to...
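For reference, this is the pool property I mean (tank is just an example pool name; which value makes sense here is exactly what I was testing):
<code>
zpool get failmode tank
zpool set failmode=continue tank   # possible values: wait (default), continue, panic
</code>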
That was for a 4M block size; for 128k it is 70 MB/s. I used the command: rados -p ssd bench 300 write -b 131072 -t 5 --no-cleanup
With 4M blocks the CPUs are at around 18%, with 128k around 50%, and around 100% with 8k.
Yes, I put it in my configs as soon as you wrote about it here.
Maybe there is something wrong in my setup. I tried this command on one of the storage nodes: rados -p ssd bench 300 write -b 4194304 -t 1 --no-cleanup, with results around 112 MB/s. iperf shows around 9.4 Gb/s.
Which scheduler do...
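Just to be concrete about what I mean by scheduler - the per-device setting under sysfs, e.g. for sda:
<code>
cat /sys/block/sda/queue/scheduler           # shows e.g. noop deadline [cfq]
echo deadline > /sys/block/sda/queue/scheduler
</code>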