So I ran that command to remove the interval, and re-enabled the alerts module. Now I see this in the mgr logs:
-1 ceph_set_health_checks check ALERTS_SMTP_ERROR unexpected key count
1 client.0 error registering admin socket command: (17) File exists
-1 client.0 error registering admin socket...
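For reference, removing the interval and re-enabling the module comes down to something like this (the config key is the one from the alerts docs):
ceph config rm mgr mgr/alerts/interval
ceph mgr module disable alerts
ceph mgr module enable alerts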
So I have a Proxmox Ceph cluster with 4 nodes, and I've been trying to use the alerts module.
I've configured everything as instructed in the docs: https://docs.ceph.com/docs/master/mgr/alerts/
The problem is that when I try to execute "ceph alerts send" I get this error ...
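For context, the setup on that page boils down to commands like these (the SMTP host and addresses below are just placeholders, not my real values):
ceph mgr module enable alerts
ceph config set mgr mgr/alerts/smtp_host smtp.example.com
ceph config set mgr mgr/alerts/smtp_destination alerts@example.com
ceph config set mgr mgr/alerts/smtp_sender ceph@example.com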
So about the image test.
I went ahead and demoted the image on the main cluster, and promoted it on the backup cluster.
On the main cluster:
rbd mirror image demote mirror1/vm-101-disk-0
On the backup cluster:
rbd mirror image promote mirror1/vm-101-disk-0
I was then able to boot the VM on...
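In case it helps anyone: to double-check which side is primary after the switch, something like this should do it:
rbd mirror image status mirror1/vm-101-disk-0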
So I have followed the article mentioned by Alwin, and I am testing this solution.
I now have one image that is syncing from the master cluster to the backup cluster.
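If I understood the article correctly, the per-image setup on the master cluster goes roughly like this (journal-based mirroring, with the pool in image mode):
rbd mirror pool enable mirror1 image
rbd feature enable mirror1/vm-101-disk-0 journaling
rbd mirror image enable mirror1/vm-101-disk-0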
If I run this on the backup cluster: rbd mirror pool status mirror1 --verbose
I get:
vm-101-disk-0:
global_id...
No. I tried all sorts of combinations and did not manage to set 2 IPs from the GUI. But I went another way to solve my situation, one that does not involve 2 IPs on the same NIC.
Anyway, the thing is I did not find a way to set multiple IPs from the GUI.
So there is no way to have this on unprivileged containers, right, and the way to do it is to use a privileged container?
But those seem not to be as safe as unprivileged ones.
At least that's what the Proxmox Wiki says (https://pve.proxmox.com/wiki/Linux_Container):
So I have this CentOS 7 container and everything seems fine.
Now, I don't know if this is Proxmox related (probably not), but I have installed some software that checks the permissions of /dev/{null,random,urandom}, and they are not how it expects them to be.
From what I understand the owner...
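For comparison, this is what you normally get on a plain host (ls -l /dev/null /dev/random /dev/urandom, timestamps trimmed):
crw-rw-rw- 1 root root 1, 3 /dev/null
crw-rw-rw- 1 root root 1, 8 /dev/random
crw-rw-rw- 1 root root 1, 9 /dev/urandom
(If the owner in the container shows up as nobody/nogroup, I assume that would be the unprivileged UID/GID mapping at work.)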
Well, not really; I haven't really been using it. I installed it, checked it out a bit, and that's it. Everything I need is already in the Proxmox GUI, and for the rest I just use the CLI commands.
Why, did you have problems?
I'm not sure, but in this setup isn't it just the vmbr0 OVS bridge/switch that is connected to the physical port?
I would actually need to be able to set multiple IPs that can be accessed on the same physical interface. So in this config (yours) I would need to have vmbr1, 2 and 3 connected to port enp4s0 ...
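Just to make the idea concrete, this is the kind of /etc/network/interfaces layout I mean, except using OVS internal ports on vmbr0 instead of extra bridges (the port names and addresses here are made up, just a sketch):
auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports enp4s0 ip1 ip2

allow-vmbr0 ip1
iface ip1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    address 192.168.10.2/24

allow-vmbr0 ip2
iface ip2 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    address 192.168.20.2/24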
So I've been searching quite a bit about this, and cannot see how I could do it.
I have a bond that has 2 slaves (2 physical NICs).
On that bond I have an IP set.
This bond is used just for Ceph.
The thing is that I would want to add another IP on this bond.
Is this possible without going...
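To put it in /etc/network/interfaces terms, what I want is something like this (addresses, NIC names and bond mode are just examples):
auto bond0
iface bond0 inet static
    address 10.10.10.2/24
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    up ip addr add 10.10.20.2/24 dev bond0
The last line is one way I know to attach a second IP, but I would rather have something the GUI understands.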
I'm glad I can help, just like other people around here have helped me.
About the hard drive thing: I don't think you can choose RAID1. You need 2 hard drives for RAID1, and I think you will get an error if you try to go with that option.
You will probably need to go with RAID0.
So is your filesystem ZFS?
When you install the nodes you should not use ext4; you need to use ZFS.
If you go to a node and then select Disks > ZFS, you should see a ZFS pool.
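You can also check from the shell with something like:
zpool list
zpool status rpool
(rpool is the default pool name when you install Proxmox on ZFS; yours may differ.)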
You just select "Datacenter" and then select "Replication".
There you need to add an entry for each VM, and specify the target node and how often to replicate.
So let's say you have vm1 on node1.
You will set its ID, and node2 as its target.
If you migrate that VM manually to node2...
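If you prefer the CLI, the same job can be created with pvesr; something like this (the VM ID, job number and schedule are just examples):
pvesr create-local-job 100-0 node2 --schedule "*/15"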
1. You don't need shared storage to have failover. You just need to use ZFS as your filesystem, and then configure replication on the VM (have it replicate to the other node in the cluster).
2. I'm not sure what you are asking here. What nodes are you talking about? The 2 physical nodes?
So...