Can I set up a cluster across two different datacenters?
Server 1 (private IP 1) ------- Firewall ------ Public IP 1 -------------- Internet -------------- Public IP 2 ------------- Firewall -------------- Server 2 (private IP 2)
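For reference, a sketch of what would have to pass through both firewalls, assuming default Corosync and pre-Nautilus Ceph ports (adjust to your setup):

# Corosync cluster traffic (UDP; multicast by default, so a unicast
# transport would be needed across the Internet)
5404-5405/udp
# Ceph monitors
6789/tcp
# Ceph OSDs and MGR
6800-7300/tcp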
OK, I finally found and resolved my problem.
I used RBD snapshots, so before moving the disk to another Ceph cluster, the snapshots have to be deleted first.
These snapshots are not visible in Proxmox.
Warning: don't forget to unprotect the snapshots first, otherwise you get an error.
After that I could move the disk without any problem.
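For reference, a minimal sketch of the cleanup with the rbd CLI, assuming a pool named mypool and an image named vm-100-disk-0 (both names hypothetical):

$ rbd -p mypool snap ls vm-100-disk-0
# protected snapshots must be unprotected first, otherwise removal fails
$ rbd -p mypool snap unprotect vm-100-disk-0@mysnap
$ rbd -p mypool snap rm vm-100-disk-0@mysnap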
Thanks Alwin for the...
Nice, but you have to do a vzdump every day. Is the differential just there to optimize space on ZFS?
So it should take a very long time to run vzdump for the whole datacenter, right?
Another question: can we use some Ceph-native method for the backups?
I tried eve4pve-barc but it...
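For what it's worth, the Ceph-native way to do differential backups is rbd export-diff (which, as far as I know, is what eve4pve-barc builds on); a minimal sketch, with mypool and vm-100-disk-0 as hypothetical names:

# initial snapshot and full export
$ rbd snap create mypool/vm-100-disk-0@base
$ rbd export-diff mypool/vm-100-disk-0@base base.diff
# later: export only the changes since @base
$ rbd snap create mypool/vm-100-disk-0@today
$ rbd export-diff --from-snap base mypool/vm-100-disk-0@today today.diff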
I have two Ceph clusters and I moved one VM disk from one to the other, with the option to delete the source disk set.
So now I have one unused disk 0 on the old Ceph pool.
I unset the VM's protection flag and tried to delete the disk.
I get:
Error with cfs lock 'storage-myceph': rbd snap purge...
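A manual cleanup with the rbd CLI should get rid of the leftover image; a sketch, assuming the image is vm-100-disk-0 on pool myceph (hypothetical names):

# unprotect any protected snapshots first (see above), then:
$ rbd -p myceph snap purge vm-100-disk-0
$ rbd -p myceph rm vm-100-disk-0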
OK, I managed to recreate the OSD.
$ ceph osd out <ID>                 # mark the OSD out so data migrates off it
$ service ceph stop osd.<ID>        # stop the OSD daemon
$ ceph osd crush remove osd.<ID>    # remove it from the CRUSH map
$ ceph auth del osd.<ID>            # delete its authentication key
$ ceph osd rm <ID>                  # remove the OSD from the cluster
I deleted the partitions on my disk and ran:
ceph osd crush remove osd.<ID>
After that, I created the new OSD in the GUI.
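To wipe the old partitions, something like this should also work instead of deleting them by hand (a sketch; the device name is an example, and this is destructive):

# destroy the GPT/MBR partition tables on the old OSD disk
$ sgdisk --zap-all /dev/sdX
$ partprobe /dev/sdX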
We upgraded from Proxmox 4.4, and from Hammer to Luminous, successfully.
My Ceph cluster is healthy.
I'm using an SSD for the journal.
I tried to migrate one OSD to BlueStore:
- wait for the rebuild to finish
- create the new OSD
=> the OSD was created, but the WAL size was only 1 GB
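If I read the docs right, BlueStore takes the WAL/DB sizes from ceph.conf at OSD creation time, so they can be raised before recreating the OSD; a sketch with example sizes (values in bytes, and hypothetical):

[osd]
bluestore_block_wal_size = 2147483648    # 2 GiB
bluestore_block_db_size = 32212254720    # 30 GiB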
I tried to delete the OSDs and recreate them on node 2.
iostat shows no difference compared to a working node.
On node 2 with hdparm:
root@GPL-HV3302:/var/log/ceph# hdparm -t -T /dev/nvme0n1
Timing cached reads: 2472 MB in 1.99 seconds = 1240.35 MB/sec
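hdparm mostly measures cached reads; for a comparison that matters to Ceph, a 4k random-read test with fio on the journal device would be more telling (reads only, but double-check the device name):

$ fio --name=randread --filename=/dev/nvme0n1 --ioengine=libaio \
    --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based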
Thanks for the help.
@spirit: yes, all nodes are the same:
Node1: OK => 50 VMs
Node2: CPU 30% average usage => only 2 VMs
Node3: OK => 20 VMs
Node4: OK => 10 VMs
Node5: OK => 10 VMs
If I move a VM from another node to node 2, the CPU usage of the VM goes up to 30%.
We have a 10G storage network with...
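One thing worth comparing between node 2 and the others is CPU frequency scaling, since a different governor could explain the higher usage; a quick check on each node:

# governor and current frequency per core
$ cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
$ grep MHz /proc/cpuinfo | sort | uniq -c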