Hello
I have the same problem with Ceph storage.
I thought the backup is done via a snapshot. In that case, a write in the VM should not impact the backup, and vice versa?
Hello,
Can I set up a cluster across two different datacenters?
Server 1 (private IP 1) ------- Firewall ------ Public IP 1 -------------- Internet -------------- Public IP 2 ------------- Firewall -------------- Server 2 (private IP 2)
Thanks
OK, I finally found and resolved my problem.
I was using RBD snapshots, so before moving to another Ceph cluster the snapshots have to be deleted first.
These snapshots are not visible in Proxmox.
Warning: do not forget to unprotect the snapshots first, otherwise you get an error.
After that I could move the disk without any problem.
Thanks Alwin for the...
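For reference, these are roughly the commands I used to clean up before the move; the pool, image and snapshot names are placeholders for my setup:
$ rbd snap ls <pool>/vm-<ID>-disk-1
$ rbd snap unprotect <pool>/vm-<ID>-disk-1@<snapname>
$ rbd snap purge <pool>/vm-<ID>-disk-1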
Hello,
Nice, but you still have to run a vzdump every day. Is the differential only there to save space on ZFS?
So it should still take a very long time to run vzdump for the whole datacenter, right?
Another question: is there a similar technique for backups using Ceph?
I tried eve4pve-barc but it...
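As far as I understand, the Ceph method is based on RBD snapshots plus export / export-diff, something like the sketch below (pool and image names are placeholders, and these are not necessarily the exact commands eve4pve-barc runs):
$ rbd snap create <pool>/vm-<ID>-disk-1@backup-day1
$ rbd export <pool>/vm-<ID>-disk-1@backup-day1 /backup/vm-<ID>-disk-1.day1.img
$ rbd snap create <pool>/vm-<ID>-disk-1@backup-day2
$ rbd export-diff --from-snap backup-day1 <pool>/vm-<ID>-disk-1@backup-day2 /backup/vm-<ID>-disk-1.day1-day2.diff
The full export is only needed once; after that, each day only the diff since the previous snapshot has to be exported.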
Hello,
I have two Ceph clusters and I moved one VM disk from one to the other, with the option to delete the source disk.
So now I have one unused disk 0 on the old Ceph pool.
I unset the VM protection and tried to delete the disk.
I get:
Error with cfs lock 'storage-myceph': rbd snap purge...
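To see what is blocking the delete, I assume you can list the snapshots of the old image and check whether they are protected (pool and image names are placeholders for my setup):
$ rbd snap ls <old-pool>/vm-<ID>-disk-0
$ rbd info <old-pool>/vm-<ID>-disk-0@<snapname>
The second command shows "protected: True" when the snapshot still has to be unprotected before it can be purged.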
OK, I managed to re-create the OSD.
$ ceph osd out <ID>
$ service ceph stop osd.<ID>
$ ceph osd crush remove osd.<ID>
$ ceph auth del osd.<ID>
$ ceph osd rm <ID>
Then I deleted the partitions on my disk and ran the command:
ceph osd crush remove osd.<ID>
After that, I created the new OSD in the GUI.
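If it helps someone else: to clean the old partitions before re-creating the OSD, I assume the usual way is to zap the disk first (the device name is just an example):
$ ceph-disk zap /dev/sdX
or, with the newer tooling available from Luminous on:
$ ceph-volume lvm zap /dev/sdX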
Hello,
We upgraded from Proxmox 4.4, and from Hammer to Luminous, successfully.
My Ceph cluster is healthy.
I'm using an SSD for the journal.
I tried to migrate one OSD to BlueStore =>
- out
- stop
- wait for the rebuild (backfill) to finish
- destroy
- create new OSD
=> the OSD was created, but the WAL size was only 1 GB
I...
Thanks for your help.
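Update on the WAL size: if I understand correctly, the BlueStore DB/WAL partition sizes are read from ceph.conf at OSD creation time, so setting something like the sketch below before re-creating the OSD should give bigger partitions (the values are only example sizes in bytes, not a recommendation):
[osd]
# example sizes in bytes: 30 GiB DB, 2 GiB WAL
bluestore_block_db_size = 32212254720
bluestore_block_wal_size = 2147483648
The new sizes only apply to OSDs created after the change, so the destroy / re-create cycle above is still needed.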
Finally, the problem was the SSD. We replaced it and everything is OK now.
We tested it with
# hdparm -t -T /dev/nvme0n1
The result was half that of a working node.
Thanks for your help.
I tried to delete the OSDs and re-create them on node 2.
Not better
iostat shows no difference compared with the working nodes.
On node 2 with hdparm =>
root@GPL-HV3302:/var/log/ceph# hdparm -t -T /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 2472 MB in 1.99 seconds = 1240.35 MB/sec
Timing buffered...
Thanks for the help.
@spirit: yes, all the nodes are the same:
Node1: OK => 50 VMs
Node2: CPU 30% average usage => only 2 VMs
Node3: OK => 20 VMs
Node4: OK => 10 VMs
Node5: OK => 10 VMs
If I move a VM from another node to node 2, the CPU usage of the VM goes up to 30%.
We have a 10G storage network with...