Hey Guys,
We have an LXC container which is about 2TB in size, but backing it up takes 7+ hours.
Meanwhile I have several VMs which are 5-6TB in size yet only take about 20 minutes to back up...
Why are LXCs so slow to back up?
Do you mean it will only decrease the memory on the VM if the parent NODE's memory usage is 80%?
Or do you mean if the VM reaches 80% it will increase the memory allocated?
Hey bud,
So first of all, thank you. That guide was better than the ones I found, since the ISO was actually there :).
The only problem I have: I set up the guest agent and the virtio ballooning drivers etc. I can ping the agent too, so it looks like it's working, but if I check the parent...
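For reference, a rough way to check the agent and the balloon from the host side (101 is a placeholder VMID, not from the post above):

# Check the guest agent responds
qm agent 101 ping

# Query the balloon device through the QEMU monitor
qm monitor 101
qm> info balloon

If info balloon reports the full assigned memory, the driver inside the guest may not be active yet.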
Hey Guys,
So I've been reading up on ballooning a bit, and I'm a bit confused.
We have Windows servers set up which need to have 120GB RAM, but they currently only use about 40GB.
So I'd like to reclaim this RAM, but I can only find guides for setting up ballooning during install. How...
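For what it's worth, ballooning can also be enabled on an existing VM rather than only at install time, by giving it a minimum (balloon) value below its maximum memory. A rough sketch using the figures above (101 is a placeholder VMID, values in MiB):

# Keep the maximum at 120GB but let the balloon shrink the guest down to 40GB
qm set 101 --memory 122880 --balloon 40960

# Confirm the resulting config
qm config 101 | grep -E 'memory|balloon'

As far as I know, the host only starts reclaiming memory automatically once its own RAM usage passes roughly 80%, and the guest still needs the virtio balloon driver installed.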
Hey Guys,
Our backups are failing. Can someone please advise?
We are getting the log output below and I can't get the backup to succeed...
INFO: starting new backup job: vzdump --mailnotification failure --node c6 --compress zstd --mode snapshot --storage backups --all 1 --exclude...
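In case it helps with debugging, running the backup for a single guest by hand with the same options usually surfaces the actual error (100 is a placeholder VMID; the options mirror the job above):

# Back up one guest with the same options as the failing job
vzdump 100 --compress zstd --mode snapshot --storage backups

The full per-guest output then shows exactly where it stops.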
Ok guys, so first of all thanks for all the info. I used a lot of it to correct our cluster's config...
Turns out the major issue was that one of our network ports was reporting as a 100Mb connection. That slowed everything down to NULL!
Luckily we found it. Thank you very much for the advice & info :)
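For anyone else chasing the same thing, the negotiated link speed can be checked per interface (eno1 is a placeholder interface name):

# Show the negotiated speed for one NIC
ethtool eno1 | grep -i speed

# Quick state overview of all interfaces
ip -br link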
Thank you. Ok, so I created a new Ceph pool, and I'm busy migrating everything to the other pool instead of rebuilding & resyncing.
The only (potential) problem now is that in Proxmox I am only seeing the first Ceph pool under Node -> Ceph -> OSDs. Is that normal?
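As far as I understand it, the OSD tab lists OSDs rather than pools, and by default every pool maps onto the same OSDs through its CRUSH rule, so only the Pools panel shows the new pool. The mapping can be checked from the CLI:

# List pools with their size, min_size and crush_rule
ceph osd pool ls detail

# Show the CRUSH rules the pools reference
ceph osd crush rule dump

# OSD usage broken down by the CRUSH hierarchy
ceph osd df tree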
It is spinners for the moment, yes. SSDs this side are CRAZY expensive.
The 3 are intended to be used to start a new Ceph array, but I'll need to move them bit by bit since there is also the issue of space; we're a bit limited a.t.m.
The HW is set up as 3 server nodes, each running an EMC2 machine...
I have removed 3 of the OSDs to start migrating data over, and we're adding 6 more this afternoon so we have enough room to start moving the data.
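For the removals, the sequence I'd expect to be safest (5 is a placeholder OSD id) is to mark the OSD out first so Ceph migrates its data, and only destroy it once everything is active+clean again; with a single-replica pool that migrating copy is the only copy:

# Mark the OSD out so its data is moved elsewhere
ceph osd out 5

# Wait until all PGs are active+clean again
ceph -s

# Then stop the daemon and remove the OSD on its node
systemctl stop ceph-osd@5
pveceph osd destroy 5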
ceph -s
cluster:
id: 248fab2c-bd08-43fb-a562-08144c019785
health: HEALTH_WARN
1 pool(s) have no replicas configured...
I know... Hence my fear...
The problem is that they all go offline and take way too long to come back online. A couple of minutes is one thing, but hours are a problem since clients need to be able to access their data. Which is why I'm looking for a work-around that keeps the data accessible...
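For planned restarts at least, setting noout stops Ceph from rebalancing while OSDs are briefly down; it won't make a single-replica pool's data reachable while its OSD is offline, but it avoids extra data movement on top of it:

# Before planned maintenance: don't mark down OSDs out
ceph osd set noout

# ...restart / maintain the node...

# Afterwards, return to normal behaviour
ceph osd unset noout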
Will send the details through as soon as I'm at the office.
In the meantime though. I do have 1 question.
When I try to change the cluster to 2/1 (to incrementally increase it and eventually end up at 3/2), the entire cluster immediately stops being able to read and write...
It just says all...
Oddly enough, I just noticed that if I set my cluster replication to 1/2 instead of 1/1 then all the PGs become unavailable.
I had to set it to 1/1 since it was the only way to get the cluster usable...
Any idea of what might be causing this behaviour?
I'm very worried about the data's safety...
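If the notation here is size/min_size, then 1/2 means one copy (size=1) but a minimum of two copies required for I/O (min_size=2), and a PG can never satisfy that, which would explain every PG going unavailable; min_size always has to stay at or below size. Going from size 1 to 2 also forces Ceph to backfill a full second copy of everything, which on spinning disks can starve client I/O for a while. A sketch of checking and raising it step by step ("vm-pool" is a placeholder pool name):

# Check what the pool is currently set to
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size

# Raise replication while keeping min_size <= size
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1

# Once backfill finishes and there is room, move on towards 3/2
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2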
Hey Guys,
Ok so I have 3 Nodes in my cluster with about 50TB storage and 20TB used.
When my Ceph starts recovering, like now when we have a failed drive, the load on all the VMs jumps into the 100s and they become close to unusable.
I have configured my Proxmox as follows:
1 IP range for live...
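One thing that may help while recovery is running is throttling backfill so client I/O keeps priority; these are the knobs I'd look at, with conservative placeholder values (newer Ceph releases using the mClock scheduler may partly override them):

# Limit concurrent backfills and recovery ops per OSD
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1

# Add a small pause between recovery ops on HDD-backed OSDs
ceph config set osd osd_recovery_sleep_hdd 0.1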