There are others in this forum who have posted that they do their backups straight from Ceph and that it works, BUT ... that does not bring along the VM config file and so on, so for disaster recovery it would be super problematic. No, Proxmox was set up from the beginning to do its own backups, this is...
So far, I've upgraded 4 hosts ... out of 4, 3 had packages that needed purging
I can understand that maybe on newer systems this isn't the case, but that is probably fairly rare ... there are a lot of us out there who have been using and upgrading Proxmox since version 2.x, in which case, I don't...
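For anyone following along, here's a minimal sketch of how to find and purge those leftovers (standard dpkg/apt commands, but do review the list before purging on your own systems):

    # Packages in dpkg state "rc" were removed but left config files behind.
    dpkg -l | awk '/^rc/ {print $2}'
    # If the list looks sane, purge them:
    dpkg -l | awk '/^rc/ {print $2}' | xargs -r apt-get -y purge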
As long as there's no breakage ... awesome ... thanks for the reply
It'd be good if this information were included within the upgrade instructions so that everyone knows that the packages left over are exactly that, leftovers ...
Thanks again
After upgrading from 4.4 easily and successfully (so "Great Job!" to the Proxmox development team) ... I see the following when removing unnecessary kernels
So, the question is ... can all those be safely autoremoved? I imagine it's leftovers from Jessie that CAN be autoremoved, but don't...
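In case it helps anyone else, a hedged sketch of how I'd verify before committing (a simulated run shows what autoremove would touch; kernel versions will differ per host):

    # Dry run first: -s simulates, nothing is changed.
    apt-get -s autoremove
    # Make sure the running kernel is not on the list:
    uname -r
    # If the simulated list looks safe:
    apt-get autoremove --purge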
So at this point, I'm subscribed to pve-devel, so thanks for that. The bugzilla doesn't seem to show any mention of the vzdump Ceph slowness bug. As far as the Git repo goes, I have looked at that regularly over the past couple of years and it's essentially a different view of the pve-devel mailing list...
Not at all ... I am not assuming that it's easy for others but not for me
Perhaps I'm incorrect but it seems you're getting offended by my comments as your responses seem defensive ... perhaps it's a language barrier issue ... please don't take what I'm saying as accusatory or flippant or...
My bad ... I just said that the current Luminous release is 12.0.3 and it's not; it's at RC status and is at 12.1.0 ... sorry ... maybe Proxmox 5.0 and Luminous CAN come out together after all ...
That would be very nice indeed if that's the case ... of course that's still some time off since Luminous hasn't yet been released (currently at 12.0.3 and needs to be at 12.2 before release) and my guess is that Proxmox 5.0 could very well be released without Luminous, meaning you'd still be...
Please don't misunderstand. I'm not saying VMware is "ahead" in tech but rather in acceptance and would love to see Proxmox gain some serious market acceptance to match that of the big guys ... that's all I was attempting to convey. We continue to use Proxmox because it gives very good...
What Iva-a-an mentioned is exactly right; this issue NEEDS to be sorted. If Spirit is correct and it's the 64KB block issue in vzdump and not QEMU, why hasn't this been fixed yet? The problem has existed now for a very long time ... not a few months or something ... I say this because if it were...
Yes, I'm familiar with the fact that you can create more than one storage config per pool, but thanks for mentioning it anyway.
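For readers finding this later, a rough illustration of what two storage entries over the same pool can look like in /etc/pve/storage.cfg (the storage IDs "ceph-vm" and "ceph-ct" are made-up names, not from this thread):

    rbd: ceph-vm
            pool rbd
            content images
            krbd 0

    rbd: ceph-ct
            pool rbd
            content rootdir
            krbd 1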
Yeah, I still would like to know if there's anything that can be done to mitigate this issue in the future. If the cfq scheduler is better for the local disk I've got...
The default scheduler, "deadline", is in use.
I attempted to use cfq and it seemed to help just a little, but it didn't fix the issue. The issue didn't clear up until the node was rebooted.
What we'd like to find out is if there is a recommended scheduler for doing what we are doing or if we should look...
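For reference, this is roughly how we've been inspecting and switching schedulers at runtime (sketch only; /dev/sda is a placeholder for the local RAID device, and runtime changes don't persist across reboots):

    # The active scheduler is shown in brackets:
    cat /sys/block/sda/queue/scheduler
    # Switch at runtime (non-persistent):
    echo cfq > /sys/block/sda/queue/scheduler
    # To persist, set e.g. elevator=deadline on the kernel command line
    # in /etc/default/grub and run update-grub.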
There are 9 nodes in the cluster. 4 nodes are Ceph-only nodes. The node I am speaking of only runs VMs as a Ceph client. There are no OSDs nor any Ceph configuration on this node.
We had a drive go bad in a local RAID on a Proxmox node that runs most of its VMs on Ceph, with just a handful on that RAID. We replaced the drive and it began to rebuild. The problem is that it also slowed down all the VMs that are on Ceph very badly. The IO delay went through the roof at around...
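A possible mitigation, assuming Linux md software RAID (an assumption on my part; a hardware controller would need its own vendor tool to throttle its rebuild):

    # Check rebuild progress:
    cat /proc/mdstat
    # Cap rebuild speed (KB/s per device) so guest I/O isn't starved:
    echo 10000 > /proc/sys/dev/raid/speed_limit_max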
Are there any updates on this? We're facing the same issues, and as robhost mentioned, VMA files are really what's needed
ultra-slow is definitely the proper description for the backups coming off of the Ceph backend at this point
We've observed a new issue related to this: "Corosync not starting after node reboot"
If an IPv4 gateway isn't available at boot-up, then corosync won't start ... bear in mind, the IPv6 gateway is fine at this time
This seems like a serious issue, as there's no guarantee that nodes will always have an...
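A possible workaround we're looking at (just a sketch, not confirmed as a fix): delay corosync until the network-online target via a systemd drop-in:

    mkdir -p /etc/systemd/system/corosync.service.d
    cat > /etc/systemd/system/corosync.service.d/wait-online.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    systemctl daemon-reload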