Search results

  1. CTCcloud

    VZDump slow on ceph images, RBD export fast

    There are others in this forum who have posted that they do their backups straight from Ceph and that it works, BUT ... that does not bring the VM config file and so on, so for disaster recovery it would be super problematic (see the export sketch after these results). No, Proxmox was set up from the beginning to do its own backups, this is...
  2. CTCcloud

    Proxmox VE 5.0 after upgrade

    So far, I've upgraded 4 hosts ... out of 4, 3 had packages that needed purging. I can understand that maybe on newer systems this isn't the case, but that is probably fairly rare ... there are a lot of us out there who have been using and upgrading Proxmox since version 2.x, in which case I don't...
  3. CTCcloud

    Proxmox VE 5.0 after upgrade

    As long as there's no breakage ... awesome ... thanks for the reply. It'd be good if this information were included within the upgrade instructions so that everyone knows that the packages left over are exactly that, leftovers ... Thanks again
  4. CTCcloud

    Proxmox VE 5.0 after upgrade

    After upgrading from 4.4 easily and successfully (so "Great Job!" to the Proxmox development team)... I see the following when removing unnecessary kernels. So, the question is ... can all those be safely autoremoved? I imagine it's leftovers from Jessie that CAN be autoremoved, but don't...
  5. CTCcloud

    VZDump slow on ceph images, RBD export fast

    So at this point, I'm subscribed to pve-devel, so thanks for that. The bugzilla doesn't seem to show any mention of the vzdump Ceph slowness bug. As for the GIT repo, I have looked at that regularly over the past couple of years and it's essentially a different view of the pve-devel mailing list...
  6. CTCcloud

    VZDump slow on ceph images, RBD export fast

    Not at all ... I am not assuming that it's easy for others but not for me. Perhaps I'm incorrect, but it seems you're getting offended by my comments, as your responses seem defensive ... perhaps it's a language barrier issue ... please don't take what I'm saying as accusatory or flippant or...
  7. CTCcloud

    VZDump slow on ceph images, RBD export fast

    My bad ... I just said that the current Luminous release is 12.0.3 and it's not; it's at RC status and is at 12.1.0 ... sorry ... maybe Proxmox 5.0 and Luminous CAN come out together after all ...
  8. CTCcloud

    VZDump slow on ceph images, RBD export fast

    That would be very nice indeed if that's the case ... of course that's still some time off, since Luminous hasn't yet been released (currently at 12.0.3 and needs to be at 12.2 before release), and my guess is that Proxmox 5.0 could very well be released without Luminous, meaning you'd still be...
  9. CTCcloud

    VZDump slow on ceph images, RBD export fast

    Please don't misunderstand. I'm not saying VMware is "ahead" in tech but rather in acceptance and would love to see Proxmox gain some serious market acceptance to match that of the big guys ... that's all I was attempting to convey. We continue to use Proxmox because it gives very good...
  10. CTCcloud

    VZDump slow on ceph images, RBD export fast

    What Iva-a-an mentioned is exactly right: this issue NEEDS to be sorted. If Spirit is correct and it's the 64KB block issue in vzdump and not QEMU, why hasn't this been fixed yet? The problem has existed now for a very long time ... not a few months or something ... I say this because if it were...
  11. CTCcloud

    RAID drive causing Ceph slowness

    Yes, I'm familiar with the fact that more than one storage config can be created per pool, but thanks for mentioning it anyway. Yeah, I still would like to know if there's anything that can be done to mitigate this issue in the future. If the cfq scheduler is better for the local disk I've got...
  12. CTCcloud

    RAID drive causing Ceph slowness

    Yes, we use krbd. Our environment is 95%+ KVM, but we do have a couple of containers on Ceph, which means krbd is required.
  13. CTCcloud

    RAID drive causing Ceph slowness

    The default, "deadline", is in use. I attempted to use cfq and it seemed to help just a little, but it didn't fix the issue (see the scheduler sketch after these results). The issue didn't go away until the node was rebooted. What we'd like to find out is if there is a recommended scheduler for doing what we are doing or if we should look...
  14. CTCcloud

    RAID drive causing Ceph slowness

    There are 9 nodes in the cluster. 4 nodes are Ceph-only nodes. The node I am speaking of only runs VMs as a Ceph client. There are no OSDs or any Ceph configuration on this node.
  15. CTCcloud

    RAID drive causing Ceph slowness

    We had a drive go bad on a local RAID of a Proxmox node that runs most of its VMs on Ceph, with just a handful on that RAID. We replaced the drive and it began to rebuild. The problem is that the rebuild also badly slowed down all the VMs that are on Ceph. The IO delay went through the roof at around...
  16. CTCcloud

    VZDump slow on ceph images, RBD export fast

    Are there updates on this? We're facing the same issues and, as robhost mentioned, VMA files are really what's needed. Ultra-slow is definitely the proper description for the backups at this point coming off of the Ceph backend.
  17. CTCcloud

    Corosync Service won't start after node reboot

    We've observed a new issue with this "Corosync not starting after node reboot": if an IPv4 gateway isn't available upon boot-up, then corosync won't start ... bear in mind, the IPv6 gateway is fine at this time. This seems like a serious issue, as there's no guarantee that nodes will always have an...
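
The first "VZDump slow on ceph images, RBD export fast" excerpt above notes that backing up straight from Ceph is fast but leaves the VM config file behind. Below is a minimal sketch of that straight-from-Ceph approach (not the vzdump code path); the VMID, pool, image name, and backup target are hypothetical, and the only Proxmox-specific assumption is the standard config location /etc/pve/qemu-server/<vmid>.conf:

    #!/usr/bin/env python3
    # Minimal sketch: export a VM's RBD disk straight from Ceph and copy
    # its Proxmox config alongside it, so the backup is more than a bare
    # disk image. VMID, pool, image, and destination are hypothetical.

    import shutil
    import subprocess
    from pathlib import Path

    VMID = "101"                       # hypothetical VM ID
    POOL = "rbd"                       # hypothetical Ceph pool
    IMAGE = f"vm-{VMID}-disk-0"        # assumed Proxmox-style image name
    DEST = Path("/mnt/backup") / VMID  # hypothetical backup target

    DEST.mkdir(parents=True, exist_ok=True)

    # Fast path: raw image export via the rbd CLI.
    subprocess.run(
        ["rbd", "export", f"{POOL}/{IMAGE}", str(DEST / f"{IMAGE}.raw")],
        check=True,
    )

    # The VM definition lives outside the Ceph pool; without it, disaster
    # recovery starts from nothing but a raw disk image.
    shutil.copy2(Path("/etc/pve/qemu-server") / f"{VMID}.conf",
                 DEST / f"{VMID}.conf")

Restoring from this still means re-importing the raw image and re-creating the VM from the copied config by hand, which is exactly the gap the posters want vzdump to close.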
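
The "RAID drive causing Ceph slowness" excerpts above go back and forth between the deadline and cfq I/O schedulers for the rebuilding local RAID disk. The sketch below only illustrates the sysfs mechanism for checking and switching a device's scheduler; the device name is hypothetical, which schedulers are offered depends on the running kernel, and it is not a recommendation for either one:

    #!/usr/bin/env python3
    # Minimal sketch: inspect and switch the I/O scheduler of a local
    # block device via sysfs. DEVICE is hypothetical; the set of offered
    # schedulers depends on the running kernel. Requires root.

    from pathlib import Path

    DEVICE = "sda"   # hypothetical local RAID block device
    TARGET = "cfq"   # scheduler to try while the array rebuilds

    def scheduler_path(dev):
        return Path("/sys/block") / dev / "queue" / "scheduler"

    def current_schedulers(dev):
        # Contents look like "noop deadline [cfq]"; brackets mark the active one.
        return scheduler_path(dev).read_text().strip()

    def set_scheduler(dev, name):
        offered = current_schedulers(dev).replace("[", "").replace("]", "").split()
        if name not in offered:
            raise ValueError(f"{name!r} not offered by this kernel: {offered}")
        scheduler_path(dev).write_text(name)

    print("before:", current_schedulers(DEVICE))
    set_scheduler(DEVICE, TARGET)
    print("after: ", current_schedulers(DEVICE))

Writing to the sysfs file takes effect immediately but does not persist across reboots, so any change that actually helps would still need to be made permanent (for example via kernel boot parameters or udev rules).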
