I've actually read just about everything that Mr. Han has written so far. Many of his tweaks and hints are what led to the performance we can get out of ceph today.
Allow me to clarify the issue further:
When vzdump runs a backup of a running VM, it uses qemu directly, and through it, the...
Yeah, you got it. I have multiple storage options that I've been using for testing. The specific hardware I listed in the original post is what is running the cloud, or at least what was intended for that purpose. In the process of testing ceph, I have also tested various other possible...
Yes, jumbo frames are configured and in use on all devices.
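If it helps anyone else checking the same thing, here's a sketch of one way to verify jumbo frames end to end (the interface name and peer address are placeholders, not my actual hosts):

```shell
# Sketch of one way to verify jumbo frames end to end.
# IFACE and PEER are placeholders -- substitute the storage NIC and a ceph node.
IFACE="${IFACE:-eth0}"
PEER="${PEER:-10.0.0.2}"

# 1) The interface itself should report mtu 9000.
ip -o link show "$IFACE" | grep -o 'mtu [0-9]*'

# 2) Send an unfragmentable 9000-byte frame (8972 bytes of payload + 28 bytes
#    of IP/ICMP header). If any switch or NIC in the path is not passing
#    jumbo frames, this ping fails outright instead of silently fragmenting.
ping -c 3 -M do -s 8972 "$PEER"
```

The `-M do` flag is what makes this a real test: without it, oversized pings get fragmented and "succeed" even through a 1500-MTU hop.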
With flat-file testing, I can hit the limit of the test NFS server's storage write IO (about 255 MB/s on a good day) before I hit the maximum speed of the network (12.5 GB/s on the ceph nodes and 5 GB/s on the proxmox nodes). I've seen as...
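In case anyone wants to reproduce the flat-file numbers, here's a minimal sketch of a sequential-write test along those lines (the mount path is a placeholder, substitute your own NFS or ceph mount):

```shell
# Minimal sequential-write test of the kind described above.
# TARGET is a placeholder: point it at the NFS or ceph mount being measured.
TARGET="${TARGET:-/tmp/throughput-test}"
mkdir -p "$TARGET"

# conv=fdatasync makes dd flush to stable storage before reporting, so the
# MB/s figure reflects the storage/network path, not the local page cache.
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=256 conv=fdatasync
rm -f "$TARGET/testfile"
```

Without `conv=fdatasync` (or `oflag=direct`) you mostly measure RAM, which is why naive dd runs report numbers far above what the storage can sustain.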
mir,
I saw this post this morning and have been testing with this option. It "appears" that vzdump ignores it when running against KVM VMs, as opposed to VZ containers like in your example.
This is the new setup with 10GbE and ceph -> 2 -> 10GbE NFS:
root@Cloud01:/etc# vzdump 1062 --remove...
OK, let's start off with some background. We have been using Proxmox since v1.x, and quite successfully, I might add. We've been happy with the performance, usability, features, the whole nine yards.
Our current "production" setup:
11 Nodes total
6x Nodes = 2x16-Core AMD Opteron w/ 256GB of memory...