I thought I'd share some further results, as they're interesting and may be of use to someone else. These are the results of the benchmark running over a 40GbE switched network (OM3 fibre). This is the same equipment as my post on 15 Oct, with the network moved from 10GbE to 40GbE...
But then how does a customer restore their own backup through the UI (or WHMCS module etc.)? Unless I've missed something (and I'm new here, so that's quite possible), your restore process wouldn't know about our "magic", so it couldn't restore from it. I thought a script to unpack and then a...
Hi Wolfgang
The use case I was thinking of was still around dedup of the backups. With a post-backup hook we can unpack the VMA into raw images that dedup very well. If we could then have a pre-restore hook, we could rebuild the VMA so the proxmox processes just see a VMA being written out...
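For anyone following along, here's a rough sketch of the unpack step I mean, using the 'vma' tool that ships with PVE ('vma extract' unpacks an archive into the VM config plus raw disk images). The paths are hypothetical, and the rebuild step is only outlined, since the exact 'vma create' arguments depend on the drive names in the extracted config:
[CODE=python]
#!/usr/bin/env python3
# Post-backup unpack sketch: turn a VMA archive into raw images that
# dedup well on the backing store. Paths here are hypothetical.
import subprocess

archive = "/backups/vzdump-qemu-100.vma"
workdir = "/backups/vzdump-qemu-100.extracted"

# 'vma extract' writes the VM config and one raw image per drive
subprocess.run(["vma", "extract", archive, workdir], check=True)

# The missing piece is the pre-restore hook: before qmrestore runs,
# rebuild the archive with 'vma create' from the extracted config and
# images, so the restore just sees a normal VMA.
[/CODE]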
Hi
Having a hook script on vzdump is great as it offers heaps of flexibility for post-processing the backup without modifying the code. Is there a way to have a hook script on qmrestore so we can pre-process prior to performing the restore? And a hook that's called when a backup is deleted...
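In case it helps anyone else looking at this, here's a minimal sketch of what a vzdump hookscript can look like, assuming the documented interface (the phase passed as the first argument, details in environment variables such as TARFILE/TARGET and DUMPDIR) and registration via 'script: /usr/local/bin/backup-hook.py' in /etc/vzdump.conf:
[CODE=python]
#!/usr/bin/env python3
# Minimal vzdump hookscript sketch. vzdump invokes the script at each
# phase of the job; there's currently no equivalent hook on qmrestore,
# which is what I'm asking about above.
import os
import sys

phase = sys.argv[1]

if phase == "backup-end":
    # TARFILE points at the finished archive (TARGET on newer releases)
    archive = os.environ.get("TARFILE") or os.environ.get("TARGET")
    print(f"backup finished: {archive}")
elif phase == "job-end":
    print("backup job finished")
[/CODE]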
I just ran a comparison with the benchmark running on just 1 node, and then the benchmark running on all 4 nodes to simulate heavy workloads across the entire cluster. Not only did the average IOPS drop as you'd expect, but the average latency jumped due to queueing.
1 x bench over 10GbE
Max...
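The latency jump is really just queueing arithmetic: by Little's Law, mean latency = in-flight ops / throughput, so if four nodes keep the same per-node iodepth but the cluster can't push four times the IOPS, latency has to rise. A toy illustration with made-up numbers (not our measured results):
[CODE=python]
# Little's Law: W = L / lambda
# mean latency = outstanding ops / throughput (ops per second)
def mean_latency_ms(nodes, iodepth_per_node, total_iops):
    in_flight = nodes * iodepth_per_node
    return in_flight / total_iops * 1000.0

# Hypothetical numbers: if 4 nodes at iodepth 16 achieve no more total
# IOPS than 1 node did, the average latency quadruples.
print(mean_latency_ms(1, 16, 80_000))  # 0.2 ms
print(mean_latency_ms(4, 16, 80_000))  # 0.8 ms
[/CODE]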
We have some spare 40GbE switch ports so I've ordered some NICs for our servers. Early next week I should have a benchmark using 40G to compare to the 10G benchmark from the other day. Should be interesting, as I was maxing out the public Ceph network, as Alwin thought I would be.
David
...
It's interesting that with 4 nodes on a 10GbE network our numbers are significantly higher than the ones in the benchmark report. If the network is maxed out maybe our 10 Gigs is quicker than your 10 Gigs :)
I don't have our normal monitoring on this gear yet. I'll get that in place tomorrow...
Here's another benchmark result for the records. This is a 4 node test platform, using 4 x 2TB Intel P4510 NVMe drives per node. Each drive is configured with 4 OSDs and the pool has 3 copies of the data. It's configured with 4096 PGs based on the results of the PG calculator, but I'm happy to...
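For anyone checking the arithmetic, the usual PG calculator rule of thumb (roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two) lands on the same number for this layout:
[CODE=python]
# Rough reproduction of the PG calculator's arithmetic, assuming the
# usual target of ~100 PGs per OSD.
def pg_count(num_osds, pool_size, target_per_osd=100):
    raw = num_osds * target_per_osd / pool_size
    pgs = 1
    while pgs < raw:  # round up to the next power of two
        pgs *= 2
    return pgs

# 4 nodes x 4 NVMe drives x 4 OSDs per drive = 64 OSDs, 3 replicas
print(pg_count(64, 3))  # -> 4096
[/CODE]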
Hi
Our evaluation of Proxmox & Ceph is going well, so we looked at how we'd migrate our existing customer base over to it. We're coming from a KVM-based platform using VirtIO, so all looked good. The only issue is that the NIC is presented at a different PCI address, so Windows clients see it...
No worries Damon, I've been benchmarking a new Ceph setup today so I've been in and out of those settings for the last few hours. Glad I could help.
David
...
This is resolved. I went looking for anything at a system level that could block IO (rather than it being a Ceph problem). We had created and then deleted an NFS storage target for backups. The storage isn't visible in the UI, but it was still mounted on all the PVE cluster nodes for some...
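If anyone hits the same thing, a quick way to spot it is to list any NFS mounts still present on each node. Something like this (it just reads /proc/mounts) would have shown the leftover mount straight away:
[CODE=python]
#!/usr/bin/env python3
# List NFS mounts still present on this node, to spot storages that
# were removed from the UI but never unmounted.
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype = line.split()[:3]
        if fstype.startswith("nfs"):
            print(f"{fstype}: {device} mounted on {mountpoint}")
[/CODE]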
Hi
I tried to bring up a test platform for PVE and Ceph today. It's a 4 node cluster with NVMe drives for data storage. All was fine until I tried to create the OSDs. Via the web UI I picked the first data drive on a couple of the cluster nodes and selected them as OSDs. 8 hours later and...
Thanks Thomas, I hadn't seen the '--osds-per-device' option to ceph-volume. That simplifies things a lot. We'll start running this up on Friday. Once we're happy with the configuration we'll contribute back to the ceph benchmark thread to share our results.
David
...
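For the archives, this is roughly how we'll drive it. The device paths are hypothetical, but the flag is the one Thomas pointed out ('ceph-volume lvm batch --osds-per-device'):
[CODE=python]
#!/usr/bin/env python3
# Sketch: split each NVMe drive into 4 OSDs in one ceph-volume call.
# Device paths are hypothetical -- adjust per node.
import subprocess

devices = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

subprocess.run(
    ["ceph-volume", "lvm", "batch", "--osds-per-device", "4", *devices],
    check=True,
)
[/CODE]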
Hi
We'll be evaluating Proxmox & Ceph over the coming weeks and want to ensure we have a good starting point for benchmarking. We've been running a hyperconverged all-flash platform for about 7 years, but it's not based on Ceph. We're reading heaps trying to understand the best deployment...
Hi Wolfgang,
So in this case, if we ran a job once the backup file had been written that copied the resulting file to a tmp file on the ZFS target, deleted the original, and renamed the tmp file back to the original name, we'd get in-order writes. It's not ideal but could give us the dedup...
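Something like this is all I have in mind; the path is hypothetical, and os.replace covers the delete-plus-rename in one atomic step:
[CODE=python]
#!/usr/bin/env python3
# Re-write the finished archive sequentially so the ZFS target sees
# in-order writes. The backup path is hypothetical.
import os
import shutil

backup = "/mnt/zfs-backup/vzdump-qemu-100.vma"
tmp = backup + ".tmp"

shutil.copyfile(backup, tmp)  # sequential read -> sequential write
os.replace(tmp, backup)       # atomically swap the copy into place
[/CODE]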
Hi
We're about to run up and test a very similar environment later in the week. We looked at dedup for backups on a different hosting platform a while back using VDO on Linux. We found that even with small changes to the snapshot, any form of compression of the file changed it so...