So, if I want to migrate over to larger drives, what would the best-practice approach to that be? Still attempt to keep the same amount of storage provided per host to remain balanced (even if it affects performance a little)? Or is it more important that the number of OSDs per host stays the same? (I...
Ok - understood. I think a solution in my case will be to put Proxmox on M.2 SSDs on a PCIe adapter and boot from that, freeing up six 2.5" drives on every server in the cluster. I think that's what I will do. Thank you!
Dumb newb question I'm sure, but when creating an OSD with pveceph, will it automatically sort out the CRUSH map stuff when using different sized OSDs or a different number of OSDs per host?
Example:
I've got 4 hosts that have 8 2.5" bays, two I use in RAID1 for Proxmox boot, the other 6 I am...
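For anyone finding this later: as I understand it, pveceph (via ceph-volume underneath) assigns each new OSD a CRUSH weight proportional to the disk's capacity (roughly 1.0 per TiB), so mixed sizes and differing OSD counts per host get balanced by weight automatically. A rough sketch of what I mean (device names are hypothetical, and on older PVE versions the subcommand is `pveceph createosd` instead):

```shell
# Hypothetical device names. The CRUSH weight is derived from each
# disk's capacity, so different sizes/counts per host balance out.
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd

# Verify the per-OSD and per-host weights CRUSH will use:
ceph osd tree

# A weight can still be adjusted by hand afterwards if needed, e.g.:
# ceph osd crush reweight osd.4 1.819
```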
Frank, thank you for your script. It is working well for me and allows for fast daily backups; because of this (and because Ceph can do snapshots), I'm looking to make Ceph my primary storage instead of iSCSI.
Do you foresee the ability to do continuous/synthetic fulls (never have to run a 'full'...
Thanks Alex. I may have mis-stated what I had read. I think the primary OSD for every object was on an SSD, I'm not sure of the exact CRUSH configuration details, but the effect was that all writes went to the SSDs and were then replicated to the spinners (with SSD WAL), and all reads went to the SSD...
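From what I've read, on Luminous+ with device classes, the "hybrid" rule people describe (primary copy on SSD so reads are served from flash, remaining replicas on HDD) looks roughly like this. This is a sketch from my back-reading, not something I've tested; the rule id and names are made up:

```
rule hybrid {
    id 5
    type replicated
    min_size 1
    max_size 10
    # first replica (the primary, which serves reads) on an SSD host
    step take default class ssd
    step chooseleaf firstn 1 type host
    step emit
    # remaining replicas on HDD hosts
    step take default class hdd
    step chooseleaf firstn -1 type host
    step emit
}
```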
To add for posterity, in case anyone else is googling this topic down the road: here is probably the single biggest risk, from some back-reading I've done on the ceph-users list (credit goes to list member Wido for the explanation; I'm restating it in my own words).
In a 2/1 scenario even with...
Thanks PigLover. I had been thinking of a 4-2 EC pool for RBD. I had heard it got a bad rap with a cache tier in front of it, but everyone's use case is a bit different. I have thought it over and I agree, I think even with SSDs, having to do (in 4-2) 6 reads to write a stripe (to 6 OSDs) for one...
I agree, that's the standard calculation. But RAID10 seems to be acceptable for most people, as the chances of a fault are statistically quite low as long as you aren't using enormous drives (or consumer drives with a higher BER). I'm trying to understand (as I somewhat but obviously not fully...
Is it common for an object to be not valid? If I compare a ceph cluster on relatively reliable (dual Power supply, ECC RAM, UPS backed) servers with redundant switches/links, and enterprise grade MLC SSDs in the 400-500GB range, is my risk of data loss (roughly speaking) with Size=2 Minsize=1...
I do see posts saying Size=2 minsize=1 is a bad idea. But some of the "worst" reasons this is/was a bad idea (data inconsistency when there are two mismatched copies of data, because a rebalance started or writes happened and then an OSD came back to life, or something like that...) maybe seem to be...
Is there a way to hack in support for direct-write EC pools in Ceph? I think the barrier presently is that we can't specify the data pool (since direct write EC pools still need to use a replicated pool for metadata). I feel for smaller networks this might help with throughput (halving the...
It's probably because the metadata for EC images still needs to be in a replicated pool and Proxmox will need special hooks for this when creating images within the EC RBD pool (just a guess though).
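For reference, plain `rbd` has been able to do this since Luminous via `--data-pool`; as far as I can tell it's only the Proxmox tooling that lacks the hook. Something like the following (pool and image names are made up for illustration):

```shell
# Replicated pool for RBD metadata, EC pool for the actual data.
ceph osd pool create rbd_meta 64 64 replicated
ceph osd pool create rbd_ec 64 64 erasure

# RBD on EC requires overwrite support (BlueStore OSDs only):
ceph osd pool set rbd_ec allow_ec_overwrites true

# Image metadata lives in the replicated pool, data in the EC pool:
rbd create --size 100G --data-pool rbd_ec rbd_meta/vm-disk-test
```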
I came across this old post while searching for answers to the same issue, so in case someone else finds it the same way, I thought I'd add the solution (though I'm sure it's too late to help OP, lol).
The solution it turns out is that you need to blacklist your...
I'm pretty much finding the same thing. I had set up Proxmox a couple of versions ago with no issues with Corosync (other than forgetting to add the cluster members to the /etc/hosts file on each node). But the only way I've gotten it to work properly on the latest version is to manually hack everything. I...
I'm looking to do the same thing. I'd like to be able to access my cluster without always needing to VPN in, and would like to use client certs. I've done this before to protect less-than-secure web applications, using Nginx as a reverse proxy. The only problem is, Proxmox 5.1 and an Nginx reverse proxy...
Figured out the problem. First I had to use the Xenmigrate python script to go from an XVA file to a raw img file. Then I converted it to a qcow2 with qemu-img convert, and imported it into Proxmox with qm importdisk. It booted but bluescreened. I'll go into the vm, remove...
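In case it helps anyone, the pipeline I described looks roughly like this (filenames, VMID, and storage name are just examples, and the xenmigrate.py invocation may differ depending on the script version you grab):

```shell
# 1) XVA -> raw (xenmigrate.py flags are illustrative; check your copy):
python xenmigrate.py --convert myvm.xva myvm.img

# 2) raw -> qcow2:
qemu-img convert -f raw -O qcow2 myvm.img myvm.qcow2

# 3) Attach the disk to an existing Proxmox VM
#    (VMID 100 and storage 'local-lvm' are examples):
qm importdisk 100 myvm.qcow2 local-lvm
```

On the bluescreen: the usual advice I've seen for migrated Windows guests is to attach the disk as IDE/SATA first so Windows can boot, install the VirtIO drivers inside the guest, and only then switch the disk bus to VirtIO.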
I'm in the same boat - moving from XenServer to Proxmox. I've migrated my XenServer VMs to NFS (which Proxmox can also see). However, when I try to do the qm import of the XenServer VHD files, Proxmox won't boot them. I've tried multiple Windows and Linux VHDs. Anyone run into this and know...
It seems for large volumes of backups there are two options: an in-guest agent that does continuous synthetic-full incrementals (Cloudberry does this with an AWS S3 backend), or doing the backups at the SAN level and triggering quiescing in the VM for consistency. But it looks like in-guest backup is...