I'm looking for a guide on how to copy from an existing pool to a new pool.
1. If the source is KVM/LXC images?
2. If the source is CephFS? CephFS + EC?
The Googles have not provided any solid directions, just old threads (sounds like cppool is out of vogue), and I'm sure it's something the CEPH...
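For anyone landing here later, a minimal sketch of the kind of commands involved, assuming an RBD image called vm-100-disk-0 moving from 'oldpool' to 'newpool' (all names are placeholders; rbd migration needs Nautilus or newer, and qm/pct want a recent PVE):

# live-migrate a single RBD image between pools
rbd migration prepare oldpool/vm-100-disk-0 newpool/vm-100-disk-0
rbd migration execute newpool/vm-100-disk-0
rbd migration commit newpool/vm-100-disk-0

# or let Proxmox copy the disk at the storage layer
qm move-disk 100 scsi0 new-rbd-storage        # KVM guest disk
pct move-volume 101 rootfs new-rbd-storage    # LXC volume

CephFS is a different story since its data pool is tied to the filesystem, so the sketch above only covers the RBD side.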
Stealing some of the pieces from another thread - just went through this myself, figured I'd share what worked 100% - about 20 drives completed so far, zero issues.
IMPORTANT: this assumes DB/WAL are on a single physical drive. If they aren't, you'll have to consolidate them down first, then...
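For context, a hedged sketch of the generic one-drive-at-a-time cycle this kind of procedure is built around (osd.12 and /dev/sdX are placeholders):

ceph osd out 12                      # drain the OSD
ceph -s                              # wait for backfill to finish / HEALTH_OK
systemctl stop ceph-osd@12
pveceph osd destroy 12 --cleanup     # remove the OSD and wipe its volumes
pveceph osd create /dev/sdX          # re-create it on the rebuilt device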
Curious where you ended up on this @adriano_da_silva - I'm considering migrating one of my clusters from NVMe DB/WAL + spinners to bcache NVMe + spinners. Curious what six months of experience has done to your views.
I realize this is a late answer, but I ran into this thread as I prepare to re-pool my metadata.
Short answer: Each pool must be "enabled" for specific applications. When you create the pool via the UI or command line, it defaults to RBD. You can see which applications are enabled for your pool...
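A quick sketch, with 'mypool' as a placeholder pool name:

ceph osd pool application get mypool            # show which applications are enabled
ceph osd pool application enable mypool cephfs  # or rbd / rgw, as appropriate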
Fantastic, thank you - will give it a shot - The option I suggested above seems to work 80% of the time, but leaves at least one node with no started services. :(
In larger clusters, it can take quite a few seconds until all the OSDs are happy and CephFS is able to mount. Just restarted one cluster today (power loss) and noticed that while all the KVMs started fine, any LXC that used a CephFS bind-mount wouldn't start until CephFS was ready. (got unable...
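A hedged workaround sketch, assuming the CephFS storage mounts at /mnt/pve/cephfs and the container is CT 101 (both placeholders): a pre-start hookscript that simply waits for the mount before the LXC is allowed to start.

#!/bin/bash
# /var/lib/vz/snippets/wait-cephfs.sh  (hypothetical name)
vmid="$1"; phase="$2"
if [ "$phase" = "pre-start" ]; then
    # block the container start until the CephFS mountpoint is actually mounted
    until mountpoint -q /mnt/pve/cephfs; do
        echo "CT $vmid waiting for cephfs..."
        sleep 5
    done
fi
exit 0

Attach it with: pct set 101 --hookscript local:snippets/wait-cephfs.sh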
Same EXACT problem here - never ran into this before, but it's been a few months since I had a drive fail. Somewhere along the line this came in as a default. Been PULLING MY HAIR OUT trying to figure out what was going on. Stumbled across the values in the "CONFIGURATION DATABASE" section...
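For anyone else hunting for the same thing, a quick sketch - the option name here is just a placeholder, use whatever shows up in your dump:

ceph config dump                      # list every override in the configuration database
ceph config rm osd osd_max_backfills  # drop an override so the built-in default applies again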
Just to make sure I understand this correctly:
If I remove all the HA-configured LXC/KVM settings (I have DNS servers, video recorders, etc.) and make them standalone, no-failover configs, it won't fence if Corosync gets unhappy? (That doesn't seem to ring true to me in a shared-storage world.)
Thanks, gave it some thought, and changed the priorities a bit - we'll see if it does better than it has in the past. (Also has me thinking about things that could lower the latency between nodes, like MTU on the ring interfaces)
It would be nice to gather raw data on keepalives across all...
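Until something better exists, a couple of quick checks (the address is a placeholder on one of the ring subnets):

corosync-cfgtool -s                   # per-link knet status as corosync sees it
ping -M do -s 8972 -c 5 198.18.50.2   # confirm jumbo frames actually pass (9000 MTU minus 28 bytes of headers)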
Sorry to necro this thread, but it's one of *many* that come up with this title, and it goes directly to the core issue.
Proxmox needs a configurable option for fencing behavior. Rebooting an entire cluster upon the loss of a networking element is the sledgehammer; we need the scalpel.
Thank you, fantastic information, already used it to clean things up a bit.
Not if PMX thinks we need to reboot. So far, none of the failures have taken down CEPH; it's PMX/HA that gets offended. (Ironic, because corosync/totem has (4) rings while CEPH sits on a single VLAN, but I digress.)
The...
That makes sense, thank you, I didn't understand the corosync/totem/cluster-manager inter-op. (Is this written up anywhere I can digest?)
I'll drop the timeouts back to default values. Since I know how to cause the meltdown, it will be easy to test the results of the change.
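For reference, a hedged sketch of what that edit looks like in /etc/pve/corosync.conf - the token value below is only an example of a raised setting being removed, and config_version has to be bumped so the change propagates:

totem {
  cluster_name: mycluster
  # config_version must be incremented on every edit
  config_version: 12
  # token: 10000   <- the raised timeout, removed so the corosync default applies
  version: 2
}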
How would you...
RE: pmx2 - Good catch - no, that wasn't intentional; fixing it already. From a network standpoint, 198.18.50-53.xxx can all ping each other, so yes, the network pieces were all operational. Based on the config, however, it looks like pmx2 wasn't on ring2 correctly. That in and of itself shouldn't...
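For reference, this is the nodelist entry where that shows up - a hedged sketch of what pmx2 should look like with all four rings populated (addresses are placeholders on the 198.18.50-53 subnets):

node {
  name: pmx2
  nodeid: 5
  quorum_votes: 1
  ring0_addr: 198.18.50.12
  ring1_addr: 198.18.51.12
  ring2_addr: 198.18.52.12
  ring3_addr: 198.18.53.12
}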
Here's what the same event looked like from pmx4 (node 3)
Oct 03 23:17:58 pmx4 corosync[6951]: [TOTEM ] Token has not been received in 4687 ms
Oct 03 23:17:58 pmx4 corosync[6951]: [KNET ] link: host: 6 link: 0 is down
Oct 03 23:17:58 pmx4 corosync[6951]: [KNET ] link: host: 6 link: 1 is...
For reference, from a topology standpoint, pmx1/2/3/4/5 (nodes 6,5,4,3,2) sit in the same rack, whereas pmx6/7 (nodes 1,7) sit in another room, connected to different switches with shared infra between.
root@pmx1:~# pveversion
pve-manager/7.2-11/b76d3178 (running kernel: 5.15.39-3-pve)