Did not fix the problem.
```
CHART:  net_packets.vmbr0
ALARM:  inbound packets dropped ratio = 0.17%
        (the ratio of inbound dropped packets vs the total number of received packets of the network interface, during the last 10 minutes)
FAMILY: vmbr0
```
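For what it's worth, the counters that alert is derived from can be read directly; the bridge and NIC names below are just examples from this lab:
```
# read the drop counters the alert is based on (interface names are examples)
ip -s link show vmbr0              # RX line: packets, bytes, errors, dropped, ...
ethtool -S eno1 | grep -i drop     # per-NIC hardware counters on the underlying port
```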
Will look later today.
Curious if it's the actual mount statement that's the problem. For example, once it's mounted, it lists all (5) hosts. Could it be that any single host missing prevents the mount?
198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/ 50026520576...
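For reference, this is roughly how the kernel client mounts it (the mount point and secret file here are just placeholders); as I understand it, only one reachable monitor from the list should be needed:
```
# kernel CephFS mount listing all monitors; the client should only need one
# of them to be reachable (name/secretfile/mount point are placeholders)
mount -t ceph 198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret
```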
I'll be rebooting the cluster again today (it is a lab after all) but here's the "current" status with all (5) nodes up and everything happy.
```
~# ceph fs status
cephfs - 7 clients
======
RANK  STATE   MDS  ACTIVITY    DNS    INOS   DIRS  CAPS
 0    active  mx4  Reqs: 0 /s  83.2k  48.0k  ...
```
5-node deployment in the lab, noticed something odd.
CephFS fails to mount on any node until *ALL* nodes are up, i.e. with 4 of 5 machines up, cephfs still fails to mount.
Given the pool config of cephfs_data and cephfs_metadata (both 3/2 replicated) I don't understand why this would be the case.
In theory...
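A few things I'd check, on the assumption that the mount depends on monitor quorum and an active MDS rather than on the pool replication itself:
```
# with 3/2 replicated pools, 4 of 5 nodes should be plenty for the data path;
# what the mount actually needs is monitor quorum plus an active MDS
ceph quorum_status --format json-pretty | grep -A6 quorum_names
ceph mds stat
ceph osd pool ls detail | grep cephfs
```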
Nice work - seems like something that would be great to integrate into the PMX GUI - an "evacuate node" right-click menu option.
Spacing needs a little help for larger hosts, this is my lab:
```
         | Memory (GB)        |
hostname | total  free  used  | CPU
pmx1 ...
```
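In case it's useful, a generic way to keep the columns lined up regardless of hostname length is fixed printf field widths (the widths and sample values below are made up, not taken from the actual script):
```
# fixed-width columns survive long hostnames; widths/values are illustrative
printf '%-20s | %7s %7s %7s | %5s\n'         "hostname" "total" "free" "used" "CPU"
printf '%-20s | %7.1f %7.1f %7.1f | %4s%%\n' "pmx1" 256.0 180.3 75.7 12
```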
.... so I tried it again today, and **magic** -- it created the mount matching the name under /mnt/pve and mounted it on all clients.
Thanks pmx team - well done.
Honestly, as much as I love/use proxmox, for the scale you're talking about, openstack might be a better fit - lots of multi-site tools available for that env, today.. just get your checkbook out..
So this feature appears to be functional (or mostly so) in 7.x - you can create a secondary cephfs, it creates the data/meta pools, finds open MDS servers, and starts.. only it's not mounted anywhere?
I would have assumed it was created/mounted under /mnt/pve - but no dice. I'm guessing doing...
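If I understand the storage layer right, the mount under /mnt/pve only appears once the filesystem is added as a storage entry; something along these lines (the storage ID is just an example, and --fs-name is my assumption for selecting the non-default filesystem in 7.x):
```
# add the second cephfs as a PVE storage so it gets mounted at /mnt/pve/<id>
# (storage id "cephfs2" is an example; --fs-name is my assumption for
#  picking the non-default filesystem in 7.x)
pvesm add cephfs cephfs2 --content backup,iso --fs-name cephfs2
pvesm status | grep cephfs2
```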
The root problem here appears to have been that I *ever* overrode netnames, because ever after it wants to use the existing or database names, as the 99-default.link file indicates:
```
NamePolicy=keep kernel database onboard slot path
```
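One way around that, without editing the packaged file, should be to override it from /etc (a sketch; systemd reads /etc/systemd/network in preference to /lib):
```
# override the shipped default policy from /etc (takes precedence over /lib),
# dropping "keep" and "database" so the predictable names win
cp /lib/systemd/network/99-default.link /etc/systemd/network/99-default.link
sed -i 's/^NamePolicy=.*/NamePolicy=onboard slot path/' /etc/systemd/network/99-default.link
update-initramfs -u -k all   # the .link files are also copied into the initramfs
```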
Finally in the home stretch: the automatic naming still isn't working, but I was able to force it to act the right way by creating .link files for each interface and letting systemd handle it.
```
# more /etc/systemd/network/10-enp7s0f0-mb0.link
[Match]
OriginalName=*
Path=pci-0000:07:00.0

[Link]
Description=MB.LEFT...
```
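After dropping the .link files in, this is roughly how I'd rebuild and sanity-check before rebooting (eth2 is just whichever interface you're testing):
```
# rebuild the initramfs so the new .link files are included, then ask udev
# what it would do for one of the interfaces (eth2 is just an example)
update-initramfs -u -k all
udevadm test-builtin net_setup_link /sys/class/net/eth2 2>&1 | tail -n 5
```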
On a good path, getting closer to where I want to be.
Problem 1 (not pmx's fault)
* had a boot drive (zfs mirror) fail - missed a step in the cloning, so while pmx was BOOTING off one drive, it was only writing grub changes to the OTHER drive. This made me chase my tail for hours on why changes...
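For anyone hitting the same thing, what I'd check is that the bootloader actually lands on both mirror members (device names below are examples):
```
# confirm both mirror members are bootable; device names are examples
proxmox-boot-tool status                          # if the install uses proxmox-boot-tool / ESPs
grub-install /dev/sda && grub-install /dev/sdb    # legacy-BIOS installs: grub on both disks
```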
No love so far.
```
[ 13.758120] ixgbe 0000:09:00.1 eth12: renamed from eth2
[ 13.780906] ixgbe 0000:09:00.0 eth11: renamed from eth0
[ 13.812475] igb 0000:07:00.1 eth2: renamed from eth3
```
Found this too - removing and rebooting:
```
/lib/systemd/network# more 99-default.link
# SPDX-License-Identifier: LGPL-2.1-or-later
#
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public...
```
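Rather than deleting it outright (a systemd package update will put it back), masking it from /etc should stick; this is the approach I'd try:
```
# mask the shipped default instead of deleting it, so package updates
# don't restore it; then rebuild the initramfs and reboot
ln -s /dev/null /etc/systemd/network/99-default.link
update-initramfs -u -k all
```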
Used to hate ifnames, trying to get over it.
Migrating 100% functional hosts which had "net.ifnames=0" in grub and 70-persistent-net.rules set up. Removed both, updated the initramfs, and they stubbornly refuse to rename to enp* style names after multiple reboots.
Set up explicit .link files...
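For completeness, these are the cleanup steps I'd expect that migration to need (paths are the Debian/PVE defaults, adjust to taste):
```
# remove the old naming overrides, then regenerate grub config and initramfs
sed -i 's/net.ifnames=0 *//' /etc/default/grub       # drop it from GRUB_CMDLINE_LINUX*
rm -f /etc/udev/rules.d/70-persistent-net.rules
update-grub
update-initramfs -u -k all
```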