I'm trying to mount a CephFS of a Mimic cluster with a Luminous client on a PVE 5.2 node, but I'm seeing this:
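For reference, the failing mount looks roughly like this — the monitor IP, client name, secretfile path and mountpoint below are placeholders, not my actual values:

```shell
# Kernel CephFS mount as attempted from the PVE 5.2 (Luminous client) node;
# all addresses and paths here are example placeholders:
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```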
Same mount works just fine on the Mimic Cluster CentOS7.5 nodes:
The in-place upgrade from 4.4 to 5.2 worked fine, except that, probably due to the SW watchdog, the server suddenly rebooted during the pve-cluster upgrade; dpkg --configure -a then continued the upgrade successfully.
After reboot one service fails on first boot (maybe due to networking.service failing...
Attempted to compile Ceph Jewel from source, but it is hard to near impossible to compile under Wheezy, as its gcc < 4.8 is missing 'emplace', a C++11 feature, among other things... Will probably attempt it with a PVE 5.x node as the Jewel client
Right, that might be another path: remove a node from the current old 3.4 cluster, install it as a first 5.x node with Ceph Jewel as client, connect to both Ceph clusters and manually copy vmid.conf files between PVE nodes.
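A rough sketch of what that manual copy could look like — the cluster conf paths, pool, image name and VM id below are all illustrative placeholders, not names from my setup:

```shell
# Stream an RBD image from the old Hammer cluster into the new Mimic
# cluster (one ceph.conf/keyring per cluster), then copy the matching VM
# config file to the new PVE node. All names here are placeholders.
rbd -c /etc/ceph/hammer.conf export rbd/vm-100-disk-1 - \
    | rbd -c /etc/ceph/mimic.conf import - rbd/vm-100-disk-1
scp /etc/pve/qemu-server/100.conf new-node:/etc/pve/qemu-server/
```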
Gregory Farnum@Redhat seems to think so:
>Got an old Hammer Cluster where I would like to migrate it’s data (rbd images) to a newly installed Mimic Cluster.
>Would this be possible if I could upgrade the clients from Hammer to Jewel(ie. would Jewel be able to connect to both clusters)?
Yes...
I don't think so; possibly only ceph-deploy, not all the other needed packages, IMHO
I also tried EU mirror with:
deb http://eu.ceph.com/debian-jewel wheezy main
but this doesn't update my Ceph packages from Hammer on apt-get update + upgrade :/
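What I did, roughly (repo line as above; whether Jewel builds for Wheezy are actually published there is exactly the question):

```shell
# Add the Jewel repo and check what apt actually offers for Wheezy:
echo "deb http://eu.ceph.com/debian-jewel wheezy main" \
    > /etc/apt/sources.list.d/ceph.list
apt-get update
# The candidate version stays at the Hammer one here, i.e. no Jewel
# packages appear to be available for wheezy:
apt-cache policy ceph-common
```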
Got VM backups dumped on two CephFS; I just want to release the old Hammer HW to build a new PVE 5.x cluster and then connect that to the new Mimic cluster :)
3.4 got GUI support for Ceph, but it also works fine to add it manually in storage.cfg. Thanks, I forgot about the naming scheme for the keyring(s)...
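For anyone else going the manual route, something along these lines in /etc/pve/storage.cfg — monitor IPs, pool and storage id are example placeholders; the keyring file then has to be named after the storage id:

```
# Hypothetical RBD storage entry; monhost IPs, pool and id are examples.
rbd: ceph-hammer
        monhost 192.168.1.10;192.168.1.11;192.168.1.12
        pool rbd
        username admin
        content images

# The matching keyring is expected at:
#   /etc/pve/priv/ceph/ceph-hammer.keyring
```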
Anyone know where I might find Jewel packages for Debian Wheezy, as it seems to be EoL?
apt-get update against:
deb http://ceph.com/debian-jewel wheezy main
=>...
Want to upgrade an old 3.4 testlab connected to a Hammer Ceph cluster (I know :)
The plan is first to migrate VM images to a newly installed Ceph Mimic cluster; would it be possible to connect to both Ceph clusters (e.g. by upgrading the Ceph client to Jewel or later)?
Also, what about openvswitch usage? All our single-tenant VMs are connected to one OVS switch and VLAN-tagged differently. Would the OVS version from Stretch connect fine to the version in 4.4?
Of course
Thanks good to know!
Good idea, only we haven't got another host to connect to the physical network; worst case we would just have to fall back to recovering it as a reinstalled 4.4 node, I guess.
What about our present paid support/licenses, will they work on 5.x as well, i.e. no need to...
Thinking it's time to consider upgrading from jessie 4.4 to the latest 5.1 by following this 'in place upgrade' procedure, and wondering if it could be an issue that we're using two corosync rings, HA clustering and shared storage from an iSCSI array only?
If anything were to go wrong, could we...
Just attempted to patch an older testlab PVE 3.4 to latest patch levels.
Found a newer kernel, pve-kernel-2.6.32-48-pve; only when booting on this, our openvswitch looked fine but we couldn't get traffic in/out through a bonded NIC plugged into the single vmbr1 OVS, and thus had no access to the Ceph cluster...
Also wondering why the sysctl settings requested in pve.conf below are different; might it be because we use pve-firewall and thus need the bridge to call out to the host iptables before passing packets on to the VM guests?
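For context, these are the bridge netfilter sysctls in question — as I understand it, pve-firewall wants them enabled so bridged frames are run through the host iptables chains first (a sketch based on my reading; verify against your own pve.conf):

```shell
# Enable iptables processing for bridged traffic (needed by pve-firewall
# as far as I can tell; nf_conntrack must be loaded for state tracking):
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
```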
Hmm, I don't seem to be able to find what goes into /etc/modprobe.d/what-ever-name-choosen.conf to make nf_conntrack load early at boot...
The manpage seems to say: <command> <module_name> [options], only not whether command=install would force a load or just specify what command to use when loading...
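As far as I can tell, modprobe.d alone never loads anything at boot; an `install` line only replaces the command modprobe would run when the module is requested. On Debian the boot-time load list lives elsewhere — a sketch, assuming the /etc/modules mechanism:

```shell
# Ask Debian to load nf_conntrack at boot; modprobe.d is then only used
# for module options. The hashsize value below is just an example.
echo "nf_conntrack" >> /etc/modules
echo "options nf_conntrack hashsize=65536" \
    > /etc/modprobe.d/nf_conntrack.conf
```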