"Seems that some error happened, but not reported, and the ceph-volume package was missing."

You could check the history.log and term.log files inside /var/log/apt for what happened during the installation; there really should be an error. Also make sure you have the right repositories configured (e.g., all bullseye, no buster for the PVE 7.x release).
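For example (just a sketch, assuming the default Debian/PVE log and repository locations), something along these lines would show the relevant history entry and any leftover buster sources:

Code:
# show the apt history entries that touched ceph-volume
grep -B2 -A5 ceph-volume /var/log/apt/history.log
# look for errors during the actual package runs
grep -i error /var/log/apt/term.log
# make sure no buster repository is still configured on a PVE 7.x (bullseye) host
grep -ri buster /etc/apt/sources.list /etc/apt/sources.list.d/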
Code:
Log started: 2023-01-30 17:33:46
Selecting previously unselected package ceph-volume.
(Reading database ... 70802 files and directories currently installed.)
Preparing to unpack .../ceph-volume_17.2.5-pve1_all.deb ...
Unpacking ceph-volume (17.2.5-pve1) ...
Setting up ceph-volume (17.2.5-pve1) ...
"Can I upgrade from Octopus directly, or do I have to upgrade to Pacific first?"

It is supported by Ceph itself, but we tested almost exclusively the Pacific to Quincy upgrade, as per our guides:
In any case, if you decide to still upgrade directly, then:
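If you do attempt the direct jump, it may be worth first confirming that every daemon in the cluster actually reports the same release; a quick check (just a sketch) could look like:

Code:
# list the versions that all running mons/mgrs/osds/mds report
ceph versions
# show which release the OSDs are currently required to run
ceph osd dump | grep require_osd_release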
"I have a fourth node that is just an old desktop, but I was wondering: can I install Ceph on it and have it as just a mon with no OSDs on it? I could possibly add a 5th, and would they only need access to the ceph-public subnet?"

Yes you could, and having monitors separately is actually quite common for bigger setups. But it won't really help you in this case: for quorum you need an actual majority, so with four voting monitors you need 3 of 4 online to provide majority, as 2/2 = 50% is a tie; the same goes for the pve-cluster, which also needs quorum to run VMs.

"would they only need access to the ceph-public subnet?"

Yes, normally - but you can always check with something like ceph mon stat (or by reading /etc/ceph/ceph.conf) for what addresses the monitors actually use.

"Any TBA for 18.x in testing-repo?"

Rather the wrong thread to ask, but historically we only packaged Ceph publicly once there was an actual stable release, i.e., from the 18.2.x branch.
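To make the monitor-address check mentioned above concrete, a minimal example (assuming the default /etc/ceph/ceph.conf path) would be:

Code:
# quorum status and the addresses of all monitors
ceph mon stat
ceph mon dump
# the monitor addresses and networks the clients are configured to use
grep -E 'mon_host|public_network|cluster_network' /etc/ceph/ceph.conf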
Code:
Start-Date: 2023-07-04 13:07:51
Commandline: apt -y full-upgrade
Upgrade: ceph-mgr-modules-core:amd64 (16.2.13-pve1, 17.2.6-pve1+3)
Remove: ceph-mgr:amd64 (16.2.13-pve1), ceph:amd64 (16.2.13-pve1)
End-Date: 2023-07-04 13:07:54

Start-Date: 2023-07-04 13:24:33
Commandline: apt -y full-upgrade
Install: qttranslations5-l10n:amd64 (5.15.2-2, automatic), ceph-volume:amd64 (17.2.6-pve1, automatic), libfmt7:amd64 (7.1.3+ds1-5, automatic), libthrift-0.13.0:amd64 (0.13.0-6, automatic), libqt5core5a:amd64 (5.15.2+dfsg-9, automatic), libqt5network5:amd64 (5.15.2+dfsg-9, automatic), libqt5dbus5:amd64 (5.15.2+dfsg-9, automatic), libdouble-conversion3:amd64 (3.1.5-6.1, automatic), libpcre2-16-0:amd64 (10.36-2+deb11u1, automatic)
Upgrade: librados2:amd64 (16.2.13-pve1, 17.2.6-pve1), ceph-fuse:amd64 (16.2.13-pve1, 17.2.6-pve1), ceph-base:amd64 (16.2.13-pve1, 17.2.6-pve1), python3-ceph-common:amd64 (16.2.13-pve1, 17.2.6-pve1), librbd1:amd64 (16.2.13-pve1, 17.2.6-pve1), librgw2:amd64 (16.2.13-pve1, 17.2.6-pve1), ceph-common:amd64 (16.2.13-pve1, 17.2.6-pve1), ceph-mds:amd64 (16.2.13-pve1, 17.2.6-pve1), ceph-mon:amd64 (16.2.13-pve1, 17.2.6-pve1), ceph-osd:amd64 (16.2.13-pve1, 17.2.6-pve1), python3-cephfs:amd64 (16.2.13-pve1, 17.2.6-pve1), libcephfs2:amd64 (16.2.13-pve1, 17.2.6-pve1), libradosstriper1:amd64 (16.2.13-pve1, 17.2.6-pve1), python3-rbd:amd64 (16.2.13-pve1, 17.2.6-pve1), python3-rgw:amd64 (16.2.13-pve1, 17.2.6-pve1), libsqlite3-mod-ceph:amd64 (16.2.13-pve1, 17.2.6-pve1), python3-ceph-argparse:amd64 (16.2.13-pve1, 17.2.6-pve1), python3-rados:amd64 (16.2.13-pve1, 17.2.6-pve1)
End-Date: 2023-07-04 13:24:56

Start-Date: 2023-07-04 13:46:15
Commandline: apt-get -y autoremove
Remove: python3-paste:amd64 (3.5.0+dfsg1-1), python3-webtest:amd64 (2.0.35-1), ceph-volume:amd64 (17.2.6-pve1), python3-dateutil:amd64 (2.8.1-6), ceph-mgr-modules-core:amd64 (17.2.6-pve1+3), python3-cherrypy3:amd64 (8.9.1-8), cryptsetup-bin:amd64 (2:2.3.7-1+deb11u1), python3-repoze.lru:amd64 (0.7-2), python3-waitress:amd64 (1.4.4-1.1+deb11u1), python3-logutils:amd64 (0.3.3-7), python3-werkzeug:amd64 (1.0.1+dfsg1-2), python-pastedeploy-tpl:amd64 (2.1.1-1), ceph-mon:amd64 (17.2.6-pve1), ceph-osd:amd64 (17.2.6-pve1), python3-lxml:amd64 (4.6.3+dfsg-0.1+deb11u1), python3-routes:amd64 (2.5.1-1), python3-soupsieve:amd64 (2.2.1-1), python3-pyinotify:amd64 (0.9.6-1.3), python3-bs4:amd64 (4.9.3-1), python3-simplegeneric:amd64 (0.8.1-3), python3-webencodings:amd64 (0.5.1-2), python3-pecan:amd64 (1.3.3-3), python3-singledispatch:amd64 (3.4.0.3-3), python3-pastedeploy:amd64 (2.1.1-1), python3-pastescript:amd64 (2.0.2-4), libjaeger:amd64 (16.2.13-pve1), sudo:amd64 (1.9.5p2-3+deb11u1), python3-bcrypt:amd64 (3.1.7-4), python3-html5lib:amd64 (1.1-3), python3-webob:amd64 (1:1.8.6-1.1), libsqlite3-mod-ceph:amd64 (17.2.6-pve1), python3-tempita:amd64 (0.5.2-6), python3-simplejson:amd64 (3.17.2-1)
End-Date: 2023-07-04 13:46:24

Start-Date: 2023-07-04 13:46:48
Commandline: apt --reinstall install ceph-mgr ceph-mon ceph-osd cryptsetup-bin sudo ceph-mgr-modules-core
Install: python3-paste:amd64 (3.5.0+dfsg1-1, automatic), python3-webtest:amd64 (2.0.35-1, automatic), ceph-volume:amd64 (17.2.6-pve1, automatic), python3-dateutil:amd64 (2.8.1-6, automatic), ceph-mgr-modules-core:amd64 (17.2.6-pve1), python3-cherrypy3:amd64 (8.9.1-8, automatic), cryptsetup-bin:amd64 (2:2.3.7-1+deb11u1), python3-repoze.lru:amd64 (0.7-2, automatic), python3-waitress:amd64 (1.4.4-1.1+deb11u1, automatic), python3-logutils:amd64 (0.3.3-7, automatic), python3-werkzeug:amd64 (1.0.1+dfsg1-2, automatic), python-pastedeploy-tpl:amd64 (2.1.1-1, automatic), ceph-mgr:amd64 (17.2.6-pve1), ceph-mon:amd64 (17.2.6-pve1), ceph-osd:amd64 (17.2.6-pve1), python3-lxml:amd64 (4.6.3+dfsg-0.1+deb11u1, automatic), python3-routes:amd64 (2.5.1-1, automatic), python3-soupsieve:amd64 (2.2.1-1, automatic), python3-pyinotify:amd64 (0.9.6-1.3, automatic), python3-natsort:amd64 (7.1.0-1, automatic), python3-bs4:amd64 (4.9.3-1, automatic), python3-simplegeneric:amd64 (0.8.1-3, automatic), python3-webencodings:amd64 (0.5.1-2, automatic), python3-pecan:amd64 (1.3.3-3, automatic), python3-singledispatch:amd64 (3.4.0.3-3, automatic), python3-pastedeploy:amd64 (2.1.1-1, automatic), python3-pastescript:amd64 (2.0.2-4, automatic), sudo:amd64 (1.9.5p2-3+deb11u1), python3-bcrypt:amd64 (3.1.7-4, automatic), python3-html5lib:amd64 (1.1-3, automatic), python3-webob:amd64 (1:1.8.6-1.1, automatic), libsqlite3-mod-ceph:amd64 (17.2.6-pve1, automatic), python3-tempita:amd64 (0.5.2-6, automatic), python3-simplejson:amd64 (3.17.2-1, automatic)
End-Date: 2023-07-04 13:47:1
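For reference, the autoremove run above could only take ceph-mon, ceph-osd and ceph-volume with it because they were flagged as automatically installed. One way to inspect and, if desired, clear that flag (a sketch, the package list is just an example):

Code:
# list ceph packages currently marked as automatically installed
apt-mark showauto | grep '^ceph'
# mark the core packages as manually installed so autoremove leaves them alone
apt-mark manual ceph-mon ceph-osd ceph-mgr ceph-volume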
Upgrades from Pacific to Quincy:
You can find the upgrade how to here: https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
I have a couple of questions regarding this guide.
It is not very clear if this upgrade can be conducted:
- with the VM on the node turned "on"
- with the VMs on the node migrated to another node (= no VM on the host being upgraded)
Also regarding parallel upgrades, can we:
- upgrade the nodes one after the other (= finish all steps described in your guide, then move to another node)
- do we have to upgrade the nodes in parallel (= do each step on all nodes in parallel)
- can this be done with the VM turned "on" while doing the upgrade
Thanks for your reply.
All looked good after the upgrade, but on the next reboot (which I did right after the upgrade) the OSDs had not started. I saw this post and found that the ceph-volume package was not installed. I installed it and rebooted, and everything was good again. Seems that some error happened, but not reported, and the ceph-volume package was missing.
Code:
apt install ceph-volume
ceph-volume lvm activate --all
fixed the problem.
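To verify the result after such a fix, something like the following (a sketch, not an official procedure) shows whether ceph-volume sees the OSD volumes and whether the OSD services came back:

Code:
# list the OSDs that ceph-volume can find on this node
ceph-volume lvm list
# check the local OSD services and the cluster-wide view of the OSDs
systemctl list-units 'ceph-osd@*'
ceph osd tree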
As you have the possibility to live-migrate VMs to a different node, I would consider doing that while upgrading. I would always empty the node, as you usually need to reboot it after upgrades.
You should also upgrade the nodes one after another; no steps are done in parallel. VMs can be online, but not on a node that is currently being upgraded. Move the VMs to one host, upgrade that host and reboot it; move the VMs to the next host, upgrade it and reboot; then move the VMs to the first host (which is already upgraded), upgrade the third host, and reboot it.
After each reboot, wait for a full Ceph health recovery. Parallel reboots (if you have not waited long enough) could lead to quorum loss in the storage -> storage downtime.
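One simple way to keep an eye on the recovery between node reboots (just a sketch) is to watch the cluster status until all OSDs are up/in and all PGs are active+clean again:

Code:
# one-shot overview
ceph -s
# refresh the health/osd/pg summary every few seconds
watch -n 5 'ceph -s | grep -E "health:|osd:|pgs:"'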
Hi Jsterr,
I've never done it before, but clearly you have. Updating Proxmox to version 7 went without problems; now I need to update Ceph to Pacific. The wiki article Ceph Octopus to Pacific is pretty clear, but I am still wondering what the proper steps are:
I have a 3 node cluster.
1. Move all running VMs to node 1.
2. Set the noout flag.
3. Upgrade node 3 with the commands apt update and then apt full-upgrade
4. Reboot node 3, wait for full Ceph recovery
5. Upgrade node 2 with the commands apt update and then apt full-upgrade
6. Reboot node 2, wait for full Ceph recovery
7. Move all VMs to node 2.
8. Upgrade node 1 with the commands apt update and then apt full-upgrade
9. Reboot node 1, wait for full Ceph recovery
10. The monitor and manager are upgraded after the reboot of each node, I assume?
11. The format conversion per OSD is also done when you reboot the node? Of course, one node at a time.
12. After rebooting a node, check ceph status.
13. ceph osd require-osd-release pacific
14. Unset the noout flag.
What about virtual machines that are turned off, do I need to move them as well?
Thanks in advance for your time and advice.
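Not an authoritative checklist, but your steps above condensed into commands, assuming each node is emptied of running VMs before it is touched, could look roughly like this:

Code:
# once, on any node, before starting:
ceph osd set noout

# then, on one node at a time (VMs migrated away):
apt update
apt full-upgrade
systemctl reboot
# after the node is back: wait until 'ceph -s' shows all OSDs up/in
# and all PGs active+clean before moving on to the next node
ceph -s

# once all nodes run Pacific:
ceph osd require-osd-release pacific
ceph osd unset noout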