Adding new PVE 5.1 Ceph Luminous node - different packages

We have five existing PVE 4.4 nodes which were upgraded to PVE 5.1 with Ceph Luminous. We then deployed a new PVE 5.1 node from scratch and successfully joined it to the existing cluster.

We subsequently ran 'pveceph install -version luminous', which ended up installing different packages from those on the existing PVE 5.1 Ceph Luminous nodes. How do I correct this?

This is probably related to the upgrade instructions (https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0) not updating /etc/apt/sources.list.d/ceph.list; see the sketch after the two repository listings below.

Existing nodes:
/etc/apt/sources.list.d/ceph.list
Code:
deb http://download.ceph.com/debian-luminous jessie main

New node:
/etc/apt/sources.list.d/ceph.list
Code:
deb http://download.proxmox.com/debian/ceph-luminous stretch main
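
If the goal is to standardise all nodes on the Proxmox-provided ceph-luminous repository (the 'stretch' line shown for the new node above), one possible approach is to replace ceph.list on the upgraded nodes and let apt pull in the -pve builds. This is only a sketch under that assumption, not an officially documented procedure:
Code:
# On each upgraded node: point ceph.list at the Proxmox Ceph Luminous repository
# (assumption: the repo used by the freshly installed node is the one to keep)
echo "deb http://download.proxmox.com/debian/ceph-luminous stretch main" \
    > /etc/apt/sources.list.d/ceph.list

# Refresh the package index; the 12.2.1-pve3 builds compare newer than
# 12.2.1-1~bpo80+1, so a dist-upgrade should replace the ceph.com packages
apt-get update
apt-get dist-upgrade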


Herewith the diff between 'dpkg-query -l' on an upgraded and freshly installed node:
Code:
[root@kvm5f ~]# diff -uNr kvm5e kvm5f | grep ceph
-ii  ceph                                 12.2.1-1~bpo80+1               amd64        distributed storage and file system
-ii  ceph-base                            12.2.1-1~bpo80+1               amd64        common ceph daemon libraries and management tools
-ii  ceph-common                          12.2.1-1~bpo80+1               amd64        common utilities to mount and interact with a ceph storage cluster
-ii  ceph-fuse                            12.2.1-1~bpo80+1               amd64        FUSE-based client for the Ceph distributed file system
-ii  ceph-mds                             12.2.1-1~bpo80+1               amd64        metadata server for the ceph distributed file system
-ii  ceph-mgr                             12.2.1-1~bpo80+1               amd64        manager for the ceph distributed storage system
-ii  ceph-mon                             12.2.1-1~bpo80+1               amd64        monitor server for the ceph storage system
-ii  ceph-osd                             12.2.1-1~bpo80+1               amd64        OSD server for the ceph storage system
+ii  ceph                                 12.2.1-pve3                    amd64        distributed storage and file system
+ii  ceph-base                            12.2.1-pve3                    amd64        common ceph daemon libraries and management tools
+ii  ceph-common                          12.2.1-pve3                    amd64        common utilities to mount and interact with a ceph storage cluster
+ii  ceph-mgr                             12.2.1-pve3                    amd64        manager for the ceph distributed storage system
+ii  ceph-mon                             12.2.1-pve3                    amd64        monitor server for the ceph storage system
+ii  ceph-osd                             12.2.1-pve3                    amd64        OSD server for the ceph storage system
-ii  libcephfs1                           10.2.10-1~bpo80+1              amd64        Ceph distributed file system client library
-ii  libcephfs2                           12.2.1-1~bpo80+1               amd64        Ceph distributed file system client library
+ii  libcephfs1                           10.2.5-7.2                     amd64        Ceph distributed file system client library
+ii  libcephfs2                           12.2.1-pve3                    amd64        Ceph distributed file system client library
-ii  python-ceph                          12.2.1-1~bpo80+1               amd64        Meta-package for python libraries for the Ceph libraries
-ii  python-cephfs                        12.2.1-1~bpo80+1               amd64        Python 2 libraries for the Ceph libcephfs library
+ii  python-cephfs                        12.2.1-pve3                    amd64        Python 2 libraries for the Ceph libcephfs library
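
To confirm which repository each node actually resolves the Ceph packages from (and what the candidate version would be after changing ceph.list), the standard APT tools can be run on both nodes; the package names below are just examples:
Code:
# Show installed version, candidate version and the repositories offering them
apt-cache policy ceph ceph-common

# List every installed ceph-related package with its version
dpkg-query -W -f='${Package} ${Version}\n' | grep -i ceph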



Upgraded node:
Code:
[root@kvm5e sources.list.d]# pveversion -v
proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-26
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.0-5~pve4
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
ceph: 12.2.1-1~bpo80+1

Newly installed node:
Code:
[root@kvm5f ~]# pveversion -v
proxmox-ve: 5.1-26 (running kernel: 4.13.4-1-pve)
pve-manager: 5.1-36 (running version: 5.1-36/131401db)
pve-kernel-4.13.4-1-pve: 4.13.4-26
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-15
qemu-server: 5.0-17
pve-firmware: 2.0-3
libpve-common-perl: 5.0-20
libpve-guest-common-perl: 2.0-13
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-16
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-2
pve-container: 2.0-17
pve-firewall: 3.0-3
pve-ha-manager: 2.0-3
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
ceph: 12.2.1-pve3
 
Herewith the diff between 'pveversion -v' on old and new nodes:
Code:
-pve-qemu-kvm: 2.9.1-2
+pve-qemu-kvm: 2.9.0-5~pve4

-ceph: 12.2.1-pve3
+ceph: 12.2.1-1~bpo80+1

The 'pve-qemu-kvm' difference appears to relate to the following output when running 'apt-get update; apt-get -y dist-upgrade':
Code:
The following packages have been kept back:
  pve-qemu-kvm
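
A package is usually kept back when upgrading it would require dependency changes apt did not resolve on its own. One common way to see why, and to force the resolution, is to ask apt to install the kept-back package explicitly; these are plain apt commands, nothing Proxmox-specific is assumed:
Code:
# Ask apt to install/upgrade the kept-back package explicitly; apt will either
# upgrade it or print the dependency problem that is holding it back
apt-get update
apt-get install pve-qemu-kvm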
 
there are worse things to miss during upgrading and this one's easy enough to fix after the fact ;)
 
pve-qemu-kvm should definitely be listed; ceph only if the "ceph" package is installed (which it should be on a Ceph cluster). Please post the whole output.