Proxmox 6 upgrade issues

Irek Zayniev

Hello!
Please help me get Ceph back after the upgrade.
I did the upgrade following the manual without any issues, except that the ceph-volume utility is missing.
Now I have:

Code:
pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
ceph: 14.2.1-pve2
ceph-fuse: 14.2.1-pve2
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-4
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
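
The package list above already shows ceph 14.2.1 installed, and as far as I know ceph-volume is shipped by the ceph-osd package on PVE 6 / Nautilus, so a quick check (just a sketch, adjust to your setup) would be:

Code:
# check whether ceph-volume is on the PATH and that the ceph-osd package ships it
command -v ceph-volume
dpkg -L ceph-osd | grep ceph-volume
# if it is genuinely missing, reinstalling the package should bring it back
apt install --reinstall ceph-osd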

The Ceph config is:
Code:
[global]
      auth client required = cephx
      auth cluster required = cephx
      auth service required = cephx
      cluster network = 10.200.201.0/22
      fsid = ede0d6ae-81ec-4137-a918-5daf79ae0ff2
      mon allow pool delete = true
      osd journal size = 5120
      osd pool default min size = 2
      osd pool default size = 3
      public network = 10.200.201.0/22
      mon_host = 10.200.201.73 10.200.201.74 10.200.201.76

[mon.galaxy06-rubby121]
      host = galaxy06-rubby121
      mon addr = 10.200.201.76:6789

[mon.galaxy03-rubby702]
      host = galaxy03-rubby702
      mon addr = 10.200.201.73:6789

[mon.galaxy04-rubby802]
      host = galaxy04-rubby802
      mon addr = 10.200.201.74:6789

[client]
      keyring = /etc/pve/priv/$cluster.$name.keyring
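
With cephx enabled, the [client] keyring path has to resolve to an existing file; on PVE that normally lives under /etc/pve/priv, and /etc/ceph/ceph.conf is usually a symlink into /etc/pve. A quick sanity check (the client.admin name is just an example of what $cluster.$name expands to) might be:

Code:
# /etc/ceph/ceph.conf normally points at the clustered config in /etc/pve
ls -l /etc/ceph/ceph.conf
# the $cluster.$name keyring expands to e.g. ceph.client.admin.keyring
ls -l /etc/pve/priv/ceph.client.admin.keyring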

ceph -s
Code:
ceph -s
  cluster:
    id:     ede0d6ae-81ec-4137-a918-5daf79ae0ff2
    health: HEALTH_WARN
            noout flag(s) set
            6 osds down
            6 hosts (6 osds) down
            Reduced data availability: 256 pgs inactive

  services:
    mon: 3 daemons, quorum galaxy03-rubby702,galaxy04-rubby802,galaxy06-rubby121 (age 55m)
    mgr: galaxy03-rubby702(active, since 54m), standbys: galaxy04-rubby802, galaxy06-rubby121
    osd: 9 osds: 2 up, 8 in
        flags noout
  data:
    pools:   1 pools, 256 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
            256 unknown
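
To see exactly which OSDs and hosts are down and why the PGs are inactive, something like this helps (standard Ceph commands):

Code:
# which OSDs are down, and on which hosts
ceph osd tree
# detail on the down OSDs, inactive PGs and the noout flag
ceph health detail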

journalctl --unit=ceph-osd@0.service -n 1000 --no-pager
Code:
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: Starting Ceph object storage daemon osd.0...
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: Started Ceph object storage daemon osd.0.
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212181]: 2019-07-20 21:32:54.453 7f38678a9f80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212181]: 2019-07-20 21:32:54.453 7f38678a9f80 -1 AuthRegistry(0x563650c08140) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212181]: 2019-07-20 21:32:54.453 7f38678a9f80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212181]: 2019-07-20 21:32:54.453 7f38678a9f80 -1 AuthRegistry(0x7ffd9f663788) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212181]: failed to fetch mon config (--no-mon-config to skip)
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Main process exited, code=exited, status=1/FAILURE
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Failed with result 'exit-code'.
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Service RestartSec=100ms expired, scheduling restart.
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Scheduled restart job, restart counter is at 1.
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: Stopped Ceph object storage daemon osd.0.
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: Starting Ceph object storage daemon osd.0...
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: Started Ceph object storage daemon osd.0.
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212197]: 2019-07-20 21:32:54.897 7fe7bfec9f80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212197]: 2019-07-20 21:32:54.897 7fe7bfec9f80 -1 AuthRegistry(0x55fbe5f44140) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212197]: 2019-07-20 21:32:54.897 7fe7bfec9f80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212197]: 2019-07-20 21:32:54.897 7fe7bfec9f80 -1 AuthRegistry(0x7ffd314af658) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
Jul 20 21:32:54 galaxy01-rubby102 ceph-osd[212197]: failed to fetch mon config (--no-mon-config to skip)
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Main process exited, code=exited, status=1/FAILURE
Jul 20 21:32:54 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Failed with result 'exit-code'.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Service RestartSec=100ms expired, scheduling restart.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Scheduled restart job, restart counter is at 2.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: Stopped Ceph object storage daemon osd.0.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: Starting Ceph object storage daemon osd.0...
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: Started Ceph object storage daemon osd.0.
Jul 20 21:32:55 galaxy01-rubby102 ceph-osd[212212]: 2019-07-20 21:32:55.145 7fd022bcff80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
Jul 20 21:32:55 galaxy01-rubby102 ceph-osd[212212]: 2019-07-20 21:32:55.145 7fd022bcff80 -1 AuthRegistry(0x555f1d4b2140) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
Jul 20 21:32:55 galaxy01-rubby102 ceph-osd[212212]: 2019-07-20 21:32:55.145 7fd022bcff80 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
Jul 20 21:32:55 galaxy01-rubby102 ceph-osd[212212]: 2019-07-20 21:32:55.145 7fd022bcff80 -1 AuthRegistry(0x7ffea5985f78) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
Jul 20 21:32:55 galaxy01-rubby102 ceph-osd[212212]: failed to fetch mon config (--no-mon-config to skip)
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Main process exited, code=exited, status=1/FAILURE
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Failed with result 'exit-code'.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Service RestartSec=100ms expired, scheduling restart.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Scheduled restart job, restart counter is at 3.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: Stopped Ceph object storage daemon osd.0.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Start request repeated too quickly.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Failed with result 'exit-code'.
Jul 20 21:32:55 galaxy01-rubby102 systemd[1]: Failed to start Ceph object storage daemon osd.0.
Jul 20 21:35:49 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Start request repeated too quickly.
Jul 20 21:35:49 galaxy01-rubby102 systemd[1]: ceph-osd@0.service: Failed with result 'exit-code'.
Jul 20 21:35:49 galaxy01-rubby102 systemd[1]: Failed to start Ceph object storage daemon osd.0.
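
As far as I understand, that keyring error usually just means /var/lib/ceph/osd/ceph-0 is empty because the data partition of the OSD (created with ceph-disk) never got mounted after the upgrade; ceph-disk activation is gone in Nautilus. A quick look (device names are only examples) would be:

Code:
# an empty OSD dir explains the missing keyring
ls -l /var/lib/ceph/osd/ceph-0/
mount | grep /var/lib/ceph/osd/ceph-0
# list the data partitions present on this host
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT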
 
Fixed by repairing the ceph-volume utility path and manually scanning and activating the OSD:
Code:
ceph-volume simple scan /dev/sdb1
ceph-volume simple activate 0 e29c3972-58a5-4934-940f-5419b95ec36e
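
The same approach should work for the remaining OSDs; a rough sketch (device name and OSD id below are examples, adjust per host):

Code:
# scan the data partition of each remaining ceph-disk OSD
ceph-volume simple scan /dev/sdc1
# activate everything recorded by the scans in /etc/ceph/osd/*.json
ceph-volume simple activate --all
# clear the failed state from the earlier restart loop and start the unit
systemctl reset-failed ceph-osd@1.service
systemctl start ceph-osd@1.service
# once all OSDs are back up and in, drop the upgrade flag
ceph osd unset noout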
 
