Recent content by Xislmo

  1. Upgraded to VE 6.3 ceph manager not starting

    Thanks. Upgraded to Octopus and had to turn "pg_autoscale_mode" on. All three Ceph cluster manager daemons crashed after doing this, but a simple "sudo systemctl stop ceph-mgr*; sudo systemctl start ceph-mgr*" fixed that, and now the "1 ceph pools have too many placement groups" health error...
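    The sequence described above can be sketched roughly as follows; the pool name "mypool" is a placeholder, and restarting the mgr units is the workaround mentioned in the post, not an official fix:

    ```shell
    # Enable the PG autoscaler on a pool ("mypool" is a placeholder name)
    ceph osd pool set mypool pg_autoscale_mode on

    # If the mgr daemons crash afterwards, restarting them may be enough:
    sudo systemctl stop 'ceph-mgr*'
    sudo systemctl start 'ceph-mgr*'

    # Watch the "too many placement groups" warning clear as PGs are merged
    ceph health detail
    ceph osd pool autoscale-status
    ```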
  2. Upgraded to VE 6.3 ceph manager not starting

    Same problem here, upgraded to 6.3 on Friday. I only have a production cluster at hand, so I'd be really thankful if someone from Proxmox could give me some advice on how to proceed. Is it considered "safe" to upgrade to Octopus? I disabled the dashboard for now to get rid of the monitoring warnings.
  3. kernel: libceph: osd9 10.10.23.12:6824 socket error on write

    Hello Thomas, thanks for the quick reply! No negative impact seen so far. I'll have a look at whether I can correlate it with high traffic or the like.
  4. kernel: libceph: osd9 10.10.23.12:6824 socket error on write

    Hello, for a few weeks now I've been getting the following log entry once or twice a week: kernel: libceph: osd9 10.10.23.12:6824 socket error on write. Of the 40 OSDs, only osd9 is having problems. I cannot find any other relevant Ceph logs. The Ceph dashboard says everything is OK. I'm running...
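    A few commands that could help narrow this down; this is a sketch assuming systemd-managed Ceph daemons, and "9" refers to the affected OSD:

    ```shell
    # Grep the kernel log for the libceph socket errors
    journalctl -k | grep 'libceph.*socket error'

    # Find which host carries osd9, to rule out a single-node issue
    ceph osd find 9

    # On that host, check the OSD daemon's own log around the incidents
    journalctl -u ceph-osd@9 --since '1 week ago'
    ```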
  5. [SOLVED] Subscription system not updating

    Thanks @tom. That much I understood. But what are the implications of "invalid: subscription info too old"? Will the stable repos stop talking to my Proxmox instances from a certain point on, or will they keep serving updates until the license expires, regardless of "invalid: subscription info too old"?
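    For inspecting the state in question, Proxmox VE ships a small CLI; a sketch of checking and refreshing the cached subscription info on a node:

    ```shell
    # Show the locally cached subscription state (status, key, next due date)
    pvesubscription get

    # Force a re-check against the Proxmox subscription servers
    # (requires the node to reach them, directly or via proxy)
    pvesubscription update --force
    ```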
  6. [SOLVED] Subscription system not updating

    Hello, I have a similar setup (hosts have no internet connection, deb packages via apt-cacher-ng) and ran into the same problems/questions as upnort. I also think it'd be great to expand the documentation for people migrating to the stable repos. Furthermore, I've got the problem that after a...
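    For reference, a minimal sketch of pointing an offline host at an apt-cacher-ng box; the address 10.0.0.5 is a placeholder for the cache host, and 3142 is apt-cacher-ng's default port:

    ```shell
    # Route all apt HTTP traffic through the apt-cacher-ng proxy
    # (10.0.0.5 is a placeholder; 00aptproxy is an arbitrary filename)
    echo 'Acquire::http::Proxy "http://10.0.0.5:3142";' \
        > /etc/apt/apt.conf.d/00aptproxy

    apt update
    ```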