Search results

  1. [SOLVED] misplaced objects after removing OSD

    Hi! We have 3 identical servers running Proxmox+Ceph with 2 HDDs per server as OSDs: - OS: Debian Buster - Proxmox version 6.4-1 - Ceph version 14.2.22-pve1 (Nautilus). One OSD went down, so we decided to remove it following the Ceph documentation here. Now we have 5 OSDs left: $ sudo ceph osd...
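
    Misplaced objects after a removal normally drain on their own as the cluster rebalances; a minimal sketch for watching that, assuming the removal itself was completed per the documentation:

        # Confirm the OSD is gone from the CRUSH map and count the survivors
        sudo ceph osd tree

        # Watch recovery/backfill progress; "misplaced" should trend toward 0
        sudo ceph -s
        sudo ceph pg stat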
  2. [SOLVED] ceph storage not available to a node

    Hi! We have a 4-node Proxmox 6 cluster. 3 nodes are Proxmox 6 with Ceph Luminous (stable) and 1 additional node with just Proxmox 6, no Ceph. The thing is, the Ceph storage used to be available to that 4th node, but it suddenly became "status unknown" in the GUI while remaining "available" to the...
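
    A node that runs no Ceph daemons reaches the RBD storage as an external client, which needs the keyring under /etc/pve/priv/ceph/. A sketch of the two checks worth doing on the 4th node (the storage name "cephstor" is an assumption):

        # How Proxmox itself rates each storage on this node
        pvesm status

        # The client keyring must match the storage entry in storage.cfg
        # ("cephstor" is a hypothetical storage name)
        ls -l /etc/pve/priv/ceph/cephstor.keyring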
  3. restore LXC with HA into local storage

    Hello, I am aware of the fact that local storages do not support containers/VMs with HA. But I have to restore some LXC backups (in .tar.lzo format) to local storage from time to time, and I always fail (local storages are not meant for HA-enabled containers/VMs). Is there a way to disable HA...
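
    One way out, assuming the container may be taken out of HA before the restore, is sketched below (CTID 100 and the archive name are hypothetical):

        # Drop the container from HA management first (CTID 100 is hypothetical)
        ha-manager remove ct:100

        # Then the restore to local storage is no longer blocked by HA
        pct restore 100 /var/lib/vz/dump/vzdump-lxc-100.tar.lzo --storage local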
  4. [SOLVED] LXC container inaccessible

    Hi guys, One of our LXC containers went inaccessible: - ssh times out - the noVNC console and the web interface console show a black screen with a blinking cursor - sudo pct enter <CTID> gives a black screen as well. But it is still possible to ping the container. The thing is, we've had this issue before (with...
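
    Since ping still answers, the container is running but its init or getty has likely hung. A sketch for probing it from the host without a console (CTID 101 is an assumption):

        # State of the container as Proxmox sees it (CTID 101 is hypothetical)
        pct status 101

        # Run a command inside it, bypassing the console entirely
        pct exec 101 -- ps aux

        # Low-level LXC view of the same container
        lxc-info -n 101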
  5. proxmox 6 and ceph luminous

    Hi guys, A little bit late, but we are planning to upgrade our Proxmox cluster from PVE 5 to PVE 6. Given the official upgrade docs for 5.x to 6.x and Ceph Luminous to Nautilus, I have a question: will it be okay if I leave the cluster on Ceph Luminous after the upgrade to PVE 6, at least for a...
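
    Before deciding, the checklist tool that ships with PVE 5.4 is worth running, since it flags known blockers, including Ceph ones. A short sketch:

        # Built-in pre-upgrade checklist for the 5.x -> 6.x jump
        pve5to6

        # Record the exact PVE and Ceph versions before touching anything
        pveversion -v
        ceph versions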
  6. [SOLVED] adding back previously removed osd goes wrong

    Hi, For some reason, we removed an OSD and decided to add it back under the same id (removed osd.6, then added it back as osd.6). We followed the Ceph documentation. The problem is, the OSD no longer works now that it is back: 1/ the OSD is marked as down (ceph osd...
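
    A frequent cause is leftover cluster state or leftover LVM metadata on the disk. A sketch of a full clean-up and re-creation, assuming the device is /dev/sdb (hypothetical) and the PVE 6 spelling of the pveceph subcommand:

        # Remove every trace of the failed re-add (osd.6 from the thread)
        ceph osd crush remove osd.6
        ceph auth del osd.6
        ceph osd rm osd.6

        # Wipe the disk, then let Proxmox recreate the OSD
        # (/dev/sdb is hypothetical; older PVE used "pveceph createosd")
        ceph-volume lvm zap /dev/sdb --destroy
        pveceph osd create /dev/sdb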
  7. [SOLVED] proxmox 5.13 unexpected reboot

    Hello, My configuration consists of three identical Proxmox nodes with the following: - proxmox pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-25-pve) on Debian Stretch - ceph version 12.2.12 Luminous (stable). And a 6TB NFS storage connected to the cluster with a 1Gb Ethernet bond (active-backup...
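
    For an unexplained reboot, the journal of the previous boot and the HA stack are the first places to look; a hung NFS mount combined with HA can end in watchdog fencing. A sketch:

        # Log of the boot that ended in the unexpected reboot
        journalctl -b -1 -e

        # Quorum and HA state; fencing shows up here
        pvecm status
        ha-manager status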
  8. [SOLVED] ceph startup script not working

    Hi! My configuration (before the upgrade to Proxmox 5 on Debian Stretch): - 3 Proxmox nodes running Debian Jessie - Proxmox installed on top of Debian Jessie - 2 hard drives per node as OSDs = a total of 6 OSDs. Today we upgraded our "proxmox 4 + ceph hammer" to "proxmox 5 + ceph luminous" following...
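
    On Stretch, Ceph daemons are managed through systemd units rather than the old /etc/init.d/ceph script, which is the usual reason a startup script stops working after this upgrade. A sketch for checking and enabling the units (OSD id 0 is an assumption):

        # Ceph on stretch is driven by systemd targets
        systemctl status ceph.target

        # Enable and start one OSD unit (id 0 is hypothetical)
        systemctl enable --now ceph-osd@0
        systemctl status ceph-osd@0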
  9. ceph upgrade

    Hi! I have Proxmox 4 on three nodes and Ceph Hammer on each. I want to upgrade Ceph from Hammer to Jewel and then from Jewel to Luminous. Since the upgrade is done node by node, will there be a risk during the process while some nodes run Ceph Hammer and the others Ceph Jewel (those being...
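
    Mixed-version operation during a rolling upgrade is expected; the usual precaution is to stop rebalancing while daemons restart. A sketch (Hammer and Jewel predate the "ceph versions" command, hence "ceph tell"):

        # Keep CRUSH from rebalancing while nodes restart one by one
        ceph osd set noout

        # Verify which version each OSD daemon is actually running
        ceph tell osd.* version

        # Only after the last node is done
        ceph osd unset noout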
  10. upgrading from 4.4-24 with ceph to 5.xx

    Hi, I am aware of this: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 - We have three (3) identical nodes: 256 GB of RAM, 4 TB of HDD, ... the same on each node - Each node is running Proxmox 4.4-24 with Ceph enabled - We do not have any shared storage; all VMs are on the nodes' hard drives. Could...
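
    At its core, the wiki page linked above is a repository switch from Jessie to Stretch followed by a dist-upgrade, done node by node; a Ceph cluster needs its own Hammer-to-Luminous path alongside it. A compressed sketch of the repo step (the no-subscription repository is one of the documented options):

        # Point apt at stretch and the PVE 5 repository (per the wiki page)
        sed -i 's/jessie/stretch/g' /etc/apt/sources.list
        echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
            > /etc/apt/sources.list.d/pve.list

        apt update && apt dist-upgrade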
