Hi!
We have 3 identical servers running proxmox + ceph with 2 HDDs per server as OSDs:
- OS: Debian Buster
- proxmox version 6.4-1
- ceph version 14.2.22-pve1 (nautilus)
One OSD went down so we decided to remove it following the ceph documentation here.
Now we have 5 OSDs left:
$ sudo ceph osd...
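For reference, a minimal sketch of the usual Nautilus removal sequence for a dead OSD (osd.5 is a placeholder ID; run the `systemctl` step on the node that hosts it):

```shell
# Mark the OSD out so Ceph rebalances its data onto the remaining OSDs
ceph osd out osd.5
# Once recovery has finished, stop the daemon on its host
systemctl stop ceph-osd@5
# Purge removes it from the CRUSH map and deletes its auth key and OSD entry
ceph osd purge 5 --yes-i-really-mean-it
# The tree should now list only the remaining OSDs
ceph osd tree
```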
hi!
We have a 4-node proxmox 6 cluster. 3 nodes are proxmox 6 with ceph luminous (stable) and 1 additional node with just proxmox 6, no ceph.
The thing is the ceph storage used to be available to that 4th node, but it suddenly became "status unknown" on the GUI while remaining "available" to the...
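A few first checks one might run on the node showing "unknown" (a sketch; storage and node names are whatever your cluster uses):

```shell
# How PVE itself currently sees each configured storage
pvesm status
# Check the rbd entry: the monhost list and any "nodes" restriction
cat /etc/pve/storage.cfg
# An external-client node also needs the keyring PVE keeps per storage
ls -l /etc/pve/priv/ceph/    # expect one <storage-id>.keyring per RBD/CephFS storage
```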
Hello,
I am aware that local storages do not support HA-managed containers/VMs. But I have to restore some LXC backups (in .tar.lzo format) to local storage from time to time, and it always fails (local storages are not meant for HA-enabled containers/VMs).
Is there a way to disable HA...
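One hedged way to work around it, assuming a hypothetical CTID 105 and dump path: take the container out of HA management, restore, then re-add it.

```shell
# Remove the container from HA management (CTID 105 is hypothetical)
ha-manager remove ct:105
# Restore the .tar.lzo dump onto local storage (path is made up)
pct restore 105 /var/lib/vz/dump/vzdump-lxc-105.tar.lzo --storage local
# Put it back under HA afterwards if desired
ha-manager add ct:105
```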
Hi Guys,
One of our LXC containers became inaccessible:
- SSH times out
- the noVNC and web interface consoles show a black screen with a blinking cursor
- sudo pct enter <CTID> gives a black screen as well
But it is still possible to ping the container.
The thing is, we've had this issue before (with...
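Some host-side checks that may narrow this down (a sketch; CTID 101 is a placeholder):

```shell
pct status 101                       # PVE's view: running or stopped
pct exec 101 -- uptime               # does the container's init namespace still answer?
dmesg | grep -i -e oom -e killed     # a black console with live ping often points at the OOM killer
journalctl -u pve-container@101 -n 50   # recent lxc messages for this container
```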
Hi guys,
A little bit late but we are planning to upgrade our proxmox cluster from pve5 to pve6.
Given the official docs for the upgrade from 5.x to 6.x and from Ceph Luminous to Nautilus, I have a question:
will it be okay if I leave the cluster on ceph luminous after the upgrade to pve6, at least for a...
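Either way, it is worth confirming what is actually running before and during the transition; a sketch:

```shell
pveversion      # confirm the PVE side is on 6.x
ceph versions   # per-daemon versions; shows whether everything is still 12.2.x (luminous)
ceph -s         # the cluster should be HEALTH_OK before starting any ceph upgrade
```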
Hi,
For some reason, we removed an OSD and decided to add it back under the same ID (removed osd.6, then added it back as osd.6).
We followed the ceph documentation.
The problem is that the OSD is no longer working since we added it back:
1/ the osd is marked as down (ceph osd...
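For comparison, a sketch of re-creating an OSD under its old ID with ceph-volume (the device path /dev/sdX is a placeholder; leftover CRUSH/auth entries are a common cause of the new OSD staying down):

```shell
# Make sure the old entries are really gone before re-creating
ceph osd purge 6 --yes-i-really-mean-it    # removes CRUSH entry, auth key and OSD id
# Wipe leftover LVM/BlueStore labels on the disk (device path is a placeholder)
ceph-volume lvm zap /dev/sdX --destroy
# Re-create the OSD, explicitly reusing the old id
ceph-volume lvm create --osd-id 6 --data /dev/sdX
ceph osd tree | grep osd.6                 # should eventually report "up"
```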
Hello,
My configuration consists of three identical proxmox nodes with the following:
- proxmox pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-25-pve) on Debian Stretch
- ceph version 12.2.12 luminous (stable)
And a 6 TB NFS storage connected to the cluster over a 1 Gb Ethernet bond (active-backup...
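For context, such an NFS storage is typically declared in /etc/pve/storage.cfg roughly like this (storage ID, server address and export path are made up):

```
nfs: backup-nfs
	server 192.168.1.50
	export /export/pve-backup
	path /mnt/pve/backup-nfs
	content backup,images
	options vers=3
```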
Hi!
My configuration (before the upgrade to proxmox 5 on Debian Stretch):
- 3 proxmox nodes running Debian Jessie
- proxmox installed on top of Debian Jessie
- 2 hard drives per node as OSDs = total of 6 OSDs
Today we upgraded our "proxmox 4 + ceph hammer" to "proxmox 5 + ceph luminous" following...
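A short post-upgrade verification sketch (the require-osd-release step is the usual finalization once every OSD runs the new release):

```shell
ceph versions                            # all mons/osds should report 12.2.x (luminous)
ceph osd require-osd-release luminous    # final step once every OSD is on luminous
ceph -s                                  # expect HEALTH_OK
```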
Hi!
I have proxmox 4 on three nodes and ceph hammer on each.
I want to upgrade ceph from hammer to jewel (and later from jewel to luminous). Since the upgrade is done node by node, will there be a risk during the process while some nodes will run ceph hammer and the others ceph jewel (those being...
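Whatever the answer, a rolling Ceph upgrade is usually bracketed like this (a sketch; hammer predates the `ceph versions` command, hence `tell`):

```shell
ceph osd set noout          # keep CRUSH from rebalancing while daemons restart one by one
# ...upgrade and restart mons first, then OSDs, node by node...
ceph tell osd.* version     # shows which daemons are still on the old release
ceph osd unset noout        # only once the whole cluster runs the new release
```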
Hi,
I am aware of this: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
- We have three (3) identical nodes: 256 GB of RAM, 4 TB of HDD, ... same on each node
- Each node is running proxmox 4.4-24 with CEPH enabled
- We do not have any shared storage, all VMs are on nodes' hard drives
Could...