Hello,
We had Proxmox VE 7 installed standalone (all VMs on one host, no cluster) from the official ISO, and the VM disks were on LVM.
After a reboot, the host refused to boot with the following error:
error: disk...
Hi!
We have 3 identical servers running proxmox+ceph with 2 HDDs per server as OSDs:
- OS: debian Buster
- proxmox version 6.4-1
- ceph version 14.2.22-pve1 (nautilus)
One OSD went down, so we decided to remove it following the ceph documentation here.
Now we have 5 OSDs left:
$ sudo ceph osd...
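For reference, the removal roughly followed the manual steps from the ceph docs (a sketch; <ID> is a placeholder for the failed OSD's id):
# take the OSD out of the cluster and let the data rebalance
$ sudo ceph osd out osd.<ID>
# stop the daemon on the host that carries it
$ sudo systemctl stop ceph-osd@<ID>
# remove it from the CRUSH map, delete its auth key, remove it from the cluster
$ sudo ceph osd crush remove osd.<ID>
$ sudo ceph auth del osd.<ID>
$ sudo ceph osd rm osd.<ID>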
We applied it on the three proxmox nodes with ceph enabled, not the last one.
We had to add the ceph repo and run update+upgrade to fix it. Thanks a lot @aaron
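For anyone hitting the same thing, this is roughly what that came down to (a sketch, assuming the no-subscription Ceph Nautilus repository for Buster; adjust the release names to your setup):
$ echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt update && sudo apt full-upgrade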
Moreover, the ceph storage (named "storage") is marked as inactive on the 4th node by the pvesm status command:
root@srv-X:/etc/pve# sudo pvesm status
got timeout
Name Type Status Total Used Available %
backsrv1 dir disabled...
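In case it helps someone else debugging this, a quick check is to compare the ceph client installed on that node with what the cluster actually runs (a sketch):
# client version installed on the node
$ sudo ceph --version
$ dpkg -l | grep ceph-common
# daemon versions running in the cluster (run on one of the ceph nodes)
$ sudo ceph versions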
Sorry, I made a typo; the 3 nodes have ceph nautilus:
root@srv-Y:/home/xxx# ceph version
ceph version 14.2.20 (886a8c9442681274213d1c7e897b12624edf6c8a) nautilus (stable)
The 4th node is just an additional node we recently purchased; it has no ceph installed, it only accesses ceph.
Hi!
We have a 4-node proxmox 6 cluster. 3 nodes are proxmox 6 with ceph luminous (stable) and 1 additional node with just proxmox 6, no ceph.
The thing is the ceph storage used to be available to that 4th node, but it suddenly became "status unknown" on the GUI while remaining "available" to the...
Hello,
I am aware that local storage does not support HA-enabled containers/VMs. But I have to restore some LXC backups (in .tar.lzo format) to local storage from time to time and it always fails (local storage is not meant for HA-enabled containers/VMs).
Is there a way to disable HA...
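What I have in mind is something like temporarily removing the container from HA, restoring, then adding it back (a sketch; <CTID> and the archive path are placeholders, and I am not sure this is the intended way):
# remove the CT from HA management
$ sudo ha-manager remove ct:<CTID>
# restore the backup to local storage, overwriting the existing CT
$ sudo pct restore <CTID> /path/to/vzdump-lxc-<CTID>.tar.lzo --storage local --force
# re-add it to HA afterwards
$ sudo ha-manager add ct:<CTID>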
Hi,
We did the upgrade and yes, ceph was inaccessible until we finished upgrading to proxmox 6. That is one minor issue we faced, but everything ended well.
Another thing we noticed is that the ceph systemd services do not work well (at least on our system), so we had to disable them...
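If anyone wonders what "disable them" looks like in practice, something along these lines (a sketch; the unit instance names depend on the OSD ids / hostnames on each node):
# list the ceph units present on the node
$ sudo systemctl list-units 'ceph*'
# disable a misbehaving unit, e.g. an OSD instance (<ID> is a placeholder)
$ sudo systemctl disable ceph-osd@<ID>.service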
Hi Guys,
One of our LXC containers became inaccessible:
- ssh times out
- the noVNC console and web interface console show a black screen with a blinking cursor
- sudo pct enter <CTID> gives a black screen as well
But it is still possible to ping the container.
The thing is we've had this issue before (with...
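For anyone wanting to look at it from the host side, these are the kind of checks that apply (a sketch; <CTID> is a placeholder):
# container state as seen by Proxmox
$ sudo pct status <CTID>
# low-level LXC view (PID, state, IPs)
$ sudo lxc-info -n <CTID>
# processes inside the container, launched from the host
$ sudo pct exec <CTID> -- ps aux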
Hi guys,
A little bit late, but we are planning to upgrade our proxmox cluster from pve5 to pve6.
Given the official upgrade docs for 5.x to 6.x and Ceph luminous to nautilus, I have a question:
will it be okay if I leave the cluster with ceph luminous after the upgrade to pve6, at least for a...
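As a side note, we plan to run the checklist script from the official upgrade docs on every node before touching anything (a sketch, assuming it ships with the current pve 5.4 packages):
# read-only checklist, safe to run multiple times
$ sudo pve5to6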
Sorry guys for the late reply,
Throttling the speed down was the solution; now the cluster is fine.
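(A sketch of what throttling can look like on the ceph side, assuming it is the recovery/backfill traffic that needs slowing down; the option names below are the standard ones, your exact knobs may differ:)
# reduce recovery/backfill pressure at runtime
$ sudo ceph tell 'osd.*' injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'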
NEVER MIX COROSYNC TRAFFIC WITH ANYTHING ELSE
Cheers!
Hi,
For some reason, we removed an OSD and decided to add it back under the same id (removed osd.6, then added it back as osd.6).
We followed the ceph documentation: ceph documentation
The problem is, the OSD is not working anymore now that we added it back:
1/ the osd is marked as down (ceph osd...
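For context, the re-add itself was done roughly like this (a sketch; /dev/sdX is a placeholder, assuming ceph-volume as in the docs):
# wipe the old LVM metadata on the disk
$ sudo ceph-volume lvm zap /dev/sdX --destroy
# recreate the OSD, reusing the freed id
$ sudo ceph-volume lvm create --osd-id 6 --data /dev/sdX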
Thank you Aaron,
We started from something rather small, that is why we went with a few 1Gb Ethernet ports, but we are planning to upgrade everything soon :)
I will send feedback here next week.
Yes to both questions
Our servers' network configurations are identical and look like the following (a config sketch follows after the list):
- four 1Gb ethernet interfaces
- port1 and port2 (round-robin mode) are dedicated to ceph
- port3 and port4 (active-backup mode) are dedicated to various vlans: LAN, internet, VPN, NFS storage and...
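A sketch of those two bonds in /etc/network/interfaces (interface names and the address are placeholders):
# ceph bond (port1 + port2, round robin)
auto bond0
iface bond0 inet static
    address 10.10.10.1/24
    bond-slaves eno1 eno2
    bond-mode balance-rr
    bond-miimon 100

# uplink bond (port3 + port4) carrying the vlan bridges
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-mode active-backup
    bond-miimon 100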
Hello,
My configuration consists of three identical proxmox nodes with the following:
- proxmox pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-25-pve) on debian stretch
- ceph version 12.2.12 luminous (stable)
And a 6TB NFS storage connected to the cluster with a 1Gb ethernet bond (active-backup...
We ended up using a 6TB NFS storage and moved all VM storage there to leave ceph unused, with NO DOWNTIME.
Everything went straight through, absolutely no issue
If it can help others:
Our NFS storage, with a 1Gb interface (125MB/s maximum theoretical speed), was able to handle almost 50 VMs...
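The moves themselves were just online disk moves (a sketch; <VMID>, the disk name and the target storage name are placeholders):
# move a running VM's disk to the NFS storage and drop the old copy
$ sudo qm move_disk <VMID> scsi0 <NFS_STORAGE> --delete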