Hi,
I read this sentence in the Ceph hardware recommendations: "Provision at least 10 Gb/s networking in your datacenter, both among Ceph hosts and between clients and your Ceph cluster."
This is my ceph configuration:
[global]
auth_client_required = cephx
auth_cluster_required = cephx...
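Before tuning anything else, it is worth confirming that the links actually negotiate 10 Gb/s and deliver it. A command sketch (the interface name eno1 and the peer IP 10.0.0.2 are placeholders; substitute your own):

```
# Check the negotiated link speed of the Ceph-facing NIC
ethtool eno1 | grep Speed

# Measure real throughput between two Ceph hosts
# (run "iperf3 -s" on the other host first)
iperf3 -c 10.0.0.2
```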
It would help if the steps were GUI-based. I currently have Ceph configured as shared storage on all 3 nodes.
Also, the Linux VMs are on local storage. I need to use CephFS to create a shared folder between 2 VMs on different nodes.
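For mounting CephFS inside the VMs, a sketch of the kernel-client mount (the filesystem name cephfs, the client name client.shared, and the monitor address are placeholders; adjust to your cluster):

```
# On a Proxmox node: create a client key restricted to the filesystem
ceph fs authorize cephfs client.shared / rw > /etc/ceph/ceph.client.shared.keyring

# Inside each VM (needs the key and a reachable monitor address)
mount -t ceph 10.0.0.1:6789:/ /mnt/shared \
  -o name=shared,secret=<key-from-keyring>
```

The same mount from both VMs gives them a shared directory backed by the cluster.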
OK, so I’m fairly new to Proxmox and Ceph.
I need some advice on how to migrate to Ceph.
Current Layout:
SERVER01: (Same exact hardware as SERVER02)
Production - hosting 20-30 VMs
Storage: hardware RAID
2 SSDs set up as RAID1
6 SSDs set up as RAID5
This storage hosts the VMs
Needs...
Storage noob here. I am building a new single node proxmox server on a 2U server that has 9 3.5in 4TB HDDs and 6 2.5in 800GB SSDs beyond what is used for Proxmox's boot image. The server will be running a mix of stateful container workloads, databases for stateless containers, and VMs for...
Hi,
I'm building a cluster with Proxmox mainly for running LXC CTs. I have 5 identical servers all connected through a 40Gbps switch. I also want to use the same servers for Ceph so they can share their local disks with each other.
In PVE Ceph wiki page it recommends 3 NICs for a high...
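The multi-NIC recommendation usually maps to separate public and cluster networks in ceph.conf, so client/monitor traffic and OSD replication traffic don't compete for the same link. A config sketch (the subnets are placeholders):

```
[global]
    public_network  = 10.10.10.0/24   # client and monitor traffic
    cluster_network = 10.10.20.0/24   # OSD replication/recovery traffic
```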
Please help me with your advice.
I need to implement fault tolerance at the datacenter level in the Proxmox VE hyperconverged cluster (pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.13-1-pve)) and Ceph Reef 18.2.1.
To test future changes, I created a virtual test bench in VirtualBox...
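For datacenter-level fault tolerance, the building block is a CRUSH hierarchy with datacenter buckets and a rule that replicates across them. A command sketch, assuming two datacenters named dc1 and dc2 and a host pve1 (all names are placeholders):

```
# Create datacenter buckets and move hosts into them
ceph osd crush add-bucket dc1 datacenter
ceph osd crush add-bucket dc2 datacenter
ceph osd crush move dc1 root=default
ceph osd crush move dc2 root=default
ceph osd crush move pve1 datacenter=dc1

# Rule that places replicas in distinct datacenters
ceph osd crush rule create-replicated replicated_dc default datacenter
ceph osd pool set <pool> crush_rule replicated_dc
```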
Having seen the Ceph blog post about RocksDB performance today, I was keen to see which available packages include the fixes. In the bug ticket I see comments about applying the fixes to the Proxmox Ceph packages, but looking in the git repo, I don't see the fixes in the commit...
Example-
****************
Cluster-
node 1-> has VM-1(on local storage)
node 2-> has VM-2(on local storage)
I am already using Ceph and HA. I want the "tmp" directories of VM-1 and VM-2 to be kept in sync.
I was thinking of mounting some storage from the Ceph storage pool in VM-1 and VM-2 for...
I have (had) a Proxmox 8.1 cluster with 2 nodes and one qDevice as a 3rd witness
/etc/ceph/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.227.101.33/24
fsid = something...
Flow:
1. Servers rebooted due to power maintenance.
2. After the reboot, I noticed one server had bad clock sync; fixing the issue and another reboot solved it.
3. After the time sync was fixed, the cluster started to load and rebalance.
4. It hung at an error state (data looks OK and everything stable and...
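When a cluster hangs in an error state after a rebalance, the usual first diagnostics are:

```
ceph -s                 # overall state and recovery progress
ceph health detail      # which PGs/OSDs are implicated
ceph osd tree           # confirm all OSDs are up and in
ceph pg dump_stuck      # PGs stuck in peering/recovery
```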
Hi,
We have several Proxmox hosts that have a Ceph storage and are connected to each other.
Technically Ceph works, but for one specific host this error message always appears on all of its OSDs (no matter which host's Web UI we use):
OSD '29' does not exist on host 'SRV-Host' (500)
The...
I need to build a 4 node Ceph cluster and I need an effective capacity of 30TB.
Each node is configured like this:
Supermicro 2U Storage Server 24 x NVME, X11DPU, Dual 1600W
2 x Intel Xeon Gold 6240 18 Core 2.6Ghz Processor
768G Ram
2x 100G Network
6 x Samsung PM9A3 3.84TB PCIe 4.0 2.5...
Users had been complaining about laggy VM performance on our HDD-based Ceph pool. With many users' VMs running applications that log to disk, I see a lot of small writes, with write IOPS > 5x read IOPS.
Reading up here and elsewhere, it seems that Write-back may yield better...
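Write-back is a per-disk cache setting on the VM's disk definition; a sketch with qm (the VMID 100, storage name, and volume name are placeholders):

```
# Switch an existing RBD-backed disk to write-back caching
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
```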
Hello community,
My first post here.
I have been searching on this subject, and the posts I have found on the forum and via search engines have not fully clarified my doubt.
To put in context. I have 3 PVE Nodes in Cluster with CEPH as backend storage for the VMs.
In the ceph...
Hi All,
I am using Proxmox 8.1.4 without license subscription.
I have changed to no subscription repository.
But it is still not possible to install Ceph. TT
/etc/apt/sources.list
deb http://ftp.debian.org/debian bookworm main contrib
deb http://ftp.debian.org/debian bookworm-updates main...
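Ceph on Proxmox is installed from its own repository, not the plain Debian ones, so a line like the following is needed as well (ceph-reef is an assumption; match it to the Ceph release you intend to install):

```
# /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription
```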
Just ran into this in the lab, haven't gone digging in prod yet.
pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.2.16-20-pve)
Cluster is alive, working, zero issues, everything in GUI is happy, 100% alive -- however... the "ceph device" table appears to have NOT updated itself for a...
Hey guys,
a post was published on the Ceph blog on 13.02 regarding a CMAKE_BUILD_TYPE bug, which apparently affects Ceph's Ubuntu packages. I know that we use Debian, but in a GitHub pull request Proxmox is mentioned with a significant improvement in latency.
Can we assume that we are...
Hi, when I try to attach a new osd to my ceph cluster, I get an error regarding the link https://quay.io/v2/
I would like to know where this error comes from and why.
And what is quay.io/v2/ actually used for? Does Ceph retrieve information from that remote server?
Thanks in advance
Error...
Hi,
I have a cluster of 3 compute nodes and 3 storage nodes.
I wanted to upgrade to pve 7.4 and ceph quincy.
Followed the official documentation https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy
All went OK until I restarted the OSDs on one of the storage nodes.
Some are upgraded and some...
We're successfully using Ceph on Proxmox, and have started to attempt to use CephFS.
We are able to mount, and create a file, but can then not write to the file, it shows the below error:
root@<redacted>:/mnt/ceph# echo "test" > /mnt/ceph/testfile
-bash: echo: write error: Operation not permitted...
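That symptom is often a cephx capability problem: the client key can create the file through the MDS but has no write access to the data pool's objects. A way to check the caps and, if needed, issue a correctly scoped key (the client name client.cephfs-rw and the fs name cephfs are placeholders):

```
# Inspect the caps of the client used for the mount
ceph auth get client.admin

# Or create a client explicitly authorized for read/write on the filesystem
ceph fs authorize cephfs client.cephfs-rw / rw
```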