I'm planning a Ceph cluster that will eventually go into production but will first serve as a test setup.
We need 125 TB of usable storage initially, with a cap of about 2 PB.
The cluster will serve 10 intensive users initially, up to 100 later on. The loads are generally read-heavy...
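For sizing, the usable-to-raw arithmetic is worth doing up front. A minimal sketch, assuming 3-way replication and an 80% fill ceiling (both assumptions on my part, not stated in the post) to leave headroom for rebalancing and nearfull warnings:

```python
# Rough Ceph capacity sizing: raw capacity needed for a usable target.
# Assumptions (not from the post): 3-way replication, 80% max fill.
def raw_capacity_tb(usable_tb, replicas=3, max_fill=0.80):
    """Raw disk capacity (TB) needed to provide `usable_tb` of usable space."""
    return usable_tb * replicas / max_fill

print(raw_capacity_tb(125))   # → 468.75 TB raw for the initial target
print(raw_capacity_tb(2000))  # → 7500.0 TB raw for the ~2 PB cap
```

Erasure coding would change the multiplier (k+m overhead instead of the replica count), so treat this as a replicated-pool estimate only.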
# ceph-volume simple scan
stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
stderr: Bad argument "/var/lib/ceph/osd/ceph-2", expected an absolute path in /dev/ or /sys or a unit name: Invalid argument
How do you define the Ceph OSD Disk Partition Size?
It always creates with only 10 GB usable space.
Disk size = 3.9 TB
Partition size = 3.7 TB
Using *ceph-disk prepare* and *ceph-disk activate* (See below)
OSD created but only with 10 GB, not 3.7 TB
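One plausible cause of an OSD coming up with exactly 10 GB regardless of disk size: BlueStore's `block` ended up as a plain file rather than a symlink to the partition, and file-backed blocks are created at the default `bluestore_block_size` of 10 GiB. It's worth checking whether `/var/lib/ceph/osd/ceph-N/block` is a symlink to the real partition. If a file-backed block is actually intended (e.g. for testing), the size can be raised in ceph.conf — a sketch, with the 4 TB figure as an illustrative value:

```ini
# ceph.conf sketch -- only relevant when BlueStore's "block" is a file,
# not a symlink to a real partition. The 10 GiB default explains an OSD
# that shows exactly 10 GB of capacity no matter how big the disk is.
[osd]
bluestore block size = 4000000000000   # ~4 TB; default is 10737418240 (10 GiB)
```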
I have 2 PVE nodes and 5 servers as Ceph storage, also built on PVE servers.
So I have two clusters:
1 cluster with 2 PVE nodes, named PROXMOX01 and PROXMOX02.
* PROXMOX01 runs proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve) pve-manager: 5.3-11 (running version...
I have 3 nodes with 2 x 1 TB HDDs and 2 x 256 GB SSDs each.
I have the following configuration:
1 SSD is used as the system drive (LVM-partitioned: about a third is the system partition and the rest is split into 2 partitions for the 2 x HDDs' WALs).
The 2 x HDDs are in a pool (the default...
I have a weird situation and thought you could help me.
I have a cluster of three nodes running Proxmox + Ceph.
I've installed the OS (+ Ceph) on 2 x USB drives as ZFS RAID1, and now I have high I/O wait on the CPU because the USB drives are slow.
I added 2 x 15K SAS drives and I'm wondering if it's possible to...
I've been working on a Ceph cluster for a few months now, and finally getting it to a point where we can put it into production. We're looking at possibly using an all flash storage system and I'd like to play around with using the inline compression feature with Bluestore.
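For experimenting with BlueStore inline compression, the OSD-level defaults can be set in ceph.conf (per-pool `compression_mode` / `compression_algorithm` settings applied with `ceph osd pool set` override them). A sketch of the relevant options:

```ini
# ceph.conf sketch: enable BlueStore inline compression as an OSD default.
# Per-pool settings via "ceph osd pool set <pool> compression_mode ..."
# take precedence over these.
[osd]
bluestore compression algorithm = snappy   # alternatives: zlib, zstd, lz4
bluestore compression mode = aggressive    # none | passive | aggressive | force
```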
Trying to set up a brand new Proxmox/Ceph cluster. I have a few questions:
1. Would it make sense to use an SSD for WAL/DB? All OSDs are using HDDs, so I believe I'd benefit from using an SSD for that.
2. Is it possible to use a partition instead of the whole drive for an OSD while using a...
My apologies in advance for the length of this post!
During a new hardware install, our Ceph node/server is:
Dell PowerEdge R7415:
1x AMD EPYC 7251 8-Core Processor
HBA330 disk controller (LSI/Broadcom SAS3008, running FW 15.17.09.06 in IT mode)
4x Toshiba THNSF8200CCS 200GB...
We have separate Ceph and Proxmox clusters (separate server nodes). I want to know whether the performance we're getting is normal; my feeling is that it could be much better with the hardware we're using.
So is there any way we can improve it with configuration changes...
given are 3 nodes:
each node 10 GB network
each node 8 enterprise spinners 4TB
each node 1 enterprise nvme 1TB
each node 64 GB RAM
each node 4-core CPU -> 8 threads, up to 3.2 GHz
pveperf of cpu:
CPU BOGOMIPS: 47999.28
each node latest proxmox of course...
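With 8 spinners and one 1 TB NVMe per node, a common layout is to carve the NVMe into equal DB/WAL partitions, one per OSD. A sketch of the arithmetic — the "DB should be roughly 1-4% of the data device" figure is a community rule of thumb, not something from the post:

```python
# Sketch: splitting one NVMe across a node's OSDs for BlueStore DB/WAL.
# Assumptions: 8 OSDs per node, 1 TB NVMe, 4 TB spinners, decimal units.
nvme_gb = 1000
osds_per_node = 8
hdd_tb = 4

db_per_osd_gb = nvme_gb / osds_per_node
print(db_per_osd_gb)        # → 125.0 (GB of DB/WAL per OSD)

# Rule-of-thumb check: DB should be roughly 1-4% of the data device.
pct_of_data = db_per_osd_gb / (hdd_tb * 1000) * 100
print(pct_of_data)          # → 3.125 (% of a 4 TB spinner)
```

At 3.1% of the data device this layout lands comfortably inside the rule of thumb; the trade-off is that losing the single NVMe takes down all 8 OSDs on that node.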
I just want to create a brand new Proxmox cluster.
On an older cluster I used GlusterFS; now I have some time and I'm trying to compare GlusterFS with the new Ceph (PVE 5.2).
In my lab I have 3 VMs (in a nested environment) with SSD storage.
iperf shows between 6 and 11 Gbps; latency is about 0.1 ms.
I make one...
How to specify DB device (not WAL device) for Bluestore OSD?
The Proxmox documentation of pveceph (pve.proxmox.com/pve-docs/chapter-pveceph.html) clearly shows how to specify a WAL device, but not a DB device.
pveceph createosd /dev/sdn -wal_dev /dev/sdb
Having used this method, within the...
I've had issues when I put in new journal disks and wanted to move existing disks from one journal disk to the new ones.
The issue was: I set the OSD to Out, then stopped the OSD and destroyed it.
Recreating the OSD with the new DB device makes the OSD never show up!
This is a...
I'm in the middle of migrating my current OSDs to BlueStore, but the recovery speed is quite low (5600 kB/s, ~10 objects/s). Is there a way to increase the speed?
I currently have no virtual machines running on the cluster so performance doesn't matter at the moment. Only the recovery is running.
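When there is no client I/O to protect, the recovery and backfill throttles can be loosened. A sketch of ceph.conf values (the same options can be injected at runtime with `ceph tell osd.* injectargs`); the specific numbers here are illustrative, not tuned recommendations:

```ini
# ceph.conf sketch: loosen recovery throttles while the cluster is idle.
# Luminous defaults are conservative (1 backfill per OSD, 0.1 s recovery
# sleep on HDDs), which is what caps speed at a few MB/s.
[osd]
osd max backfills = 4
osd recovery max active = 8
osd recovery sleep hdd = 0
```

Remember to dial these back down before putting VMs on the cluster, since aggressive recovery competes directly with client I/O.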
I am currently running a proxmox 5.0 beta server with ceph (luminous) storage.
I am trying to reduce the size of my ceph pools as I am running low on space.
Does ceph have some kind of option to use compression or deduplication to reduce the size of the pool on disk?