Hi,
yesterday one OSD went down and dropped out of my cluster; systemd stopped the service after it "crashed" 4 times. I tried restarting the OSD manually, but it keeps crashing immediately, so the OSD looks effectively dead.
Here's the first ceph crash info (the later ones look the same):
{...
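For reference, once systemd has hit its start limit the unit usually needs a reset before another manual start attempt (OSD id 12 is a placeholder):

systemctl reset-failed ceph-osd@12      # clear the start-limit state (OSD id is a placeholder)
systemctl start ceph-osd@12
journalctl -u ceph-osd@12 -e            # check why it keeps dying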
Hello,
I need some guidance on configuring rbd for better performance. I've followed the instructions and gone through the documentation multiple times, but I can't get the disk performance as high as I expect it to be.
My current setup is as follows:
3 similarly configured...
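For comparison, the kind of baseline usually measured in a case like this, sketched with fio inside a guest; /dev/vdb is a placeholder for a scratch disk (the write test is destructive):

# sequential throughput against a scratch disk (destructive! /dev/vdb is a placeholder)
fio --name=seqwrite --filename=/dev/vdb --rw=write --bs=4M --iodepth=16 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based
# random 4k read/write mix
fio --name=randrw --filename=/dev/vdb --rw=randrw --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based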
I reinstalled one of the three Proxmox Ceph nodes with a new name and a new IP.
I removed all LVM data and wiped the filesystems of the old disks with:
dmsetup remove_all
wipefs -af /dev/sda
ceph-volume lvm zap /dev/sda
Now when I create OSDs via the GUI or CLI they are always filestore, and I don't get it; the default should...
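A sketch of forcing bluestore explicitly instead of relying on the default, assuming a spare /dev/sda as in the commands above (as far as I remember, PVE 5.x's pveceph createosd also had a --bluestore switch that defaults to on):

# zap and explicitly request bluestore via ceph-volume (/dev/sda is a placeholder)
ceph-volume lvm zap /dev/sda --destroy
ceph-volume lvm create --bluestore --data /dev/sda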
Hi All,
I’m setting up a Ceph cluster with 3x PVE 6.2 nodes. Each node has the following disks:
7x 6TB 7200 Enterprise SAS HDD
2x 3TB Enterprise SAS SSD
2x 400GB Enterprise SATA SSD
This setup was previously used for the old Ceph (filestore) cluster, where it was configured to use the 2x 400GB SATA SSDs to...
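Assuming the plan is again to put the RocksDB/WAL on the SSDs, a minimal sketch for one HDD OSD (device names are placeholders; the --db_dev option is assumed from PVE 6.x, with ceph-volume shown as the fallback):

# one HDD OSD with its DB/WAL on an SSD (device names are placeholders)
pveceph osd create /dev/sdb --db_dev /dev/sdj
# or, equivalently, via ceph-volume:
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdj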
Hello there, I recently upgraded from Proxmox 5 to 6, as well as from Ceph Luminous to Nautilus. I wanted to go through and re-create the OSDs in my cluster. I ran into an issue with the second OSD I wanted to convert (the first one went fine). Here's what I get after I zap the disk:
pveceph...
Preface:
I have a hybrid Ceph environment using 16 SATA spinners and 2 Intel Optane NVMe PCIe cards (intended for DB and WAL). Because of enumeration issues on reboot, the NVMe cards can flip their /dev/{names}. This will cause a full cluster rebalance if the /dev/{names} flip. The...
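One hedged workaround sketch, assuming the DB/WAL devices can simply be referenced through their stable /dev/disk/by-id links instead of the /dev/nvme* names (all device names and the serial below are made-up placeholders):

# find the stable names of the Optane cards / partitions
ls -l /dev/disk/by-id/ | grep -i nvme
# reference the stable path when creating the OSD (all names are placeholders)
ceph-volume lvm create --bluestore \
    --data /dev/sdc \
    --block.db /dev/disk/by-id/nvme-INTEL_SSDPED1K375GA_EXAMPLESERIAL-part1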
I have a hyperconverged Proxmox/Ceph cluster of 5 nodes running the latest Proxmox/Ceph (bluestore) with 6x 480GB SSDs each (totalling 30 OSDs), and I'm starting to run low on storage. Is it possible, and would it be wise, to replace some (if not all) SSDs with bigger ones? Or if I added a node...
When using bluestore OSDs, the backup data stops flowing; the task shows as running, but no more data moves. I've swapped all the OSDs back to filestore and the backups work perfectly. Backups also stop if any OSD is using bluestore.
Syslog (the only errors reported):
Sep 24 22:58:35 proxmox3...
I want to share the following testing with you:
a 4-node PVE cluster with 3 Ceph Bluestore nodes, 36 OSDs in total.
OSD: st6000nm0034
block.db & block.wal device: Samsung sm961 512GB
NIC: Mellanox Connectx3 VPI dual port 40 Gbps
Switch: Mellanox sx6036T
Network: IPoIB separated public network &...
We initially tried this with Ceph 12.2.4 and subsequently reproduced the problem with 12.2.5.
Using 'lz4' compression on a Ceph Luminous erasure-coded pool causes OSD processes to crash. Changing the compressor to snappy results in the OSDs being stable; when the crashed OSD starts thereafter...
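For anyone wanting to move an affected pool off lz4, the per-pool setting is (pool name is a placeholder):

ceph osd pool set ecpool compression_algorithm snappy
ceph osd pool get ecpool compression_algorithm    # verify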
Hi everyone,
recently we installed Proxmox with Ceph Luminous and Bluestore on our brand new cluster, and we are experiencing problems with slow reads inside VMs. We have tried different settings on the Proxmox VMs, but the read speed stays the same: around 20-40 MB/s.
Here is our hardware configuration...
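Purely as an assumption of something worth ruling out, not a confirmed fix: the disk cache mode and iothread setting of the VM disk, e.g. (VM id, storage and volume names are placeholders):

# switch an existing Ceph-backed disk to writeback cache with an iothread
# (VM id 100, storage "ceph-rbd" and the volume name are placeholders)
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-rbd:vm-100-disk-1,cache=writeback,iothread=1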
Hello community,
a question about Ceph and Proxmox 5.1:
How do I correctly enable "compression" in Ceph Bluestore?
With the Ceph commands, e.g. ceph osd pool set test compression_mode aggressive, or is there a Proxmox-specific option with pveceph createpool?
Background:
I...
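As far as I know, pveceph createpool has no compression option of its own, so the usual route is plain ceph commands after creating the pool (pool name "test" as in the example above; snappy is only an example algorithm):

pveceph createpool test
ceph osd pool set test compression_algorithm snappy
ceph osd pool set test compression_mode aggressive
ceph osd pool get test compression_mode    # verify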
I am running a PVE 5 cluster with Ceph bluestore OSDs. They are HDD-only OSDs connected over 2x 1 GBit bonds. Don't get me wrong here, it isn't in production yet and I don't expect any fancy performance out of this setup.
I am highly impressed with the performance of it under Linux-KVMs and when...
Hi Folks,
shall I migrate from filestore to bluestore following this article?
http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/
or wait for Ceph 12.2.x? Currently PVE has the 12.1.2 Luminous RC...
but how long would the wait be? Are there any release plans for 12.2?
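For reference, the per-OSD replacement path in that article boils down to roughly this (OSD id 0 and /dev/sdb are placeholders, not a complete procedure):

ceph osd out 0                                   # let data drain off this OSD
# wait until "ceph osd safe-to-destroy 0" says it is safe
systemctl stop ceph-osd@0
ceph osd destroy 0 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdb --destroy
ceph-volume lvm create --bluestore --data /dev/sdb --osd-id 0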
regards
Hi Folks,
What performance should I expect from this cluster? Are my settings OK?
4 nodes:
system: Supermicro 2028U-TN24R4T+
2-port Mellanox ConnectX-3 Pro 56 Gbit
4-port Intel 10GigE
memory: 768 GBytes
CPU: dual Intel(R) Xeon(R) E5-2690 v4 @ 2.60GHz
ceph: 28 OSDs
24 Intel Nvme 2000GB...
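Before comparing against expectations, a cluster-level rados bench baseline is usually the first number worth collecting (pool name is a placeholder; the write run keeps its objects so the read runs have something to read):

rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados bench -p testpool 60 rand
rados -p testpool cleanup    # remove the benchmark objects afterwards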