This is a really difficult task. Ceph reads and writes in parallel and only acks once all OSDs have written their copies of that block. That means that if you write a single large file, every single block will be written into an object, and assuming you have a 3/2 pool size, this block will get another 2...
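A rough way to see this effect is rados bench with one client thread versus many (pool name and thread counts are just examples):

# one thread: every object must be fully replicated and acked before the next write is issued
rados bench -p testpool 60 write -t 1 --no-cleanup
# 16 threads: the per-object replication latency overlaps, so throughput scales up
rados bench -p testpool 60 write -t 16 --no-cleanup
# remove the benchmark objects afterwards
rados -p testpool cleanup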
Quick tip: you can also export and re-import the partition layout with fdisk. I have marked the options needed for that in fdisk's help menu with <<<<<-------:
root@vm-1:~# fdisk /dev/sda
Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to...
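A non-interactive sketch of the same idea using sfdisk (device names are just examples, and the second command is destructive):

# dump the partition table of /dev/sda to a text file
sfdisk -d /dev/sda > sda-layout.txt
# recreate the same layout on another disk
sfdisk /dev/sdb < sda-layout.txt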
Yeah, that doesn't quite fit. Our Ceph cluster consists of 4 nodes with 8 HDDs each. On sequential writes we get an average throughput that saturates our 10G Ethernet link, provided enough threads are used. Single-thread performance is much lower: ~138 MB/s read, ~96 MB/s write with a 16 GB file.
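A single-thread sequential test of that kind could look roughly like this with fio (mount point, file name and block size are just examples, assuming a mounted Ceph-backed filesystem):

# sequential write, one job, 16 GB file, 4 MB blocks, direct I/O
fio --name=seqwrite --filename=/mnt/cephtest/testfile --rw=write --bs=4M --size=16G --numjobs=1 --ioengine=libaio --direct=1
# sequential read of the same file
fio --name=seqread --filename=/mnt/cephtest/testfile --rw=read --bs=4M --size=16G --numjobs=1 --ioengine=libaio --direct=1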
This is really interesting. Could you keep me up to date about your performance findings etc. and which HW you used (SSD type, controller type)?
Is this data read in large chunks (sequential reads), or is it large amounts of random data (random reads)?
I'm interested in building a separate SSD pool for enhancing...
You can find out how much is allocated with lvdisplay <poolname> | grep Allocated. If you pipe that through a script and filter out the percentage, the script can send a syslog message or trigger a mail once x% is reached, or you can simply hand the whole thing over to a monitoring system where you...
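A minimal sketch of such a script, assuming the default Proxmox thin pool pve/data and a made-up 80% threshold:

#!/bin/bash
# warn via syslog once the thin pool allocation exceeds a threshold
THRESHOLD=80
# the "Allocated pool data" line is only present for thin pools
USED=$(lvdisplay /dev/pve/data | grep 'Allocated pool data' | awk '{print $4}' | cut -d. -f1)
if [ "$USED" -ge "$THRESHOLD" ]; then
    logger -p user.warning "thin pool pve/data is ${USED}% full"
fi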
Just for clarification, I have a question in the context of starting/restarting a machine:
If I migrate a VM online, a new qemu process is started on the destination host. Does this count as a restart of the qemu process, the same way as if I had shut down the machine and then started it again?
Update:
I still cannot create OSDs with pveceph osd create. Our NVMe cache disk is 375 GB, but on creation of an OSD pveceph complains that the disk is too small:
root@vm-3:~# pveceph osd create /dev/sda -db_dev /dev/nvme0n1
create OSD on /dev/sda (bluestore)
creating block.db on '/dev/nvme0n1'...
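If I read the pveceph documentation correctly, the DB size can also be capped explicitly instead of letting it default to a fraction of the OSD size, something like this (size in GiB, value just an example):

pveceph osd create /dev/sda -db_dev /dev/nvme0n1 -db_size 40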
Update:
The cause of the problem was a mismatch between the hostname transmitted by zabbix_sender (VM-2) and the hostname entered in the Zabbix UI (vm-2). Since this stuff is case-sensitive, the server rejected the data.
Unfortunately, the Zabbix module gives no information about this, and the Zabbix...
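For reference, a manual test like the following (server address and item key are just examples) makes the case sensitivity easy to spot, since -s must match the host name configured in the Zabbix UI exactly:

zabbix_sender -z zabbix.example.com -s "vm-2" -k proxmox.test -o 1 -vv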
Thanks very much for clearing this up.
I had misunderstood DB and WAL devices, so since our cluster is a bit below our expectations regarding performance, I will rebuild all OSDs to use our NVMe SSD for the DB too, one by one.
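The per-OSD procedure I have in mind is roughly the following (OSD id and device names are just examples, and I will wait for the cluster to return to HEALTH_OK between steps):

ceph osd out 12
# wait for rebalancing to finish
systemctl stop ceph-osd@12
pveceph osd destroy 12 --cleanup
pveceph osd create /dev/sdc -db_dev /dev/nvme0n1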
Thx, this can be considered closed...
So, just for clarification:
Right now I cannot use pveceph createosd because it does not accept the partition I give as an argument, since it expects an entire disk. This disk is completely used by the partitions I prepared manually before OSD creation.
If I use ceph-volume lvm create with...
We are on PVE 6 with Ceph Nautilus (14.2.4)
Well, somehow it wasn't clear to me that -db_dev moves both DB and WAL to the device. I thought it just moves the DB, while -wal_dev moves just the WAL. The ceph-volume tool complains if you want to use the same device for both WAL and DB.
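So if I understand it correctly now, specifying only the DB device should be enough, since the WAL then ends up on the same fast device anyway; roughly (device names are just examples):

ceph-volume lvm create --bluestore --data /dev/sde --block.db /dev/nvme0n1p5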
In my question I was really...
Thank you, this helps.
Will it be possible to create new OSDs with pveceph createosd, or should I stick to the lvm method?
Maybe creating a separate partition for each WAL is just the wrong way of doing it? I'm not sure, but can I put multiple WALs onto the same single device? Like:
pveceph createosd...
I took a look at what this LVM-based OSD approach is all about and came across some problematic things:
Imagine a softly defective HDD with occasional read errors and a rising reallocated sector count in a fairly big server with about 32 drives.
These soft failures are not really recognised by the HBA...
I issued the following command:
root@vm-2:~# ceph-volume lvm create --bluestore --data /dev/sde --block.wal /dev/nvme0n1p5
And it succeeded... no error messages. OSD.15 has been created successfully.
sde...
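To double-check where the new OSD ended up, listing the LVM-based OSDs should show the data device and the WAL partition for each OSD:

ceph-volume lvm list
# or limited to one device
ceph-volume lvm list /dev/sde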
We have a 6-node cluster, 4 of which are Ceph nodes, with 8 HDDs and one enterprise NVMe SSD per node. In the last few days several HDDs died and have to be replaced.
Back when I set up the Ceph storage, I created a partition on the SSD for every OSD to serve as a WAL device.
When I try to...
So, well, I have to dig up this thread because I stumbled upon the "device path" thing...
We have a 6-node cluster, 4 of which are Ceph nodes, with 8 HDDs and one enterprise NVMe SSD per node. When I set up the Ceph storage, I created a partition on the SSD for every OSD to serve as a WAL...
I have already increased the debug level; it then shows what can be seen further up.
Somehow, though, I can no longer manage to change the debug level:
root@vm-2:~# ceph tell mgr.vm-2 config set debug_mgr 20/20
no valid command found; 2 closest matches:
config show <who> {<key>}
config...
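What I assume should work instead, either via the admin socket on the node running the mgr or via the centralized config database (the mgr name is just an example):

ceph daemon mgr.vm-2 config set debug_mgr 20/20
ceph config set mgr debug_mgr 20/20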
Under ID you enter a name for the storage, which will appear in the Proxmox web GUI. Under Server you enter the name or IP address of your QNAP NAS. Under Export you should then be able to select the directory on the QNAP via the dropdown.
Under Content you then select "VZDump Backup...
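The result should end up in /etc/pve/storage.cfg looking roughly like this (ID, server address and export path are just examples):

nfs: qnap-backup
        server 192.168.1.50
        export /share/vzdump
        content backup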
My guess is also that it has something to do with permissions.
The permissions for zabbix_sender look like this:
root@vm-2:~# ls -ahl /usr/bin/zabbix_sender
-rwxr-xr-x 1 root root 205K Feb 6 2019 /usr/bin/zabbix_sender
So it may be executed by any user on the...
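To rule out permissions, I assume one could also run a test send as the unprivileged user that actually calls the binary (user name, server and key are just guesses):

sudo -u zabbix zabbix_sender -z zabbix.example.com -s vm-2 -k proxmox.test -o 1 -vv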