Search results

  1. Ingo S

    [SOLVED] Attaching an .img file to a VM

    Check whether the boot order is correct: select the VM -> "Options" in the middle menu -> Boot Order. The virtual bus the disk is attached to should be listed there, e.g. ide, scsi0 or similar.
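
    The same check can be done from the shell with PVE's qm tool; a minimal sketch, assuming VMID 100 (the GUI path above does the same thing):

      # show the current boot settings of the VM
      qm config 100 | grep -i boot
      # point the boot disk at the attached drive, e.g. scsi0
      qm set 100 --bootdisk scsi0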
  2. Ingo S

    Connecting/upgrading Proxmox with a NAS

    If you want to serve data on the network, FreeNAS is a good choice. It can also run as a virtual machine inside Proxmox, but that is not very interesting for your use case. I would actually do it the way several people have already suggested: for your NAS you need...
  3. Ingo S

    Ceph - question before first setup, one pool or two pools

    Hence my hint towards AMD, since even on their 64-core (128-thread) flagship there are only two NUMA nodes. I suppose CPUs with lower core counts, e.g. 12-core systems, might even have only one NUMA node. But the latter is just my guess and should be checked beforehand.
  4. Ingo S

    Ceph - question before first setup, one pool or two pools

    Wow, all of this fits into a 1U server? This setup sounds reasonable. Which controller will you be using for the SSDs? Are you going for an Intel or AMD based system? Just asking because AMD made a big leap in performance and seems very good in terms of performance per dollar.
  5. Ingo S

    Ceph - question before first setup, one pool or two pools

    If the OSD is an HDD, you should place DB+WAL on an SSD, since writing to the cluster produces lots of IO on the DB. If the DB sits on the HDD OSD, overall performance will be much lower, especially during a recovery, e.g. after an HDD failure. On our cluster we have one 375GB SSD...
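
    The command form for this setup appears in a later post in these results; a minimal sketch, assuming the HDD is /dev/sda and the SSD is /dev/nvme0n1:

      # bluestore OSD on the HDD, with the DB (and WAL) placed on the SSD
      pveceph osd create /dev/sda -db_dev /dev/nvme0n1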
  6. Ingo S

    Ceph - question before first setup, one pool or two pools

    For installing the PVE OS it is sufficient, if there is enough space in the case, to just put another small SSD in there. We use 32GB Intel Optane SSDs for the OS with an NVMe-to-PCIe adapter. If your servers have an onboard M.2 slot for such a drive, you could use that instead. @Alwin In...
  7. Ingo S

    Ceph - question before first setup, one pool or two pools

    But please be aware that only overall throughput increases with the number of users accessing data. Each individual user will still see a data rate roughly equal to single-thread performance. I only wanted to make this very clear. I don't know of an option that allows you to...
  8. Ingo S

    Ceph - question before first setup, one pool or two pools

    This is a really difficult task. Ceph reads and writes in parallel and only acknowledges a write once all OSDs have written their copies of that block. That means, if you write a single large file, every single block will be written into an object, and assuming you have a 3/2 pool size, each block will get another 2...
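
    For context, a "3/2 pool" refers to the pool's size/min_size settings; a minimal sketch, with a hypothetical pool name:

      # keep 3 replicas of every object; block I/O if fewer than 2 are available
      ceph osd pool set mypool size 3
      ceph osd pool set mypool min_size 2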
  9. Ingo S

    Cloning an SSD with Proxmox, possible?

    Quick tip: you can also export and re-import the partition layout with fdisk. I have marked the options needed for this in fdisk's help menu with <<<<<-------: root@vm-1:~# fdisk /dev/sda Welcome to fdisk (util-linux 2.33.1). Changes will remain in memory only, until you decide to...
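
    The same export/import can also be scripted with sfdisk from the same util-linux suite; a minimal sketch, with example device names:

      # dump the partition table of the source disk to a text file
      sfdisk -d /dev/sda > sda-layout.dump
      # replay that layout onto the target disk (overwrites its partition table!)
      sfdisk /dev/sdb < sda-layout.dump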
  10. Ingo S

    Ceph - question before first setup, one pool or two pools

    Yeah, that doesn't quite fit. Our Ceph cluster consists of 4 nodes with 8 HDDs each. We get an average throughput on sequential writes that saturates our 10G Ethernet link, provided there are enough threads. Single-thread performance is much lower: ~138 MB/s read, ~96 MB/s write with a 16GB file.
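
    Numbers like these can be reproduced with rados bench; a minimal sketch, assuming a test pool named "testpool":

      # multi-threaded sequential write for 60s; keep the objects for the read test
      rados bench -p testpool 60 write -t 16 --no-cleanup
      # single-thread comparison, then a sequential read pass
      rados bench -p testpool 60 write -t 1 --no-cleanup
      rados bench -p testpool 60 seq -t 1
      # remove the benchmark objects afterwards
      rados -p testpool cleanup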
  11. Ingo S

    Ceph - question before first setup, one pool or two pools

    This is really interesting. Could you keep me up to date about your performance findings etc. and which hardware you used (SSD type, controller type)? Is this data large chunks (sequential reads) or large amounts of random data (random reads)? I'm interested in building a separate SSD pool for enhancing...
  12. Ingo S

    [SOLVED] WARNING: You have not turned on protection against thin pools running out of space

    You can find out how much is allocated with lvdisplay <poolname> | grep Allocated. If you run that through a script and filter out the percentage, the script can send a syslog message or trigger an email once usage exceeds x%, or you can hand the whole thing over to a monitoring system where you...
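
    A minimal sketch of such a script, assuming the thin pool is pve/data and an 80% threshold (both are examples):

      #!/bin/bash
      # warn via syslog when the thin pool crosses the threshold
      POOL="pve/data"   # example VG/LV name, adjust to your setup
      LIMIT=80
      # data_percent is the same allocation figure lvdisplay shows
      USED=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ' | cut -d. -f1)
      if [ "$USED" -ge "$LIMIT" ]; then
          logger -p user.warning "thin pool $POOL at ${USED}% allocation"
          # an email or a push to a monitoring system could go here instead
      fi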
  13. Ingo S

    [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    Just a quick question: How can we remove the trace pattern config?
  14. Ingo S

    [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    Just for clarification, I have a question in the context of starting/restarting a machine: If I migrate a VM online, a new QEMU process is started on the destination host. Does this count as a restart of the QEMU process, the same way as if I had shut down the machine and then started it again?
  15. Ingo S

    PVE6 pveceph create osd: unable to get device info

    Update: I still cannot create OSDs with pveceph osd create. Our NVMe cache disk is 375GB, but on creation of an OSD pveceph complains that the disk is too small: root@vm-3:~# pveceph osd create /dev/sda -db_dev /dev/nvme0n1 create OSD on /dev/sda (bluestore) creating block.db on '/dev/nvme0n1'...
  16. Ingo S

    [SOLVED] Ceph Module Zabbix "failed to send data..."

    Update: The cause of the problem was a mismatch between the hostname transmitted by zabbix_sender (VM-2) and the hostname configured in the Zabbix UI (vm-2). Since this is case-sensitive, the server rejected the data. Unfortunately the Zabbix module gives no hint about this, and the Zabbix...
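
    A minimal sketch of a matching manual send, with an example server address and item key; the -s value must match the host name in the Zabbix UI exactly, including case:

      # send one value; "vm-2" (not "VM-2") must match the UI host entry
      zabbix_sender -z zabbix.example.com -s "vm-2" -k ceph.overall_status -o OK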
  17. Ingo S

    PVE6 pveceph create osd: unable to get device info

    Thanks very much for clearing this up. I had a misunderstanding of DB and WAL devices, so since our cluster is a bit below our expectations regarding performance, I will rebuild all OSDs to use our NVMe SSD for the DB too, one by one. Thx, this can be considered closed...
  18. Ingo S

    PVE6 pveceph create osd: unable to get device info

    So, just for clarification: Right now I cannot use pveceph createosd because it does not accept the partition I give as an argument, since it expects an entire disk. This disk is completely used by the partitions I prepared manually before OSD creation. If I use ceph-volume lvm create with...
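
    The ceph-volume variant that accepts pre-made partitions looks roughly like this; the device names are examples:

      # whole-disk data device, DB on a prepared NVMe partition
      ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1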
  19. Ingo S

    PVE6 pveceph create osd: unable to get device info

    We are on PVE 6 with Ceph Nautilus (14.2.4). Well, somehow it wasn't clear to me that -db_dev moves DB+WAL to the device. I thought it just moves the DB, while -wal_dev moves just the WAL. The ceph-volume tool complains if you want to use the same dev for WAL and DB. In my question I was really...
  20. Ingo S

    PVE6 pveceph create osd: unable to get device info

    Thank you, this helps. Will it be possible to create new OSDs with pveceph createosd, or should I stick to the lvm method? Maybe it's just a wrong way of doing it, creating a separate partition for each WAL? I'm not sure, but can I put multiple WALs onto the same single device? Like: pveceph createosd...