Search results

  1.

    [SOLVED] Ceph: new nodes, how to add new pools?

    Hello everyone, and have a nice bridge day! I am running 3 Ceph nodes and plan to expand them by 2 more nodes soon. There is a 3/2 pool (config as in the HowTo), which I can't change, so I would have to add a new 5/3 pool, right? I also have one basic question...
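
    For reference, the replication of a pool is controlled via "ceph osd pool set"; a minimal sketch of both options, with "vm-pool" and "vm-pool-5-3" as placeholder names and 128 as an assumed placement-group count:

        # raise the replica count of an existing pool
        ceph osd pool set vm-pool size 5
        ceph osd pool set vm-pool min_size 3

        # or create a separate 5/3 pool next to the existing one
        ceph osd pool create vm-pool-5-3 128 128
        ceph osd pool set vm-pool-5-3 size 5
        ceph osd pool set vm-pool-5-3 min_size 3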
  2.

    Proxmox VE Ceph Benchmark 2018/02

    second round of my ssd benchmark testing (SSD | Controller | BW | IOPS):
    HP SSD S700 240GB | 5171 KB/s | 1292
    Hynix Canvas SL300 240GB | 5166 KB/s | 1291
    Intel DC S3520 240GB | Intel | 83085 KB/s | 20771
    Plextor PX-256S3C 256GB | Silicon Motion SM2254 / TLC | 5045 KB/s | 1261...
  3.

    [SOLVED] Ceph and corosync over the same network

    Hello, I don't want to open a separate thread for my question, which is similar: at the moment my cluster communication runs in a Gigabit network separated via VLAN. Ceph, or rather the storage (for backups), is in a completely physically separate 10G network. Since I still have room on the 10G switches...
  4.

    [SOLVED] suggestions p420i raid controller with CEPH

    @silvered.dragon Did you set your controller to HBA mode?
  5.

    Ceph slow with fast SSDs and 10G

    Hi, setup: 3 node (HP SE326M1) Ceph cluster, with a 10G storage network and 20 OSDs. I have now swapped all SAS disks for SSDs, but the rados benchmark still only gives me 240-250 MB/s. I have disabled the controller cache, which made no difference. The controller operates...
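
    The number quoted above is typically measured with the rados bench tool; a minimal sketch of such a run, with "testpool" as a placeholder pool name:

        # 60 s sequential write benchmark, keep the objects for the read test
        rados bench -p testpool 60 write --no-cleanup
        # 60 s sequential read benchmark on the objects written above
        rados bench -p testpool 60 seq
        # remove the benchmark objects afterwards
        rados -p testpool cleanup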
  6.

    Expand boot zfs raid1 with a third disk?

    Hi, my Proxmox hosts have ZFS RAID1 with 2 disks as the root file system. Now I got a few more disks and want to expand that RAID1 with an additional disk. Or, if it isn't risky, even migrate it to raidz2. Is that possible, or do I have to reinstall Proxmox?
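
    For reference, a third disk can be attached to an existing two-way ZFS mirror with "zpool attach"; a minimal sketch, assuming the pool is called rpool and the disk paths are placeholders:

        # check the current layout and pick one existing member of the mirror
        zpool status rpool
        # attach the new disk to that member -> three-way mirror
        zpool attach rpool /dev/disk/by-id/EXISTING-DISK /dev/disk/by-id/NEW-DISK
        # note: a mirror vdev cannot be converted to raidz2 in place, and on a
        # boot pool the partition layout and bootloader must also be copied to
        # the new disk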
  7.

    HA: VM starts on non-group node. Mistake?

    Hi, I set up an HA group with the 2 nodes that have a lot of memory, and added 2 VMs to it. Now I had to power off both nodes and the VMs were started on the other nodes. Is this correct? I thought HA groups specify between which hosts a VM can "jump"...
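
    A likely explanation: an HA group is not "restricted" by default, so if all group members are offline the resources may be recovered on nodes outside the group. A minimal sketch with ha-manager, where the group, node and VM names are placeholders:

        # create a group whose members are the only nodes allowed to run the VMs
        ha-manager groupadd big-mem-nodes --nodes "node1,node2" --restricted 1
        # put a VM under HA and bind it to that group
        ha-manager add vm:100 --group big-mem-nodes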
  8.

    Ceph SSD OSD marked as HDD?

    Hi, I have a 3 node Ceph cluster with several SAS OSDs and SSDs as their journal devices. Now I added my first SSD OSD as a datastore (with the journal on itself). Ceph set the class of that SSD to "hdd" (OSD type bluestore). Why? Can I change it? Should I change it?
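
    The device class of a Luminous OSD can be changed by hand (it often ends up as "hdd" when a RAID/HBA controller hides the rotational flag); a minimal sketch, with osd.12 as a placeholder ID:

        # the auto-detected class has to be removed before a new one is set
        ceph osd crush rm-device-class osd.12
        ceph osd crush set-device-class ssd osd.12
        # verify the result
        ceph osd tree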
  9.

    Replication from ceph to local storage

    hmm I want to stay with built-in features as much as possible. They are tested and I trust them :-) From the Proxmox perspective: Do you think this is a feature which will find its way to a release?
  10.

    Replication from ceph to local storage

    Move yes, but I am talking about automated replication.
  11.

    Ceph pool used in % ?

    Thanks for your link. Correct me if I am wrong, in simple words: the size shown on pools is my used data, and the % is the share that this used data (plus metadata and other required overhead) makes up of the global usage (divided by the replication factor)?
  12.

    Replication from ceph to local storage

    Hi, as I read, it is mandatory to use ZFS for replication. Is this a technical limitation, or could replication from Ceph storages to a local storage become possible in future releases?
  13.

    Ceph pool used in % ?

    So, for example: there are 2 VMs, their disk sizes together are 100GB, but only 50% is used per VM. Their data lies on a 3 node Ceph cluster; the performance panel would show 300GB of "all OSDs' capacities". At Ceph -> Pools, would it show 50% and Total 50GB, or would it display 50% and Total 100GB?
  14.

    Ceph pool used in % ?

    Hi, at Ceph -> Pools there are "Used" in % and "Total". Total is, as I understand it, the usage of all VM disks; multiplied by the replication factor it gives the usage value on the performance panel. But what does "Used" in % mean?
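
    The same numbers can be checked on the CLI for comparison with the GUI; a minimal sketch:

        # global raw usage across all OSDs
        ceph df
        # per-pool statistics including USED and %USED
        ceph df detail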
  15.

    VZDump slow on ceph images, RBD export fast

    Is there any new feedback from users who upgraded Proxmox and Ceph and could increase backup speed? I can tell for myself: Last weekend I upgraded to 5.1 and Ceph 12.2.4 and after 2 rounds of backups I can say backup speed didn't change, at least for me.
  16.

    Upgrade Ceph and set tunables -> object_misplaced stuck

    Well no, but thank you! I deleted both pools, the cluster was healthy immediately, and then I restored all VMs from backup.
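
    For reference, deleting a pool on Luminous has to be allowed explicitly first; a minimal sketch, with "mypool" as a placeholder name:

        # pool deletion is disabled by default on the monitors
        ceph tell mon.* injectargs --mon-allow-pool-delete=true
        # the pool name has to be given twice plus the confirmation flag
        ceph osd pool delete mypool mypool --yes-i-really-really-mean-it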
  17.

    Upgrade Ceph and set tunables -> object_misplaced stuck

    Hi, I have a 3 node Ceph cluster, which was updated yesterday to Proxmox 5.1 and Ceph Luminous. I strictly followed the guides from the wiki, so in the end I had a healthy cluster. Then I executed "ceph osd crush tunables optimal". It started rearranging objects, but over time it got stuck with mon.0...
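
    For anyone following along, the rebalancing after changing the tunables can be watched with the standard status commands; a minimal sketch:

        # switch the CRUSH tunables to the optimal profile for the running release
        ceph osd crush tunables optimal
        # watch the misplaced/degraded object counters while data moves
        ceph -s
        ceph health detail
        watch -n 5 'ceph -s'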
  18.

    Proxmox VE Ceph Benchmark 2018/02

    Hi, because the SSD is such an essential part and, on the other hand, it should be cost efficient (well, at least for me), I did some benchmarking on several consumer SSDs. All tests have been made with fio on the same system, with the write cache disabled. Maybe it can be useful for somebody else too...
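
    A typical way to run such a test (along the lines of the Proxmox Ceph benchmark paper) is a single-job, synchronous 4k write with fio directly against the raw device; a minimal sketch, with /dev/sdX as a placeholder (the run destroys all data on that device):

        # disable the drive's volatile write cache
        hdparm -W 0 /dev/sdX
        # 4k sync writes, one job, queue depth 1 -- destroys data on /dev/sdX!
        fio --ioengine=libaio --filename=/dev/sdX --direct=1 --sync=1 \
            --rw=write --bs=4k --numjobs=1 --iodepth=1 \
            --runtime=60 --time_based --group_reporting --name=ssd-test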
  19.

    4.4 upgrade to 5.1 with ceph: no luminous packages found

    oho, I would not have expected something like this... I rather assumed I had done something stupid... Anyway, you don't want to offer a repository for jessie too? :D
  20.

    4.4 upgrade to 5.1 with ceph: no luminous packages found

    Hi, I wanted to follow the proxmox update guide: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 Because I use ceph I started with: https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous When I execute "apt-get update && apt-get dist-upgrade", there are no ceph updates or packages...
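
    Presumably the catch is that the Ceph_Jewel_to_Luminous guide works with a repository built for stretch (Proxmox VE 5), not jessie, so a 4.4 system sees no Luminous packages; a sketch of the repository entry the guide is based on:

        # /etc/apt/sources.list.d/ceph.list
        deb http://download.proxmox.com/debian/ceph-luminous stretch main

        apt-get update && apt-get dist-upgrade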