Search results

  1. Set ceph cluster to noout if host shutdown in correct way?

    Hi, I have a 3-node Ceph cluster; the nodes are in different racks. If one of those racks loses power, the UPS should gracefully shut down that node. That much is clear. But is it possible to set noout on the whole cluster beforehand, only when the UPS requests a shutdown?
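    A minimal sketch of how such a hook could look, assuming an apcupsd-style UPS daemon and a hook script at /etc/apcupsd/doshutdown (both are assumptions, not from the thread):

      #!/bin/sh
      # Hypothetical UPS shutdown hook: keep Ceph from marking OSDs out and rebalancing
      # while this node is powered off.
      ceph osd set noout        # cluster-wide flag; run from any node with the admin keyring
      /sbin/shutdown -h now     # gracefully power off this node
      # After power returns and all nodes are back up: ceph osd unset noout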
  2. Enable framebuffer inside LXC container?

    Hi, I need remote-control (ScreenConnect) access to my LXC container. ScreenConnect supports Debian 9 via its agent. Sadly, ScreenConnect doesn't bring up a picture, because I would need to enable the framebuffer inside LXC. How can I achieve this without installing a desktop environment?
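    One possible approach is to pass the host's framebuffer device into the container; a minimal sketch, assuming the host actually exposes /dev/fb0 (char device major 29) and using a hypothetical container ID of 101:

      # Append raw LXC keys to the container config (CT ID and device numbers are assumptions)
      echo 'lxc.cgroup.devices.allow: c 29:0 rwm' >> /etc/pve/lxc/101.conf
      echo 'lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file' >> /etc/pve/lxc/101.conf
      pct stop 101 && pct start 101   # restart the container to pick up the new config

    For an unprivileged container, additional permissions or ID mapping may still be needed before the device is usable inside the container.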
  3. VZDump slow on ceph images, RBD export fast

    root@ceph:~# pveversion -v proxmox-ve: 4.4-96 (running kernel: 4.4.83-1-pve) pve-manager: 4.4-18 (running version: 4.4-18/ef2610e8) pve-kernel-4.4.35-1-pve: 4.4.35-77 pve-kernel-4.4.67-1-pve: 4.4.67-92 pve-kernel-4.4.83-1-pve: 4.4.83-96 lvm2: 2.02.116-pve3 corosync-pve: 2.4.2-2~pve4+1 libqb0...
  4. VZDump slow on ceph images, RBD export fast

    Yesterday I updated all nodes to the latest 4.4, and here is my feedback: Kernel Version: Linux 4.4.83-1-pve #1 SMP PVE 4.4.83-96 (Tue, 19 Sep 2017 10:30:12 +0200) PVE Manager Version: pve-manager/4.4-18/ef2610e8 Ceph Version: 10.2.10 The backup ran through the night and the result is...
  5. Ceph pool usage in the Proxmox GUI different from "ceph df"?

    Linux 4.4.67-1-pve #1 SMP PVE 4.4.67-92 Ceph 10.2.9
  6. Ceph pool usage in the Proxmox GUI different from "ceph df"?

    Hi everyone, why is the output in the Proxmox web GUI under Ceph -> Pools different from what "ceph df" reports directly on the host? For the ceph-vm pool, for example, the GUI shows: Used: 40% of a total of 1.31 TB. In the shell, "ceph df" reports for ceph-vm: Used 61% of a total of 846 GB. ceph-vm is a 3/2 with...
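    A quick way to see where the numbers come from, offered only as a sketch: "ceph df" reports per-pool values already adjusted for replication, so with size = 3 the usable space is roughly a third of the raw capacity, which may explain part of the discrepancy (an assumption about this setup, not a confirmed diagnosis):

      ceph df detail                   # per-pool USED and MAX AVAIL, after replication
      ceph osd pool get ceph-vm size   # confirm the replication factor (3 expected here)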
  7. VZDump slow on ceph images, RBD export fast

    @ctcknows did you try the update tom suggested? @tom what is the latest 4.4 version number?
  8. [SOLVED] after add new node -> error: no such node

    Now I restarted one of the other nodes, and after the reboot it found the new node. So I am going to reboot all nodes one by one, and then it's fine.
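    An alternative that avoids full reboots, offered only as a sketch and not taken from the thread: restarting the cluster filesystem and corosync on each existing node is often enough for /etc/pve/.members to pick up the new member.

      systemctl restart corosync      # rejoin the corosync membership
      systemctl restart pve-cluster   # restart pmxcfs so /etc/pve/.members is rebuilt
      pvecm nodes                     # verify the new node is now listed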
  9. [SOLVED] after add new node -> error: no such node

    For me it's new too. I never had problems adding a node to a cluster before.
  10. [SOLVED] after add new node -> error: no such node

    Exactly, let's say pve03 has the IP 192.168.1.3. So I executed pvecm add 192.168.1.3, and only on pve03 can I see all 5 nodes. Output of cat /etc/pve/.members: { "nodename": "pve03", "version": 13, "cluster": { "name": "mycluster", "version": 5, "nodes": 5, "quorate": 1 }, "nodelist": {...
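    To compare how each node sees the cluster, a short diagnostic sketch (run on every node; nothing here is specific to this thread beyond the file it already mentions):

      pvecm status            # quorum state and member count from corosync
      pvecm nodes             # node list as corosync sees it
      cat /etc/pve/.members   # membership as pmxcfs currently reports it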
  11. [SOLVED] after add new node -> error: no such node

    Hi, I am running a 4-node cluster with PVE 4.4; the cluster has worked fine so far. Now I wanted to add a 5th node, but after adding it the new node is only visible from 1 node (the one with the IP used in the command pvecm add EXISTING-IP). The other 3 nodes can't see the 5th one... /etc/pve/.members just shows the old 4...
  12. Ceph SAS with SSD journal -> performance gain from additional SSDs?

    Well, the Ceph cluster consists of 3x HP SE326M1, with 3 journal SSDs per node. They are Samsung SM863a and Kingston V300 (which, measured with "fio", are on par with the Samsungs). I/O delay on the nodes is always around 1-3%. PVEPERF: CPU BOGOMIPS: 57599.28 REGEX/SECOND...
  13. Ceph SAS with SSD journal -> performance gain from additional SSDs?

    Sorry, I put that badly. The additional SSD would not be attached to the same journal SSD; it would go into the pool on its own, with itself as its journal. I don't actually want a separate SSD pool, I just want to beef up the existing one with SSDs ;-)
  14. Ceph SAS with SSD journal -> performance gain from additional SSDs?

    Hi, I have a 3-node Ceph cluster with 8 SAS disks and two journal SSDs per node (so 4 OSDs per SSD). Write performance is around 200-220 MB/s. I still have free slots in the servers; would it help if I now added additional SSDs that work as OSDs? Would it...
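    Whatever the final layout is, the effect is easiest to judge by benchmarking the pool before and after the change; a minimal sketch, where the pool name "ceph-vm" is only an assumption carried over from the other threads:

      rados bench -p ceph-vm 60 write   # 60-second write benchmark; objects are cleaned up afterwards by default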
  15. Usable space on Ceph Storage

    Thanks Fabian for that great explanation.
  16. VZDump slow on ceph images, RBD export fast

    After adding a ZIL and an L2ARC to the NFS server, I can say that it doesn't bring any advantage for backups with vzdump. In the end it took the same amount of time to back up all my VMs and CTs.
  17. Usable space on Ceph Storage

    Well, I set size = 3 and min_size = 2. I have 2 pools called ceph-lxc and ceph-vm; they have been configured exactly as described in the Proxmox video. Output: cluster c4d0e591-a919-4df0-8627-d2fda956f7ff health HEALTH_OK monmap e3: 3 mons at...
  18. VZDump slow on ceph images, RBD export fast

    FreeNAS 11 with 10x 2TB SATA disks connected via 10GbE, basic NFS setup, no tuning, no hacks. Today I added an SSD ZIL and an SSD L2ARC to my FreeNAS ZFS pool; let's see if it gets better. To be honest, the test VM shown above is quite compressible: as you can see, the VM disk is 32G, the...
  19. VZDump slow on ceph images, RBD export fast

    I ran an experiment on my infrastructure to see how backup performance differs, and it is somewhat sad to see how much would be possible... Always the same VM (just with different IDs): Backup of the VM on ceph to NFS: INFO: include disk 'virtio0' 'ceph-vm:vm-103-disk-1' 32G INFO: creating archive...
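    For the "RBD export fast" side of the comparison, a sketch of how the raw export could be timed; pool and image name are taken from the log line above, while the NFS target path is an assumption:

      time rbd export ceph-vm/vm-103-disk-1 /mnt/pve/nfs-backup/vm-103-disk-1.raw   # export to the (hypothetical) NFS mount
      time rbd export ceph-vm/vm-103-disk-1 - > /dev/null                           # pure read speed from the cluster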
  20. [SOLVED] Add node to cluster -> can't access ceph-vm

    I just risked it and executed "pveceph install --version jewel" on the 4th node. Now I can access the ceph-vm pool. Problem solved
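    A couple of quick checks that could confirm the new node really can use the RBD storage; a sketch only, with the storage/pool name taken from the post:

      ceph -s               # cluster health as seen from the new node
      pvesm status          # the ceph-vm storage should now show as active
      rbd -p ceph-vm ls     # list the VM disks in the pool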