Search results

  1. [SOLVED] Proxmox 6 - CEPH - Backup - Separate Pool?!?!

    This can simply be handled via CRUSH rules: # rules rule replicated_rule { id 0 type replicated min_size 1 max_size 10 step take default step chooseleaf firstn 0 type host step emit } rule ssd-only { id 1 type replicated min_size 1...
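
    A minimal sketch of the same kind of separation done with device classes instead of a hand-edited crushmap (pool name and PG count are placeholders, and it assumes the OSDs already report an "ssd" device class):

        # rule that only places replicas on OSDs with device class "ssd"
        ceph osd crush rule create-replicated ssd-only default host ssd
        # create an example pool and bind it to that rule
        ceph osd pool create backup-ssd 128 128 replicated ssd-only
        # or point an existing pool at the rule instead
        ceph osd pool set backup-ssd crush_rule ssd-only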
  2. Optimizing Backup Speed

    Well, I would first make the sources uniform, i.e. put an HBA in everywhere. IOPS on the source side really shouldn't be the problem. Maybe that alone solves it. On the target side you only write sequentially anyway, but if you still suspect a bottleneck there, you can simply...
  3. [SOLVED] CEPH resilience: self-heal flawed?

    Hi, I'm experiencing a strange behaviour in pve@6.1-7/ceph@14.2.6: I have a lab setup with 5 physical nodes, each with two OSDs. This is the Ceph config + crushmap: Config: [global] auth_client_required = cephx auth_cluster_required = cephx auth_service_required = cephx...
  4. Optimizing Backup Speed

    On the 3 nodes with a RAID controller, are you then using a logical volume as a single OSD? That could have something to do with it. Otherwise I would think in terms of IOPS here rather than MB/s. What does the latency in the OSD dashboard say? I also don't believe that a spinning-rust backup storage would...
  5. Optimizing Backup Speed

    A UPS won't help you with a PSU failure either, for example. PLP is an absolute must. DC SSDs really aren't so much more expensive that I would take that risk. What isn't quite clear: is the Ceph currently running on HDDs? I back up with two strategies. One is "classic" with...
  6. Optimizing Backup Speed

    Consumer SSDs are not a good idea for servers. At the very least they should support PLP, otherwise there will be plenty of tears after a power outage. Besides, I haven't quite understood why SSDs in the backup server should make the backup faster when the Ceph the data comes from is already too slow...
  7. Starting the Tor Browser - Security

    Legacy software as a security feature? Interesting approach..
  8. XCP-ng 8.0 and Proxmox VE 6.1

    You are right, the possibility of cross-pool migration in XS/XCP is a unique selling point. However, I barely made use of it, since it turned out not to be super-robust. In lab environments this worked most of the time; in production, when guests had been up for some months, and maybe live...
  9. Backup storage option if solely used CEPH with Proxmox

    Check eve4barc again, I have buffed it with some new features.
  10. Ceph Beginner Questions

    It should perhaps be added here that, for crash resilience in the event of a power loss, the SSDs should have PLP, i.e. a small energy buffer in the form of a capacitor inside the SSD that ensures in-flight data can still be written out. That is...
  11. ZFS 0.8.0 Released

    .. there is no heavy load on them. Other cluster-aware filesystems migrate stuff smoothly in the background and don't basically kill your whole I/O.
  12. ZFS 0.8.0 Released

    You mean this? Already done..

        root@newton:~# zfs get recordsize six
        NAME  PROPERTY    VALUE  SOURCE
        six   recordsize  128K   default
  13. ZFS 0.8.0 Released

    This is what it looked like when migrating from one pool to another, basically rendering the whole machine barely responsive.. six = ZFS mirror without SLOG, unencrypted; fatbox = ZFS mirror with SLOG, encrypted
  14. ZFS 0.8.0 Released

    There are no zvols involved on the other machine, just a ZFS pool serving some exports. On the other hand it's much weaker in terms of CPU (a 6-year-old HP MicroServer with only an AMD Turion CPU). I know it can't be compared 1:1, but I am pretty sure that what I see is not normal. It can't be. Still...
  15. ZFS 0.8.0 Released

    Yeah, sure. I have other setups with some aged Debian + ZFS 0.7.x that outperform the crippled ZFS in Proxmox by far. I'll send stats in a few minutes since there's a live migration still ongoing..
  16. ZFS 0.8.0 Released

    Two 10 TB drives as a mirror with an Optane as ZIL. But it doesn't matter whether a ZIL is there or not. Really simple homelab setup.
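
    A rough sketch of that layout, with hypothetical device names (a separate log device only accelerates synchronous writes, which would be consistent with the ZIL making no visible difference here):

        # mirrored data vdev plus a separate log (SLOG) device holding the ZIL
        zpool create tank mirror /dev/sdb /dev/sdc log /dev/nvme0n1
        # confirm the resulting layout
        zpool status tank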
  17. ZFS 0.8.0 Released

    ZFS is still awfully slow for me. Trying to run a few low-load VMs with a few containers inside from an encrypted ZFS kills my host for minutes. I have even thrown an Intel Optane at it. There's something seriously flawed with ZFS on Proxmox 6.1-5/9bf061 (Ryzen 2700X/64 GB ECC/WD Red + Optane...
  18. Is backup faster if VM is turned off?

    Currently you won't get around using the CLI if you want fast backups of RBD images. You might want to take a look at https://github.com/lephisto/cv4pve-barc/tree/master or the project it was forked from. It's a work in progress, but provides me with some blazing-fast RBD image backups. I...
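
    A minimal sketch of the kind of snapshot-based export the CLI makes possible (pool, image, snapshot, and file names are hypothetical; this illustrates the general mechanism, not necessarily what cv4pve-barc does internally):

        # take a snapshot and export it as a full image
        rbd snap create rbd/vm-100-disk-0@backup1
        rbd export rbd/vm-100-disk-0@backup1 /backup/vm-100-disk-0.full
        # later, export only the changes since the previous snapshot
        rbd snap create rbd/vm-100-disk-0@backup2
        rbd export-diff --from-snap backup1 rbd/vm-100-disk-0@backup2 /backup/vm-100-disk-0.diff1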
  19. XCP-ng 8.0 and Proxmox VE 6.1

    Adding nodes in the sense of additional cluster hosts is more or less the same procedure: bootstrap a new host, configure the network, enter a cluster join key (or root credentials on XCP). Sync is done with corosync on Proxmox and xapi on Xen. The cloud-init support with qemu/kvm/Proxmox seems more...
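
    On the Proxmox side, that join boils down to pvecm (cluster name and IP address are placeholders):

        # on the first node: create the cluster
        pvecm create mycluster
        # on each additional node: join via the first node's address
        pvecm add 192.0.2.10
        # verify membership and quorum
        pvecm status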
