Recent content by UdoB

  1. UdoB

    How much RAM for Proxmox ZFS RAID

    Yes, by far. https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage : "... will be set to 10 % of the installed physical memory," so perhaps a bit over 3 GiB.
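A quick sketch of what that 10 % default works out to on a given host, and what a manual override would look like. The file path and the module-option line follow the PVE wiki page linked above; the computed value is illustrative, not a recommendation:

```shell
# Compute 10 % of installed physical RAM, the PVE default ARC cap.
TOTAL_KIB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ARC_MAX_BYTES=$(( TOTAL_KIB * 1024 / 10 ))
echo "10 % of RAM = ${ARC_MAX_BYTES} bytes"
# The corresponding module option (would go into /etc/modprobe.d/zfs.conf):
echo "options zfs zfs_arc_max=${ARC_MAX_BYTES}"
```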
  2. UdoB

    Migration Doubt

    https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Beta_Documentation : "...with the objective of providing a centralized overview of all your individual nodes and clusters. It also enables basic management like migrations of virtual guests without any cluster network requirements." :-)
  3. UdoB

    Can't access my server on my web browser

    Perhaps you can find some hints here: https://forum.proxmox.com/threads/fabu-no-network-connectivity-after-installation-or-after-switching-the-router-can-not-load-the-web-gui-in-a-browser.160091/
  4. UdoB

    [SOLVED] OpenTelemetry server

    Look at: Datacenter --> Metric Server --> Add (drop-down button) --> OpenTelemetry. Disclaimer: haven't used it yet...
  5. UdoB

    Different storage amount shown in GUI and df -h

    The 152 G is the virtual disk size assigned to the virtual guest. The 58 GB is the space of one filesystem (or more; you didn't show us the output of df). Often the two are coupled and similar in size - but that is not a must. Examine the partition table from inside the guest and verify...
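A hypothetical way to do that comparison from inside the guest: list the raw device/partition sizes next to the filesystem sizes, both in bytes, so the gap between the two numbers becomes visible (actual output differs per system):

```shell
# Block devices and partitions as the guest kernel sees them (sizes in bytes):
lsblk -b -o NAME,SIZE,TYPE,MOUNTPOINT || echo "lsblk not available in this environment"
# Filesystem sizes as df reports them (sizes in bytes):
df -B1
```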
  6. UdoB

    Request for help regarding random crashes

    You should have installed version 9, not 8.x, then! Try to find hints regarding the crashes in the journal. If it happened during the previous boot, you can look at the end of the relevant journal like this: journalctl -b -1 -p warning -e For a description of "-b" etc. consult man journalctl...
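The same journal inspection as a runnable sketch; the `|| echo` fallback is only there so the example degrades gracefully on machines without a persistent journal from a previous boot:

```shell
# Warnings and worse (-p warning) from the previous boot (-b -1),
# with the pager jumping to the end (-e):
journalctl -b -1 -p warning -e || echo "no journal from a previous boot available"
# -b, -p and -e are described in: man journalctl
```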
  7. UdoB

    zfs send over 10Gbe horribly slow

    Yes. Maybe. No. On the same machine as above:
    ~# dd if=/dev/random of=/dev/null bs=1M count=5000
    5000+0 records in
    5000+0 records out
    5242880000 bytes (5.2 GB, 4.9 GiB) copied, 8.26672 s, 634 MB/s
    ~# dd if=/dev/urandom of=/dev/null bs=1M count=5000
    5000+0 records in
    5000+0 records out...
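A scaled-down version of that comparison (100 MiB instead of 5 GB, so it finishes quickly). On recent kernels /dev/random no longer blocks once the pool is initialized, which is why the two devices read at essentially the same speed:

```shell
# Read 100 MiB from each RNG device; dd prints the throughput on stderr.
dd if=/dev/random  of=/dev/null bs=1M count=100
dd if=/dev/urandom of=/dev/null bs=1M count=100
```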
  8. UdoB

    zfs send over 10Gbe horribly slow

    Yes, that's burned into my mind too. But "something" has changed. I can read 5 GB from both devices at the same speed:
    ~# dd if=/dev/urandom of=/dev/zero bs=1M count=5000
    5000+0 records in
    5000+0 records out
    5242880000 bytes (5.2 GB, 4.9 GiB) copied, 8.25407 s, 635 MB/s
    ~# dd...
  9. UdoB

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Yes, that's the idea behind "min_size=2" :-)
  10. UdoB

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Well, my whole point in the first post is that the absolute minimum is not a scenario I would like to use. If you think it will work fine for your use case: go for it! (No sarcasm, I mean it!)
  11. UdoB

    2 node cluster setup advice

    That decision is completely up to you :-) You seem to have some other equipment. I would check whether a small VM or a container could be placed there to implement the quorum device (QDevice). On the other hand, a physical third server, even if small and old, lifts some restrictions and offers some...
  12. UdoB

    CEPH Experimental POC - Non-Prod

    I am not a storage guru, sorry. But yeah, NFS is simple, old+stable and shared. A "simple" setup will create a SPOF though! For PVE I am utilizing ZFS with replication. It gives me the performance of local drives, does not introduce networking dependencies and qualifies as "shared" - as long as...
  13. UdoB

    Changing the Chrony NTP Server

    It doesn't "have to" be changed, but plausibly: yes. man chrony.conf describes the details.
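A hypothetical chrony.conf fragment for pointing at different time sources; the hostnames are only examples, and the `server`/`pool` directives are documented in man chrony.conf:

```
# /etc/chrony/chrony.conf (excerpt) - example time sources
server ptbtime1.ptb.de iburst
pool 2.debian.pool.ntp.org iburst
```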
  14. UdoB

    Ceph memory issue

    Yes, I am/was keen to get some of them too. That's really a bummer :-( I wanted to put two to four OSDs in each of them; the actual constraints should allow for four. Now look at https://docs.ceph.com/en/mimic/start/hardware-recommendations/#ram : 4 OSD = 4 * 3 to 5 GB = 12 ... 20 GB 1 MON = 2...
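The arithmetic behind that estimate, spelled out; the 3 to 5 GB per OSD and roughly 2 GB per MON figures come from the Ceph hardware recommendations linked above:

```shell
# Back-of-the-envelope RAM estimate for one small Ceph node.
OSDS=4; OSD_GB_MIN=3; OSD_GB_MAX=5; MON_GB=2
echo "OSDs:  $(( OSDS * OSD_GB_MIN ))..$(( OSDS * OSD_GB_MAX )) GB"
echo "MON:   ${MON_GB} GB"
echo "Total: $(( OSDS * OSD_GB_MIN + MON_GB ))..$(( OSDS * OSD_GB_MAX + MON_GB )) GB, plus OS and caches"
```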
  15. UdoB

    The Kernel Crash: A Bug in ZFS

    Well..., ZFS ARC does shrink if required. But it does so slowly. (Edit: ...and only if it is allowed to, by "zfs_arc_min" being lower than _max.) Often too slowly to keep up when one VM (or any other process) requests too much memory at once. RAM is the one resource you cannot over-commit drastically...
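For reference, a hypothetical /etc/modprobe.d/zfs.conf that keeps zfs_arc_min below zfs_arc_max, so the ARC is actually allowed to shrink under memory pressure; the byte values are examples only, not recommendations:

```
# Cap the ARC at 3 GiB, but allow it to shrink down to 1 GiB under pressure.
options zfs zfs_arc_max=3221225472
options zfs zfs_arc_min=1073741824
```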