Search results

  1. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    15 Gbit/s is a bit low for 2 containers on the same node and same network. Very low, actually; I reach at least 30 Gbit/s per core/stream on low-end E5 v3s, extremely old Xeons.
  2. AMD pstate driver steps and discussion

    People here are talking about saving power :) I'm more in search of more performance, but more performance also means that the CPU doesn't get too hot. So is there anyone out here already running Epyc Milan/Rome/Genoa servers who can report whether it's worth it over acpi-cpufreq? I thought of using...
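
    A quick way to check which driver is active, and how amd_pstate is selected, via the standard sysfs/kernel-parameter interface (a generic sketch, not taken from this thread):

        # show which cpufreq driver is in use (e.g. acpi-cpufreq or amd-pstate)
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
        # to switch, add a kernel boot parameter, e.g.: amd_pstate=active
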
  3. High iops in host, low iops in VM

    Let's do it differently; I think you don't care about sync writes or parallelism and such. I think you simply want better performance. There is indeed a performance issue with ZVOLs, and every VM on your ZFS storage is using a ZVOL, except if you defined the storage as "Directory" on the ZFS pool...
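
    A minimal sketch of that "Directory" alternative, assuming a pool named rpool and a hypothetical storage ID vmdir:

        # create a plain dataset and register it as a Directory storage,
        # so new VM disks become files (raw/qcow2) instead of ZVOLs
        zfs create rpool/vmdata
        pvesm add dir vmdir --path /rpool/vmdata --content images
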
  4. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    Still debugging, but I found the best example: LXC containers with 2 cores assigned and an iperf3 test with -P 2:
        [ 5]  9.00-10.00 sec  1.61 GBytes  13.8 Gbits/sec    0   1.02 MBytes
        [ 7]  9.00-10.00 sec  4.10 GBytes  35.2 Gbits/sec    0    513 KBytes
        [SUM] 9.00-10.00 sec  5.71...
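
    The test setup behind numbers like these is plain iperf3 (the container IP is a placeholder):

        # in the receiving container
        iperf3 -s
        # in the sending container: 2 parallel streams, 10 seconds
        iperf3 -c 172.17.1.122 -P 2 -t 10
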
  5. Random 6.8.4-2-pve kernel crashes

    Has anyone with kernel crashes tried disabling hyperthreading? I have no kernel crashes here, but I do have issues related to the scheduler; it's not working how it should (I think). Still debugging the issue. But my issue is definitely related to HT, because everything that runs on HT cores has only...
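
    For anyone who wants to try the same experiment, SMT can be toggled at runtime via the standard Linux sysfs interface (not Proxmox-specific):

        # check whether SMT/hyperthreading is currently active
        cat /sys/devices/system/cpu/smt/active
        # disable it on the fly (or boot with the nosmt kernel parameter)
        echo off > /sys/devices/system/cpu/smt/control
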
  6. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    I'm further along with my investigation: if I set the LXC container to use only one CPU core:
        [ ID] Interval       Transfer     Bitrate         Retr  Cwnd
        [ 5]  0.00-1.00 sec  4.28 GBytes  36.7 Gbits/sec    0    508 KBytes
        [ 5]  1.00-2.00 sec  4.27 GBytes  36.8 Gbits/sec    0    537...
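
    The single-core setup for that test is one pct call (standard Proxmox CLI; 122 is a placeholder container ID):

        # limit the container to one core, then rerun iperf3
        pct set 122 --cores 1
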
  7. Performance: zvol vs. raw files on a dataset

    the "fix" was merged into 2.2.4 https://github.com/openzfs/zfs/pull/16098 But all the original PR is doing as far i understand is just using multithreading for that issue. Its not fixing ZVOL's main issue with the caches. However, as far i understand people should see at least an improvement...
  8. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    Single-socket Genoa 9375 / 12x DDR5 memory channels with 64GB DIMMs / ultra-fast RAID10 out of 8x Micron 7450 MAX. 8 streams, Ubuntu 24.04:
        [SUM] 0.00-10.00 sec  123 GBytes  105 Gbits/sec  0  sender
        [SUM] 0.00-10.00 sec  123 GBytes  105 Gbits/sec     receiver...
  9. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    There is no difference, and it has nothing to do with iperf itself anyway. You can run parallel streams with both iperf2 and iperf3; there is no difference. But I'm not talking about parallelism; I'm talking about something in the VM bridge, the kernel, or the kernel's IP stack that is sometimes multithreaded and sometimes...
  10. Relay to Mailservers based on TO: Email-Addr

    Hi, is it possible to use PMG to route incoming mail to specific mail servers based on the destination mail address? In German that's called a "Mailweiche" (roughly, a mail switch), but I'm not sure what it's called in English. The use case is very simple: for example, you have two companies that merged, but...
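
    What PMG offers for this is its Transport table (Configuration -> Mail Proxy -> Transports), which underneath is a Postfix-style transport map. A rough illustration, with hypothetical domains and hosts:

        # route by recipient domain to different internal mail servers
        company-a.example    smtp:[mail-a.internal]:25
        company-b.example    smtp:[mail-b.internal]:25
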
  11. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    Okay, I found the KEY issue. When it runs at 13-14 Gbit/s, I'm hitting a single-thread/core limit on the PVE host itself! But I'm not seeing which process is eating that one core, so it must be the kernel or a kernel module. When it runs at 34 Gbit/s, I am not hitting a single-thread/core limit; instead...
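
    One generic way to spot such a limit (not from the thread): watch per-core utilization while the test runs, e.g. with mpstat from the sysstat package; kernel network work often shows up as %soft/%sys rather than under a process:

        # 1-second per-CPU samples; one core pinned near 100% is the bottleneck
        mpstat -P ALL 1
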
  12. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    I just migrated the container back, so that both are again on the same node, and retested:
        iperf3 -c 172.17.1.122
        Connecting to host 172.17.1.122, port 5201
        [ 5] local 172.17.1.129 port 35156 connected to 172.17.1.122 port 5201
        [ ID] Interval       Transfer     Bitrate         Retr  Cwnd
        [...
  13. [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    I just stumbled over something very weird with LXC containers, but I bet it happens with VMs as well. I have 2 identical nodes in a cluster:
    - both are connected over 2x25G in LACP (NIC: Intel E810)
    - CPU: Genoa 9374F
    - RAM: 12x 64GB (all channels, 1DPC), 768GB
    - Storage: ZFS RAID10 (8x Micron...
  14. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Hey @Dunuin, here's the answer from ChatGPT about recordsize: ---- In OpenZFS 2.2, the recordsize property defines the block size used for reading and writing. Larger recordsize values can be counterproductive for databases due to their random access patterns. However, they can improve disk space...
  15. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    A database is a great example of where it makes sense to use a smaller recordsize, as is any application that reads only a small part of a file. But PBS reads or writes whole files, so it seems to me like a 4M recordsize should have absolutely no downsides. Thanks for all the explanation, Dunuin...
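
    The property change itself would be a one-liner (a sketch; the dataset name is a placeholder, and recordsize values above 1M need a reasonably recent OpenZFS with the large_blocks feature):

        # recordsize only affects newly written data
        zfs set recordsize=4M tank/pbs-datastore
        zfs get recordsize tank/pbs-datastore
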
  16. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Aah, thanks for the recordsize explanation, that makes sense. For writes you're correct, but for reads it would open a 4 MB record to get, for example, 800 KB out of it. But you say that with a 4 MB recordsize a small file would still be saved as 1 MB? I don't get it, because that would mean that recordsize is...
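
    The piece that resolves this confusion: recordsize is an upper bound, not a fixed allocation; a file smaller than the recordsize is stored as a single block roughly the size of the file. A quick self-check (illustrative, hypothetical dataset name):

        zfs create -o recordsize=4M tank/rs-test
        dd if=/dev/urandom of=/tank/rs-test/small.bin bs=1K count=800
        sync
        # du reports ~800K (modulo metadata and compression), not 4M
        du -h /tank/rs-test/small.bin
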
  17. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Forget it, I answered my own question. I will buy those 1.6 TB drives and set special_small_blocks to 256k. I have around 3.4 TB of backups on the pool right now:
        find /datasets/Backup-HDD-SATA/ -type f -size -256k -print0 | du -ch --files0-from=- | grep total$
        --> 104G total
    That means...
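
    Once the special vdev exists, the matching property would be set like this (a sketch; the dataset name is a placeholder):

        # blocks up to 256k go to the special vdev, larger ones stay on the HDDs
        zfs set special_small_blocks=256K tank/backup
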
  18. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    I found this one: https://geizhals.de/samsung-ssd-pm1735-1-6tb-mzplj1t6hbjr-00007-a2213016.html. Two of them in a mirror for a special vdev is within budget. That's the cheapest option, since I don't need any adapters etc. However, at 1.6 TB it's more than I need. My understanding issues start...
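
    Adding the pair as a mirrored special vdev would look roughly like this (illustrative device names):

        # a special vdev holds metadata (and small blocks, per special_small_blocks);
        # losing it loses the pool, hence the mirror
        zpool add tank special mirror /dev/disk/by-id/nvme-PM1735-A /dev/disk/by-id/nvme-PM1735-B
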
  19. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Thanks for the hint about atime, lol. I didn't know it's important for PBS. 1% is still a lot for a 56/70 TB pool (I will increase it later); that means I would need at least 2 Optane drives, something like 2x 905P with 480 GB, or better, 4 of them. Maybe I'll find a cheap option. Not sure.
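
    For background: PBS garbage collection relies on atime updates on chunk files, so atime should stay enabled on the datastore; relatime is generally sufficient given GC's roughly one-day grace window. A sketch with a placeholder dataset name:

        zfs set atime=on tank/pbs-datastore
        zfs set relatime=on tank/pbs-datastore
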
  20. ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Hmmh, thanks for the tips! Yeah, it's a pool of HDDs; a special device is surely a benefit, but it drives the costs through the roof for little gain. -> The server has no U.2/U.3, so PCIe cards + at least 400€ per drive, and I need a minimum of 2 for a mirror. That's around +1000€. -...