Search results

  1. Multipath iSCSI problems with 8.1

    Okay, basically it's exactly what's happening in the Bugzilla report: I have a customer with a Proxmox cluster who uses a shared Open-E iSCSI storage cluster. These storage clusters are somewhat strange and have some design issues, but until the latest patch there was a workaround for this...
  2. Multipath iSCSI problems with 8.1

    So what does this tell me? I can't find a way to fix this. My next try would be to replace the Perl modules with the ones from the previous commit, but this is not an update-proof solution, and I can't say more about other side effects.
  3. Multipath iSCSI problems with 8.1

    Is there any update on this? Various iSCSI clusters still won't work with Proxmox.
  4. Kernel 6.8.4-2 causes random server freezing

    I have the exact same effect on a 5-node EPYC Milan / 7313P (board: Supermicro H12SSW-NTR) cluster. It ran stable for a year; since 8.2 / kernel 6.8 there have been random lockups of single nodes after 1-3 days. The console freezes, no error messages anywhere, just frozen. Rebooted all nodes today back to 6.5 - I will...
  5. Low ZFS read performance Disk->Tape

    //update: This looks a whole lot better. It seems to sustain the LTO drive speed except for a very few dips. Thank you! Regards.
  6. Low ZFS read performance Disk->Tape

    We're getting somewhere:
    2024-05-07T16:56:14+02:00: Starting tape backup job 'zfs:cephfs-mai:lto9:cephfs'
    2024-05-07T16:56:14+02:00: update media online status
    2024-05-07T16:56:16+02:00: media set uuid: c3d64c07-c811-48f0-9845-086cead14e55
    2024-05-07T16:56:16+02:00: found 9 groups (out of 9...
  7. Low ZFS read performance Disk->Tape

    .. build running. It just takes ages. I have v1 of your series + the fix. Will that work?
  8. Low ZFS read performance Disk->Tape

    Okay, so we can already rule out that the drive is unable to sustain the specified 300 MB/s, so the bottleneck must be either ZFS or the code that's pulling the data off it (a rough read benchmark to check the ZFS side is sketched after this list).
  9. Low ZFS read performance Disk->Tape

    I notice one thing which is not totally logical (the arithmetic is spelled out after this list):
    2024-05-07T13:32:18+02:00: wrote 1272 chunks (4299.95 MB at 256.08 MB/s)
    2024-05-07T13:32:37+02:00: wrote 1211 chunks (4305.72 MB at 255.80 MB/s)
    => 4305 MB in 19 s ≈ 226.58 MB/s
    2024-05-07T13:32:55+02:00: wrote 1490 chunks (4295.23 MB at...
  10. Low ZFS read performance Disk->Tape

    What type of drive do you use?
  11. Low ZFS read performance Disk->Tape

    Another observation: this is how it looks in the Datastore graph - only the tape backup job is running, but it constantly shows ~255 MB/s: vs:
    2024-05-06T20:01:25+02:00: backup snapshot "vm/11222/2024-05-05T20:35:05Z"
    2024-05-06T20:02:05+02:00: wrote 7322 chunks (4295.75 MB at 183.45...
  12. Low ZFS read performance Disk->Tape

    First I would like to provide you with some specs: AMD EPYC 7313 16-core processor, 12x Seagate Exos 20 TB, 4x 4 TB NVMe. It's configured as RAID-Z3 with a 4-way mirror special device on NVMe (a sketch of this pool layout follows after this list). It seems that the drop comes in after some time, a few minutes. I made another job...
  13. Low ZFS read performance Disk->Tape

    More updates on the topic: I increased the thread count to 16 (which is the core count of the backup machine): 300 MB/s (was <=200 MB/s, volatile) when running a tape job only; ~160-200 MB/s (was <=60 MB/s, volatile) when the tape job was running alongside a verify job. //What I am noticing now is some...
  14. Low ZFS read performance Disk->Tape

    Okay, I was impatient, built PBS myself, and can confirm significant improvements to the ZFS performance on spinners, especially when writing data to tape. Finally I can (at least if nothing else is running) saturate the write performance of my LTO9 drive. Before the patch it maxed out at...
  15. Low ZFS read performance Disk->Tape

    Looking at the thread again: if the gains are so visible for a single spinner, the effect of having multiple I/O threads should be even more interesting on ZFS pools with several disks...
  16. Low ZFS read performance Disk->Tape

    Can I somehow test this without doing a complete PBS build myself? (E.g. are there development .deb packages?)
  17. Low ZFS read performance Disk->Tape

    This is awesome news, I am really looking forward to this.
  18. PBS log noisy when receiving chunks from proxmox-backup-client

    So, I investigated this a bit: PBS doesn't use rsyslogd by default, just systemd-journald. In theory there is the option to set LogFilterPatterns in the systemd service definition for proxmox-backup-proxy.service (an example drop-in is shown after this list), BUT: Bookworm has systemd 252, and the option was introduced with...
  19. PBS log noisy when receiving chunks from proxmox-backup-client

    Hi, I just noticed that PBS logging is very noisy:
    Mar 31 23:07:29 pbs-ba1-2 proxmox-backup-proxy[2878]: GET /chunk
    Mar 31 23:07:29 pbs-ba1-2 proxmox-backup-proxy[2878]: download chunk "/mnt/datastore/datastore/px11/.chunks/4583/4583ba2fb3e7c1086442e0>
    Mar 31 23:07:29 pbs-ba1-2...
  20. Low ZFS read performance Disk->Tape

    Hey @dcsapak, let me bump this up. Is there any news regarding tape write performance? Thanks in advance.
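
To check the ZFS side mentioned in item 8, a plain sequential read benchmark against the datastore path shows whether the pool alone can exceed the drive's specified 300 MB/s. This is only a rough sketch: the path and size are placeholders, and a sequential read is an upper bound rather than a faithful reproduction of how PBS reads many individual chunk files scattered across the pool.

    # Rough sequential-read check of the pool (path and size are placeholders;
    # pick a size well above the amount of RAM/ARC so caching does not skew it).
    fio --name=seqread --directory=/mnt/datastore/zfs \
        --rw=read --bs=4M --size=200G --numjobs=1 --group_reporting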
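
On the discrepancy flagged in item 9: whatever window the job uses for its own per-batch figure, the wall-clock rate between the two quoted log lines is noticeably lower than the rate printed in them, which is the figure the poster computes:

    4305.72 MB written between 13:32:18 and 13:32:37
    = 4305.72 MB / 19 s
    ≈ 226.6 MB/s   (vs. the 255.80 MB/s printed in the log line)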
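
The pool described in item 12 (twelve spinners in RAID-Z3 plus a four-way mirrored special device on NVMe) would be created along these lines; the pool name and device paths are placeholders, not taken from the thread:

    # Hypothetical layout matching the specs in item 12; names are illustrative.
    zpool create backup raidz3 /dev/sd[a-l] \
        special mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

With such a layout the special vdev holds pool metadata (and, depending on the special_small_blocks property, small data blocks), which is why it sits on the NVMe mirror rather than on the spinners.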
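
For item 18, the LogFilterPatterns= directive is set per unit, so a drop-in along the following lines would silence the chunk messages - but, as the poster notes, it needs a newer systemd than the 252 shipped with Bookworm (the directive arrived in 253, as far as I know). The file path and patterns here are illustrative:

    # /etc/systemd/system/proxmox-backup-proxy.service.d/log-filter.conf
    [Service]
    LogFilterPatterns=~GET /chunk
    LogFilterPatterns=~download chunk

Patterns prefixed with ~ tell journald to discard matching messages; after creating the drop-in, run systemctl daemon-reload and restart proxmox-backup-proxy.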
