Search results

  1.

    Low ZFS read performance Disk->Tape

    What type of drive do you use?
  2.

    Low ZFS read performance Disk->Tape

    Another observation: this is how it looks in the Datastore graph - only the tape backup job is running, but it constantly shows ~255MB/s: vs: 2024-05-06T20:01:25+02:00: backup snapshot "vm/11222/2024-05-05T20:35:05Z" 2024-05-06T20:02:05+02:00: wrote 7322 chunks (4295.75 MB at 183.45...
  3.

    Low ZFS read performance Disk->Tape

    First I would like to provide some specs: AMD EPYC 7313 16-core processor, 12x Seagate Exos 20 TB, 4x 4 TB NVMe. It's configured as RAID-Z3 with a 4-way mirror special device on NVMe. It seems that the drop comes in after some time, a few minutes. I made another job...
  4.

    Low ZFS read performance Disk->Tape

    More updates on the topic: increased the thread count to 16 (which is the core count of the backup machine): 300MB/s (was <=200MB/s, volatile) when running a tape job only. ~160-200MB/s (was <=60MB/s, volatile) when the tape job was running alongside a verify job. //What I am noticing now is some...
  5.

    Low ZFS read performance Disk->Tape

    Okay, I was impatient, built PBS myself, and can confirm significant improvements to ZFS performance on spinners, especially when writing data to tape. Finally I can (at least if nothing else is running) saturate the write performance of my LTO9 drive. Before the patch it maxed at...
  6.

    Low ZFS read performance Disk->Tape

    Looking at the thread again: if the gains are so visible for a single spinner, the effect of having multiple IO threads should be even more interesting on ZFS pools with several disks...
  7.

    Low ZFS read performance Disk->Tape

    Can I somehow test this without doing a complete PBS build myself? (e.g. are there development debs?)
  8.

    Low ZFS read performance Disk->Tape

    This is awesome news, I am really looking forward to this.
  9.

    PBS log noisy when receiving chunks from proxmox-backup-client

    So, I investigated this a bit: PBS doesn't use rsyslogd by default, just systemd-journald. In theory there is the option to set LogFilterPatterns in the systemd service definition for proxmox-backup-proxy.service, BUT: Bookworm has systemd 252, and the option was introduced with...
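    For reference, such a filter would look roughly like the drop-in below (a sketch only: the file path and pattern are illustrative, and to my knowledge LogFilterPatterns= requires systemd 253 or newer, so it would not work on stock Bookworm's systemd 252):

    ```
    # /etc/systemd/system/proxmox-backup-proxy.service.d/override.conf (hypothetical path)
    [Service]
    # A leading "~" discards journal lines matching the pattern.
    LogFilterPatterns=~GET /chunk
    ```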
  10.

    PBS log noisy when receiving chunks from proxmox-backup-client

    Hi, I just noticed that PBS logging is very noisy: Mar 31 23:07:29 pbs-ba1-2 proxmox-backup-proxy[2878]: GET /chunk Mar 31 23:07:29 pbs-ba1-2 proxmox-backup-proxy[2878]: download chunk "/mnt/datastore/datastore/px11/.chunks/4583/4583ba2fb3e7c1086442e0> Mar 31 23:07:29 pbs-ba1-2...
  11.

    Low ZFS read performance Disk->Tape

    Hey @dcsapak, let me bump this up. Is there any news regarding tape write performance? Thanks in advance.
  12.

    Problems after upgrade to PVE 8.1.3

    Same issue - got a customer with Open-E storage (which is the opposite of open). So far the solution to mitigate the inherent split-brain issue was to have firewall rules that keep iSCSI from connecting to the physical interfaces. This doesn't work any longer. Is there any news on a fix? I am...
  13.

    Issue with qemu-ga/fsfreeze and NFSD running in Guest

    Well, 5.15.0-101 is the latest Ubuntu kernel; I'll check if it's fixed.
  14.

    Issue with qemu-ga/fsfreeze and NFSD running in Guest

    I have posted it to the mailing list. I suspect it might be related to: https://bugzilla.kernel.org/show_bug.cgi?id=217123 << which should be fixed already. We'll see.
  15.

    Issue with qemu-ga/fsfreeze and NFSD running in Guest

    Hi, I have a reproducible issue with the qemu-guest-agent and the NFS server running inside a guest VM. Each night, when snapshot backups run and fsfreeze is thus issued to the guest VM, I get nasty kernel debug output in my logs: Mar 26 01:30:00 publikore-data qemu-ga: info...
  16.

    DR to Proxmox

    My answer won't be helpful. I don't have Windows on bare metal, and I think no one should. To dig into it: it would require a large amount of engineering to implement this. I am not aware that anything like this exists or is in development. Good luck with this.
  17.

    DR to Proxmox

    So you want a cold-standby copy from a non-Proxmox hypervisor to a Proxmox hypervisor? Why would you do that? It's complicated, because you would have to automate all the tasks like exchanging drivers and so on. I run some cold-standby setups, but proxmox->proxmox.
  18.

    DR to Proxmox

    You mean some sort of Standby Cluster where you can instantly start a copy of your VM?
  19.

    qemu Guest Agent - wider support in proxmox

    The GA can execute arbitrary commands with root/admin privileges. But I agree with @PSz: for automation/infrastructure-as-code there are more adequate tools like Ansible.
  20.

    DR to Proxmox

    PBS can do that: start-while-restore. It has its limits, as it does on Veeam as well. Other DR options would be to have a cold-standby copy in a separate cluster.
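    As a concrete sketch of that restore mode (hedged: this is my understanding of PVE's qmrestore live-restore flag; the storage names are assumptions, and the VMID/snapshot are taken from the log excerpt earlier in these results):

    ```
    # Start the VM while its disks are still streaming in from PBS.
    # "pbs" is an assumed PBS storage name; "local-zfs" an assumed target storage.
    qmrestore pbs:backup/vm/11222/2024-05-05T20:35:05Z 11222 --storage local-zfs --live-restore 1
    ```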