Search results

  1.

    Multipath iSCSI problems with 8.1

    This urgently needs some attention from the Proxmox devs. Multipath is basically broken. Yes, I know that iSCSI is legacy tech, and I avoid it where I can, but many of my customers, especially those coming from VMware, still bring iSCSI clusters.
  2.

    Community Subscription but cannot access Enterprise repos?

    checktime: 1716975297
    key: pve1c-<redacted>
    level: c
    nextduedate: 2025-05-28
    productname: Proxmox VE Community Subscription 1 CPU/year
    regdate: 2024-05-28 00:00:00
    serverid: <redacted>
    sockets: 1
    status: active
    url: https://www.proxmox.com/en/proxmox-virtual-environment/pricing
    I have the...
  3.

    Community Subscription but cannot access Enterprise repos?

    Hi, I have a problem at a customer's site: the keys are entered and shown as "Status: active", but when doing an apt-get update I get: root@px1n1# apt update Hit:1 http://security.debian.org bookworm-security InRelease Hit:2 https://repos.influxdata.com/debian stable InRelease...
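    For reference, when an active key still fails at `apt update`, it is worth checking that the enterprise sources entry itself is present. This is the standard entry (assuming the default PVE 8 / Debian bookworm layout; adapt the suite name for other releases):

    ```
    # /etc/apt/sources.list.d/pve-enterprise.list
    deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
    ```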
  4.

    Kernel 6.8.4-2 causes random server freezing

    No, 6.5 doesn't have the freeze issue.
  5.

    Kernel 6.8.4-2 causes random server freezing

    Any word from the Proxmox Team on this? I am stuck somehow..
  6.

    Low ZFS read performance Disk->Tape

    @ALFi thanks for those in-depth insights.
  7.

    Kernel 6.8.4-2 causes random server freezing

    Please keep us posted. Over here we had a complete shutdown of the whole cluster after the update, so all VMs have been "born" on 6.8..
  8.

    Low ZFS read performance Disk->Tape

    Yeah, I will just reset the config once this goes upstream. Any estimate?
  9.

    Kernel 6.8.4-2 causes random server freezing

    I am really a little bit surprised that they put this one in the enterprise repos. I thought they were supposed to be tested for extra stability :(
  10.

    Multipath iSCSI problems with 8.1

    Okay, basically it's exactly what's happening in the Bugzilla report: I have a customer with a Proxmox cluster who uses a shared Open-E iSCSI storage cluster. These storage clusters are somewhat strange and have some design issues, but until the latest patch there was a workaround for this...
  11.

    Multipath iSCSI problems with 8.1

    So what does this tell me? I can't find a way to fix this. My next try would be to replace the Perl modules with the ones from the previous commit, but that is not an update-proof solution, and I can't say anything about other side effects.
  12.

    Multipath iSCSI problems with 8.1

    Is there any update on this? Various iSCSI clusters still won't work with Proxmox.
  13.

    Kernel 6.8.4-2 causes random server freezing

    I have the exact same effect on a 5-node Epyc Milan / 7313P (board: Supermicro H12SSW-NTR) cluster. It ran stable for a year; since 8.2 / kernel 6.8, random lockups of single nodes after 1-3 days. The console freezes, no error messages anywhere, just frozen. Rebooted all nodes today back to 6.5 - I will...
  14.

    Low ZFS read performance Disk->Tape

    //update: This looks a whole lot better. It seems to sustain the LTO drive speed except for a very few dips. Thank you! Regards.
  15.

    Low ZFS read performance Disk->Tape

    We're getting somewhere:
    2024-05-07T16:56:14+02:00: Starting tape backup job 'zfs:cephfs-mai:lto9:cephfs'
    2024-05-07T16:56:14+02:00: update media online status
    2024-05-07T16:56:16+02:00: media set uuid: c3d64c07-c811-48f0-9845-086cead14e55
    2024-05-07T16:56:16+02:00: found 9 groups (out of 9...
  16.

    Low ZFS read performance Disk->Tape

    .. build running. It just takes ages. I have v1 of your series + the fix. Will that work?
  17.

    Low ZFS read performance Disk->Tape

    Okay, so we can already rule out the drive: it is able to sustain the specified 300 MB/s. So the bottleneck must be either ZFS or the code that's pulling the data off.
  18.

    Low ZFS read performance Disk->Tape

    I notice one thing which is not totally logical:
    2024-05-07T13:32:18+02:00: wrote 1272 chunks (4299.95 MB at 256.08 MB/s)
    2024-05-07T13:32:37+02:00: wrote 1211 chunks (4305.72 MB at 255.80 MB/s)
    => 4305 MB in 19 s ≈ 226.58 MB/s
    2024-05-07T13:32:55+02:00: wrote 1490 chunks (4295.23 MB at...
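    The discrepancy above (the log claims ~255 MB/s per batch, but the timestamps imply less) can be reproduced by computing the wall-clock rate from two consecutive log lines. A minimal sketch (the helper name `wall_clock_rate` is mine, not from the log):

    ```python
    from datetime import datetime

    def wall_clock_rate(t_start: str, t_end: str, mb_written: float) -> float:
        """Wall-clock throughput in MB/s between two ISO-8601 log timestamps."""
        fmt = "%Y-%m-%dT%H:%M:%S%z"  # parses offsets like +02:00 (Python 3.7+)
        elapsed = (datetime.strptime(t_end, fmt) - datetime.strptime(t_start, fmt)).total_seconds()
        return mb_written / elapsed

    # The two consecutive lines above: 4305.72 MB written in the 19 s between them
    rate = wall_clock_rate("2024-05-07T13:32:18+02:00", "2024-05-07T13:32:37+02:00", 4305.72)
    print(f"{rate:.2f} MB/s")  # prints 226.62 MB/s - well below the reported 255.80 MB/s
    ```

    So the per-batch rate in the log appears to exclude some of the elapsed time (e.g. gaps between batches), which would explain the gap.
    
    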
  19.

    Low ZFS read performance Disk->Tape

    What type of drive do you use?
  20.

    Low ZFS read performance Disk->Tape

    Another observation: this is how it looks in the datastore graph - only the tape backup job is running, but it constantly shows ~255 MB/s. vs:
    2024-05-06T20:01:25+02:00: backup snapshot "vm/11222/2024-05-05T20:35:05Z"
    2024-05-06T20:02:05+02:00: wrote 7322 chunks (4295.75 MB at 183.45...