Recent content by sigmarb

  1.

    Backup VMs in Cluster to local SSD-Storage in one node

    We already have a local PBS. However, we would like to have full backups in a single file that can be restored easily. We like the PBS approach and use it heavily, but for disaster recovery a single file without any dependencies is preferred.
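
    As an aside, a minimal sketch of how such a single-file full backup could be produced with vzdump on one node; the VMID and the SSD mount path are assumptions, not taken from the thread:

        # Assumed example: write a self-contained, zstd-compressed VMA archive
        # of VM 100 to a locally mounted SSD (path is hypothetical)
        vzdump 100 --mode snapshot --compress zstd --dumpdir /mnt/backup-ssd

        # restoring later only needs that one file, e.g.:
        # qmrestore /mnt/backup-ssd/vzdump-qemu-100-<timestamp>.vma.zst 100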
  2.

    Backup VMs in Cluster to local SSD-Storage in one node

    Dear folks, I would like to keep offline, air-gapped backups of all of our VMs in a 3-node cluster. We are aware of PBS and are already using it. However, I would like to use physical local media. My first idea was to mount & rotate some big SSDs on a node, then export the share via NFS to others...
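
    A rough sketch of the NFS idea described above, assuming the rotated SSD is mounted at /mnt/backup-ssd and the other nodes sit in 192.168.1.0/24 (both are assumptions):

        # /etc/exports on the node holding the SSD (path and subnet are assumptions)
        /mnt/backup-ssd 192.168.1.0/24(rw,sync,no_subtree_check)

        # reload the export table, then add the share as backup storage cluster-wide
        exportfs -ra
        pvesm add nfs backup-ssd --server 192.168.1.11 --export /mnt/backup-ssd --content backup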
  3.

    Issue with arc_summary after upgrading to zfsutils-linux/stable 2.2.0-pve3

    Can relate, same issue here.

    proxmox-ve: 8.1.0 (running kernel: 6.2.16-15-pve)
    pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
    proxmox-kernel-helper: 8.1.0
    pve-kernel-6.2: 8.0.5
    proxmox-kernel-6.5: 6.5.11-8
    proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
    ...
  4.

    Apparmor denies access to /var/lib/openntpd/db/ntpd.drift

    For the record, as it is present again in Debian 12 / Proxmox 8, I just created another bug: https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=ntpsec

    2023-12-22T10:46:28.551247+01:00 srv42 kernel: [1569581.071493] audit: type=1400 audit(1703238388.546:160): apparmor="DENIED" operation="mknod"...
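
    Until the package is fixed, one possible local workaround (the profile name and rule are assumptions derived from the denied path) would be an AppArmor override in /etc/apparmor.d/local/:

        # /etc/apparmor.d/local/usr.sbin.ntpd  (profile name is an assumption)
        # allow ntpd to create and update its drift file
        owner /var/lib/openntpd/db/ntpd.drift rw,

        # reload the profile afterwards
        apparmor_parser -r /etc/apparmor.d/usr.sbin.ntpd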
  5.

    DISABLE_TLS_1_2=1 - no TLS available anymore for the web UI after the update to PM8

    Hi forum, after updating our PM nodes to PM8 (Debian 12, OpenSSL 3.x), the system complains about an SSL certificate. pveproxy does read the certificate, but the browser only shows: Error code: PR_END_OF_FILE_ERROR. In /etc/default/pveproxy we have DISABLE_TLS_1_2=1...
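
    For reference, a sketch of the setting in question; whether pveproxy can still offer TLS 1.3 once TLS 1.2 is disabled depends on the OpenSSL 3.x setup, so treat this as an assumption to verify rather than a confirmed fix:

        # /etc/default/pveproxy
        # If TLS 1.2 is disabled, TLS 1.3 must remain usable, otherwise no handshake
        # can complete and the browser only sees PR_END_OF_FILE_ERROR.
        DISABLE_TLS_1_2=1
        #DISABLE_TLS_1_3=1   # must not be set at the same time

        # apply the change
        systemctl restart pveproxy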
  6.

    Intel XL710 40G + QSFP+ AOC/DAC-cables - very high latency

    Yes. I contacted Intel and they also have no clue, so I just accepted the fact. See: https://community.intel.com/t5/Ethernet-Products/Intel-XL710-40G-QSFP-AOC-DAC-cables-very-high-latency/m-p/1423968
  7.

    ifupdown2 /e/n/interfaces does not accept bond-mode broadcast but 3

    Hi forum, we use the latest Proxmox 7 and set the bond mode to broadcast in the web interface. /etc/network/interfaces contains bond-mode broadcast afterwards. After rebooting the server, cat /proc/net/bonding/bond0 shows active-backup. Checking /e/n/interfaces again, bond-mode broadcast is still...
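
    For comparison, a minimal hand-written bond stanza with broadcast mode as one might test it in /etc/network/interfaces (interface names are assumptions):

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode broadcast
            bond-miimon 100

        # verify which mode the kernel actually applied
        grep "Bonding Mode" /proc/net/bonding/bond0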
  8.

    [SOLVED] vmbr0.100 VLAN interfaces not coming up automatically

    @Moayad - that was the solution. thank you! :)
  9.

    [SOLVED] vmbr0.100 VLAN interfaces not coming up automatically

    Hi, no. The ifupdown2 package is not installed, only ifupdown.
  10.

    [SOLVED] vmbr0.100 VLAN interfaces not coming up automatically

    Hi folks, for years I have been using the following stanza in e/n/i:

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 100

    auto vmbr0.100
    iface vmbr0.100 inet static
        address 192.168.1.11...
  11.

    KNET/corosync Link down and instantly SSDs fail - interface fatal error

    Thank you for your time. Unfortunately not. We have been seeing these issues with this specific server for more than two years. I just installed the latest 5.19 kernel and will report back. Update 05/11/22: still the same errors with 5.19.17-1-pve.
  12.

    KNET/corosync Link down and instantly SSDs fail - interface fatal error

    Hi folks, on a specific Thomas Krenn server (part of a 3-node cluster with Ceph; Manufacturer: Supermicro, Product Name: H11DSi-NT, Version: 2.00, with dual AMD EPYC 7301 16-core processors) we see strange errors: corosync detects a link failure and right after that an SSD reports...
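
    Purely as illustration, commands one might use to correlate the link drop with the SSD error on the affected node (the grep pattern is an assumption about the involved subsystems):

        # corosync/knet link status as seen from this node
        corosync-cfgtool -s

        # corosync link-down messages and kernel disk errors around the event
        journalctl --since "-1 hour" -u corosync
        journalctl -k --since "-1 hour" | grep -Ei "ata|nvme|scsi"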
  13.

    [SOLVED] Backup restore error

    Just for the next lost souls: it was not a hardware issue at all for us. The solution that worked for us is documented here: https://www.cubewerk.de/2022/10/25/vma-restore-failed-short-vma-extent-compressed-data-violation/
  14.

    Intel XL710 40G + QSFP+ AOC/DAC-cables - very high latency

    Hi folks, we have a 3-node cluster with one independent Ceph ring that is directly connected between the 3 nodes (N1->N2, N2->N3) with QSFP+ AOC cables¹. Here we see very bad latency on ping tests. The directly connected setup works flawlessly on other clusters. The only difference is we...
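
    For reproducibility, a sketch of how the latency and throughput on one of the direct links might be measured (the peer address is hypothetical):

        # latency over the direct ceph link, summary only
        ping -c 100 -q 10.10.10.2

        # throughput/retransmits over the same link ("iperf3 -s" running on the peer)
        iperf3 -c 10.10.10.2 -t 30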