Search results

  1. Slow memory leak in 6.8.12-13-pve

    Note: running the PVE9 kernel 6.14.11-2-pve and Ceph 19.2.3-pve2 with Intel E810-C 4x25G NICs in 802.3ad bonding on 4 HPE DL385 servers (1 TB RAM each) at the default MTU of 1500, we don't see a memory usage issue, or it isn't growing fast enough to be visible in the "noise" (lots of VMs).
  2. Network crash after 3 or 4 hours

    No more network issues with our Intel E810-XXVDA4 on HPE DL385 since the rmmod of irdma/ib_core/ib_uverbs
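
    A minimal sketch of that workaround, assuming irdma can be unloaded while nothing is using RDMA; the blacklist file below is an assumed way to make it persistent across reboots:

        # unload the RDMA stack pulled in by the ice driver (irdma first, then the
        # InfiniBand core modules it depends on)
        rmmod irdma ib_uverbs ib_core
        # assumed persistence step: keep irdma from loading on the next boot
        echo "blacklist irdma" > /etc/modprobe.d/blacklist-irdma.conf
        update-initramfs -u
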
  3. Network crash after 3 or 4 hours

    The next version of Alpine Linux will blacklist irdma by default: https://github.com/alpinelinux/aports/commit/313096cd1f8f6ec031437eddcad49595c7a7eb7f
  4. Network crash after 3 or 4 hours

    Hi, the same is happening once every few days on one of our 6 HPE DL385 servers with E810-XXVDA4 NICs and 4x25G DACs plugged in, running Proxmox VE 9.0 with kernel 6.14.11-2-pve. It's the box with the most traffic. I added comments on the Intel ice GitHub issue...
  5. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    [32029.460800] Kernel panic - not syncing: Host injected async #PF in kernel mode
    [32029.472728] CPU: 2 UID: 0 PID: 136167 Comm: pvestatd Tainted: P O 6.14.8-2-pve #1
    [32029.476915] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
    [32029.481049] Hardware name: QEMU Standard PC...
  6. Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    For the record, on a Debian 13 host (kernel 6.12.38) I ran a KVM guest (QEMU 10.0.2) with PVE9: qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -m 8192 -drive file=vm1.qcow2,format=qcow2,if=virtio -k fr -netdev user,id=net0,hostfwd=tcp:127.0.0.1:8066-:8006,hostfwd=tcp:127.0.0.1:8022-:22 -device...
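
    The quoted command is cut off at -device; a sketch of a complete invocation, where the virtio-net-pci device is an assumed completion paired with the user-mode netdev (forwarding the nested PVE web UI on 8006 and SSH on 22 to local ports):

        qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -m 8192 \
          -drive file=vm1.qcow2,format=qcow2,if=virtio -k fr \
          -netdev user,id=net0,hostfwd=tcp:127.0.0.1:8066-:8006,hostfwd=tcp:127.0.0.1:8022-:22 \
          -device virtio-net-pci,netdev=net0    # assumed: the original -device argument is truncated
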
  7. Proxmox Backup Server 3.4 released!

    Hi, I ran a GC on one of our repos under strace, which gave lines like: [pid 1181406] utimensat(-1, "/mnt/datastore/datastore1/.chunks/bb9f/bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8", [UTIME_NOW, UTIME_OMIT], AT_SYMLINK_NOFOLLOW) = 0 After processing the output by...
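
    A sketch of how such utimensat lines can be captured during a GC run, assuming the GC executes inside the proxmox-backup-proxy daemon:

        # attach to the running daemon and log only utimensat calls (the atime updates on chunks)
        strace -f -tt -e trace=utimensat -p "$(pidof proxmox-backup-proxy)" 2> gc-utimensat.log
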
  8. Proxmox Backup Server 3.4 released!

    On this first GC I have a proxy measure, as I was running iostat -xmt 10 during the GC. Writes varied from 10 MB/s to 70 MB/s with a ballpark average of 20 MB/s. That means 27 minutes (1,620 s) * 20 MB/s equals around 32 GB of writes, and so roughly 1,300 bytes written per chunk on average through our...
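
    A quick sanity check of those numbers (the ~25 million chunk count is implied by the figures above, not stated directly):

        echo $(( 27 * 60 * 20 ))          # 32400 MB written over the 27-minute GC, i.e. ~32 GB
        echo $(( 32400000000 / 1300 ))    # ~24.9 million chunks implied by ~1300 bytes per chunk
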
  9. Proxmox Backup Server 3.4 released!

    Hi, I upgraded our PBS from 3.3 to 3.4 and added the following tuning to our datastores' GC: tuning chunk-order=none,gc-atime-cutoff=30,gc-atime-safety-check=1,gc-cache-capacity=8388608 I'm happy to report that our GC time went from 1h30-2h to 27 min on our SSD PBS and from 45-50 min to 15 min on...
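
    One way to apply the same tuning from the shell, assuming the proxmox-backup-manager datastore update syntax and a datastore named datastore1:

        proxmox-backup-manager datastore update datastore1 \
          --tuning 'chunk-order=none,gc-atime-cutoff=30,gc-atime-safety-check=1,gc-cache-capacity=8388608'
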
  10. Removing vlan id 1 from a trunk

    I already opened a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=3290
  11. Removing vlan id 1 from a trunk

    Hi, I will first open a bug report. Both @spirit and I have signed the Proxmox source code contributor agreement in case we need to propose a patch.
  12. Proxmox Datacenter Manager - First Alpha Release

    There's a Bugzilla request open for remote-migrate with shared storage: https://bugzilla.proxmox.com/show_bug.cgi?id=4928 Both Ceph and NFS could be of interest as shared storage. I assume it would be technically simpler, since there's nothing to be done disk-wise during migration for shared...
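
    For reference, a sketch of what a remote migration looks like today with qm remote-migrate; the host, API token and storage names are placeholders, and the disk copy implied by --target-storage is exactly what could be skipped for shared Ceph/NFS storage:

        qm remote-migrate 100 100 \
          'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<fingerprint>' \
          --target-bridge vmbr0 --target-storage ceph-pool --online
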
  13. proxmox-auto-install-assistant device-match multiple disks by their SERIAL_ID?

    Done: https://bugzilla.proxmox.com/show_bug.cgi?id=5493
  14. proxmox-auto-install-assistant device-match multiple disks by their SERIAL_ID?

    Yes, that's another way when you're able to match the set of disks you want the installer to choose. Maybe copying the disk_list syntax for filter.xxx would be a nice addition to the tool: filter.ID_SERIAL = ["*194K","*191V"] I don't think I'm alone in wanting a RAID1 for the system install and...
  15. proxmox-auto-install-assistant device-match multiple disks by their SERIAL_ID?

    Hi, I have a machine with 8 disks of the same model (a pretty common situation for a new server), and I'm trying to match 2 of the disks for a ZFS RAID1 install by using their ID_SERIAL as follows: [disk-setup] filesystem = "zfs" zfs.raid = "raid1" filter.ID_SERIAL="*194K" filter.ID_SERIAL="*191V"...
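
    For comparison, a sketch of the disk_list form that works today next to the wished-for filter list syntax from this thread (device names are examples, not from the post):

        [disk-setup]
        filesystem = "zfs"
        zfs.raid = "raid1"
        # works today: name the disks explicitly (example device names)
        disk_list = ["sda", "sdb"]
        # wished-for syntax from the thread, not currently supported:
        # filter.ID_SERIAL = ["*194K", "*191V"]
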
  16. How to build proxmox rust parts?

    Hi, in order to work on https://bugzilla.proxmox.com/show_bug.cgi?id=2370#c17, starting from a PVE 8.2 system I was able to rebuild the proxmox-firewall program as follows: echo "deb http://download.proxmox.com/debian/devel bookworm main" >> /etc/apt/sources.list apt-get update apt-get dist-upgrade...
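
    The quoted commands are cut off; a sketch of one way the build could continue, assuming the usual Proxmox git clone + make deb workflow (the package list is a guess at the build prerequisites):

        # assumed build prerequisites on top of the devel repository enabled above
        apt-get install build-essential git devscripts cargo
        git clone git://git.proxmox.com/git/proxmox-firewall.git
        cd proxmox-firewall
        make deb    # builds the Debian packages; cargo build works for quicker iteration
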
  17. nftables: when output policy drop is set on a VM there's no way to accept ARP output

    Patch: https://bugzilla.proxmox.com/show_bug.cgi?id=2370#c17
  18. nftables: when output policy drop is set on a VM there's no way to accept ARP output

    I think this works in iptables mode because Proxmox VE doesn't install arptables, so all ARP traffic is ACCEPTed by default, whereas in nftables everything is unified, so you have to add explicit rules and do it in both directions. Note: I tried to add the missing ether type arp accept rule but handles are...
  19. nftables: when output policy drop is set on a VM there's no way to accept ARP output

    Hi, it looks like when the output policy drop is set on a VM, ARP traffic in the OUT direction is filtered and there's no way I could find to allow it in the Proxmox firewall settings. In the IN direction there's an ether type arp accept rule in chain guest-408-in, but there's no equivalent in...
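
    A sketch of adding the missing rule by hand, assuming the nftables firewall's default table name (bridge proxmox-firewall-guests) and an OUT chain named like the IN chain; proxmox-firewall regenerates its ruleset, so a manual rule may not survive, and the patch linked above is the proper fix:

        # inspect the OUT chain for the guest to confirm where the drop policy applies
        nft list chain bridge proxmox-firewall-guests guest-408-out
        # insert an ARP accept at the top, mirroring the rule already present in guest-408-in
        nft insert rule bridge proxmox-firewall-guests guest-408-out ether type arp accept
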
  20. Low ZFS read performance Disk->Tape

    @Lephisto @dcsapak For the record, on our Dell TL4000 LTO7 tape library with three drives (internally an IBM 3573-TL + 3 ULT3580-HH7 drives), when running three tape backup jobs in parallel with three different media pools on three different drives, we've seen up to 900 MByte/s of tape write...