Recent content by tomtom13

  1. proxmox-network-interface-pinning working and then not.

    @shanreich thanks. I've tried that and so far it has worked; I just wanted to confirm there is no magic kung-fu with root-fs / boot params / deeper systemd configuration.
  2. proxmox-network-interface-pinning working and then not.

    Strange - we used CAN interfaces that use the whole networking stack, and created an interface per destination mailbox & destination combo ... it was an interface bonanza (not saying it was a sane solution, but it fitted the intended use case) ... I remember the names going up to 40 characters and how we...
  3. proxmox-network-interface-pinning working and then not.

    @shanreich Thanks for the heads up! Do you know where this limitation came from? I've never used long names for Ethernet in my life, but I have used very long names for interfaces to drives that I wrote in the past, and those used the networking stack (albeit not the Ethernet layer). Would it be more...
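For context (not stated in the thread itself): the limit being asked about comes from the kernel's `IFNAMSIZ` constant, 16 bytes including the terminating NUL, so interface names max out at 15 visible characters. A quick shell check of candidate names against that limit:

```shell
# IFNAMSIZ is 16 bytes including the trailing NUL, so 15 visible characters max.
max_len=15

check_name() {
    name="$1"
    if [ "${#name}" -le "$max_len" ]; then
        echo "$name: ok (${#name} chars)"
    else
        echo "$name: too long (${#name} chars, max $max_len)"
    fi
}

check_name "eth_mobo_10g"                 # 12 chars -> ok
check_name "a_very_long_interface_name"   # 26 chars -> too long
```

The 40-character CAN interface names mentioned above would fail this check on current kernels, which is why the question about where the limit comes from is a fair one.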
  4. proxmox-network-interface-pinning working and then not.

    So, I wanted to try pinning interface names and gave the Proxmox-blessed script a shot. I ran the script for the motherboard 10G interface: pve-network-interface-pinning generate --interface enp12s0 --target-name eth_mobo_10g I checked whether the interfaces file updated...
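A sketch of how one might verify that such a pin took effect - this needs a Proxmox VE host, and the checks below are generic verification steps, not something the tool documents; the interface and target name are the ones from the post:

```shell
# Illustration only - requires a Proxmox VE host, so commands are commented out.
# Generate the pin (command from the post):
#   pve-network-interface-pinning generate --interface enp12s0 --target-name eth_mobo_10g
# After a reboot, check whether the new name is live:
#   ip -br link show eth_mobo_10g
# And whether /etc/network/interfaces now references it:
#   grep -n 'eth_mobo_10g' /etc/network/interfaces
```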
  5. LXC Containers Backing Up Incredibly Slow

    @yarii dude, I gave up on advising ZFS-specific tools to the Proxmox chaps. "You can't make somebody change an opinion they have convinced themselves into." ATM the only answer given is "give it more hardcore NVMe hyper-high-IOPS storage and CPU to boot". As a side note, I honestly still can't...
  6. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    For me the issue kinda went away at some point with a kernel upgrade, and I just get plenty of those messages in syslog during bootup. I also tried upgrading the X710 firmware a month ago, with no improvement.
  7. Migration LXC CT with bind mount point

    Sorry @Darkk, I've misspoken: for me it works fine with JUST "shared=1", and I don't have "skip replication" enabled.
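For readers landing here: "shared=1" is a flag on the container's mount point entry in its config file. A hypothetical example (the VMID and paths are made up for illustration):

```
# /etc/pve/lxc/101.conf  (hypothetical VMID and paths)
# shared=1 marks the bind mount as available on all nodes,
# so migration does not try to copy its contents:
mp0: /mnt/pve/shared-data,mp=/srv/data,shared=1
```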
  8. Migration LXC CT with bind mount point

    I upgraded the whole test cluster on Friday, and since then there have been plenty of crashes and relocations, and I can confirm that I don't need "shared=1" for containers to shift between nodes. Sorry @Darkk, I've misspoken: for me it works fine with JUST "shared=1" and I don't have "skip...
  9. [SOLVED] Performance comparison between ZFS and LVM

    Yeah, I've witnessed that to my own dismay. Anyway, peeps here noticed that (I think) the OP or another user fessed up to pondering storage speed for use on a firewall, which is like debating between a Ferrari and a Lambo where the main concern is reverse speed into a parking spot.
  10. Migration LXC CT with bind mount point

    Sorry, I'm still stuck on 8.3.5 - planning on updating soon, once most bugs get shaken off (you know: "let someone else take the beach").
  11. [SOLVED] Performance comparison between ZFS and LVM

    Are you arguing that pfSense / OPNsense / m0n0wall needs hyper-fast storage? Because the only thing I see in that link is people complaining that their cheapo firewall (4200) killed its storage (possibly a tiny NVMe with low TBW endurance) after two years of continuous log writes. If they used hdd...
  12. [SOLVED] Performance comparison between ZFS and LVM

    Yeah, but c'mon - pfSense writes so little it makes no difference. One might as well make the FS read-only and pfSense wouldn't care all that much, other than whinging about not being able to log anything or save its config.
  13. [SOLVED] Performance comparison between ZFS and LVM

    Nope - ZFS was created for "commodity hardware" - i.e., no RAID, no fancy controllers needed, etc. You should feed it the disks directly and let it do its magic. It will digest hdd, ssd, nvme, direct-attached memory-mapped NOR RAM (if you have mmap mappings) - it's pretty intelligent by being their...
  14. [SOLVED] Performance comparison between ZFS and LVM

    Depends on what you intend to do with those pfSense instances. In my experience, pfSense doesn't really require a lot of storage performance. I would create a RAID1 for the HDDs and a RAID1 for the SSDs - both through ZFS; it makes life a lot easier if something goes wrong! Lemme tell you a story: I had a...
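The two-mirror layout described above could look like this - pool names and device IDs are made up, and feeding ZFS whole disks by their stable /dev/disk/by-id paths matches the "give it the disks directly" advice from the other post:

```shell
# Illustration only - requires ZFS and real disks, so commands are commented out.
# One mirrored pool per media type (hypothetical names and serials):
#   zpool create -o ashift=12 tank-hdd mirror \
#       /dev/disk/by-id/ata-HDD_SERIAL_A /dev/disk/by-id/ata-HDD_SERIAL_B
#   zpool create -o ashift=12 tank-ssd mirror \
#       /dev/disk/by-id/ata-SSD_SERIAL_A /dev/disk/by-id/ata-SSD_SERIAL_B
# Verify both mirrors are healthy:
#   zpool status
```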
  15. Ping with unprivileged user in LXC container / Linux capabilities

    Yeah, the good old ping. I wonder why the Proxmox guys don't address it. But hey, maybe ping is an old-fashioned thing and we should all be using Web 5.0 Java applets? ;)
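For anyone hitting the same thing: the two standard mechanisms are widening the kernel's unprivileged-ICMP group range, or granting the ping binary CAP_NET_RAW. Both commands below are the stock Linux knobs, run as root on the host or inside the CT:

```shell
# Option 1: allow unprivileged ICMP echo sockets for all groups
# (put it in a sysctl.d file to persist across reboots):
#   sysctl -w net.ipv4.ping_group_range="0 2147483647"
# Option 2: grant the binary the raw-socket capability instead:
#   setcap cap_net_raw+ep "$(command -v ping)"

# Inspect the current group range (the default "1 0" means: nobody allowed):
cat /proc/sys/net/ipv4/ping_group_range
```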