Recent content by tomtom13

  1. Migration LXC CT with bind mount point

    Sorry @Darkk I've misspoken, for me it works fine with JUST "shared=1" and I don't have "skip replication" enabled.
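
    A minimal sketch of what that can look like in the container config (the mount-point index and paths here are made up for illustration; shared=1 is the relevant flag):

        # hypothetical bind mount entry in /etc/pve/lxc/<vmid>.conf
        # shared=1 marks the source path as available on every node,
        # so the mount point does not block migration
        mp0: /mnt/pve/shared-data,mp=/data,shared=1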
  2. Migration LXC CT with bind mount point

    I've upgraded the whole test cluster on Friday, and since then there have been plenty of crashes and relocations, and I can confirm that I don't need "shared=1" for containers to shift between nodes. Sorry @Darkk I've misspoken, for me it works fine with JUST "shared=1" and I don't have "skip...
  3. [SOLVED] Performance comparison between ZFS and LVM

    Yeah, I've witnessed that to my own dismay. Anyway, here peeps noticed that (I think) the OP or another user fessed up to pondering storage speed for use on a firewall, which is like debating between a Ferrari and a Lambo where the main concern is reverse speed into a parking spot.
  4. Migration LXC CT with bind mount point

    Sorry, I'm still stuck on 8.3.5 - planning on updating soon when most bugs get shaken off (you know: "let someone else take the beach")
  5. [SOLVED] Performance comparison between ZFS and LVM

    Are you arguing that pfSense / OPNsense / m0n0wall needs hyper-fast storage? Because the only thing I see in that link is people complaining that their cheapo firewall (4200) killed its storage (possibly a tiny NVMe with low TBW endurance) after two years of continuous log writes. If they used hdd...
  6. [SOLVED] Performance comparison between ZFS and LVM

    Yeah, but c'mon - pfSense writes so little it makes no difference. One might as well make the FS read-only and pfSense would not care that much, other than whinging about not being able to log anything or save config.
  7. [SOLVED] Performance comparison between ZFS and LVM

    Nope - ZFS was created for "commodity hardware" - i.e. no RAID, no fancy controllers needed, etc. You should feed it the disks directly and let it do its magic. It will digest HDD, SSD, NVMe, direct-attached memory-mapped NOR RAM (if you have mmap mappings) - it's pretty intelligent by being their...
  8. [SOLVED] Performance comparison between ZFS and LVM

    Depends on what you intend to do with those pfSense instances. From my experience pfSense doesn't really require a lot of storage performance. I would create a raid1 for the HDDs and a raid1 for the SSDs - both through ZFS, it makes life a lot easier if something goes wrong! Lemme tell you a story: I had a...
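
    For reference, a minimal sketch of those two ZFS mirrors from the command line (device names are placeholders; stable /dev/disk/by-id paths are preferable in practice):

        # hypothetical device names - one mirror (raid1) per media type
        zpool create -o ashift=12 hddpool mirror /dev/sda /dev/sdb
        zpool create -o ashift=12 ssdpool mirror /dev/nvme0n1 /dev/nvme1n1
        zpool status    # both pools should show their mirrors as ONLINE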
  9. Ping with unprivileged user in LXC container / Linux capabilities

    Yeah, the good old ping. I wonder why the Proxmox guys don't address it. But hey, maybe ping is an old-fashioned thing, we should all be using web 5.0 javapplets? ;)
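
    For anyone landing here, a sketch of the two usual workarounds inside the container (the ping path is an assumption and varies per distro):

        # option 1: allow unprivileged ICMP echo sockets for all groups
        sysctl -w net.ipv4.ping_group_range="0 2147483647"
        # option 2: give the ping binary the raw-socket capability
        setcap cap_net_raw+ep /bin/ping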
  10. [SOLVED] Performance comparison between ZFS and LVM

    The NAND datasheet is the only place; SSD manufacturers will lie their heads off / the people creating their marketing material have no idea what they are talking about / language barrier. First part - yes, the second part - no: you mix qemu into the equation, which complicates it by 2 orders of...
  11. Proxmox 4.4 virtio_scsi regression.

    Well, you still might want to have the VM with directly passed-in disks - virtualisation is not about being able to shift it across machines, but about containment and resource sharing - if you have a pretty powerful CPU but only use it from time to time, you might want to SHARE it, but...
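
    A minimal sketch of passing a whole physical disk into a VM (the VM id and the by-id name are placeholders, not from the original post):

        # attach a hypothetical disk to VM 100 as its second SCSI device
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL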
  12. Migration LXC CT with bind mount point

    I wonder why, for the past 6 years, Proxmox hasn't implemented that in the GUI
  13. [SOLVED] Performance comparison between ZFS and LVM

    OK, so maybe I will chip in some detail from the "embedded side of computing". The situation with SSDs is VERY VERY VERY complicated. NAND flash uses very large sectors; I've seen NAND chips with sector sizes ranging between 8K and 128K. What does that mean?! Well, the process of writing to...
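
    A back-of-the-envelope illustration of why the big sectors hurt (sizes assumed for the example, not taken from a specific chip): changing 4K of data inside a 128K erase block can force the whole block to be read, erased and rewritten.

        # assumed sizes: 128 KiB erase block, 4 KiB logical write
        echo "worst-case write amplification: $(( 128 / 4 ))x"    # -> 32x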
  14. ceph - Warning

    What if somebody would like to NOT get the warning when the "noout" flag is set? If anybody's interested, you can get similar functionality without the warning through:
        ceph config set mon mon_osd_down_out_interval 0
        ceph config set mon mon_warn_on_osd_down_out_interval_zero false
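
    The resulting values can be checked afterwards (assuming a Ceph release recent enough to have "ceph config get"):

        ceph config get mon mon_osd_down_out_interval
        ceph config get mon mon_warn_on_osd_down_out_interval_zero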
  15. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Hey ;) Today over half of my test cluster got affected by this bug (4 nodes out of 6) within 5 minutes of each other - the PDU with watchdog was doing overtime, however one node, even though it had this error happening, was somehow still responding to ping. Not great for a ping-based PDU watchdog. As they...