Search results

  1. [SOLVED] Performance comparison between ZFS and LVM

    Nope - ZFS was created for "Commodity hardware" - i.e. no RAID, no fancy controllers needed, etc. You should feed it the disks directly and let it do its magic. It will digest HDD, SSD, NVMe, direct-attached memory-mapped NOR RAM (if you have mmap mappings) - it's pretty intelligent by being their...
  2. [SOLVED] Performance comparison between ZFS and LVM

    Depends on what you intend to do with those pfSense instances. From my experience, pfSense doesn't really require a lot of storage performance. I would create a RAID1 for the HDDs and a RAID1 for the SSDs - both through ZFS; it makes life a lot easier if something goes wrong! Let me tell you a story: I had a...
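
    A minimal sketch of what "RAID1 ... through ZFS" can look like on the command line; the pool names and device paths are placeholders, not from the post:

    ```sh
    # Hypothetical device paths -- substitute your own /dev/disk/by-id/ entries.
    # One mirrored (RAID1-like) pool on the two HDDs:
    zpool create tank-hdd mirror /dev/disk/by-id/ata-HDD_A /dev/disk/by-id/ata-HDD_B

    # And a second mirrored pool on the two SSDs:
    zpool create tank-ssd mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

    # Check both pools:
    zpool status tank-hdd tank-ssd
    ```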
  3. Ping with unprivileged user in LXC container / Linux capabilities

    Yeah, the good old ping. I wonder why the Proxmox guys don't address it. But hey, maybe ping is an old-fashioned thing and we should all be using web 5.0 Java applets? ;)
  4. [SOLVED] Performance comparison between ZFS and LVM

    The NAND datasheet is the only place; SSD manufacturers will lie their heads off / the people creating their marketing material have no idea what they are talking about / language barrier. First part - yes, the second part - no; you mix QEMU into the equation, which complicates it by 2 orders of...
  5. Proxmox 4.4 virtio_scsi regression.

    Well, you still might want to have the VM with directly passed-in disks - virtualisation is not about being able to shift it across machines, but about containment and resource sharing - if you have a pretty powerful CPU but only use it from time to time, you might want to SHARE it, but...
  6. Migration LXC CT with bind mount point

    I wonder why, for the past 6 years, Proxmox hasn't implemented that in the GUI.
  7. [SOLVED] Performance comparison between ZFS and LVM

    OK, so maybe I will chip in some detail from the "embedded side of computing". The situation with SSDs is VERY VERY VERY complicated. NAND flash uses very large sectors; I've seen NAND chips with sector sizes ranging between 8k and 128K. What does it mean?! Well, the process of writing to...
  8. ceph - Warning

    What if somebody would like to NOT get the warning with the "noout" flag being set? If anybody's interested, you can get similar functionality without the warning through: ceph config set mon mon_osd_down_out_interval 0 followed by ceph config set mon mon_warn_on_osd_down_out_interval_zero false
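
    For readability, the two commands quoted in that snippet, one per line (they assume a shell on a node with Ceph admin privileges):

    ```sh
    # Never mark OSDs "out" automatically after they go down ...
    ceph config set mon mon_osd_down_out_interval 0
    # ... and suppress the health warning about that interval being zero.
    ceph config set mon mon_warn_on_osd_down_out_interval_zero false
    ```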
  9. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Hey ;) Today, over half of my test cluster got affected by this bug (4 nodes out of 6) within 5 minutes of each other - the PDU with the watchdog was doing overtime; however, one node, even though it had this error happening, was somehow still responding to ping. Not great for a ping-based PDU watchdog. As they...
  10. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    I think (or rather hope) that those things come in through the "backports" channel. It may also depend on how severe the fix is. We just need to know which mainline kernel it got into; then we can trace it into the current version that the PVE kernel is based on.
  11. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Do you know whether it is already in the kernel, or when it would show up?
  12. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    Yeah, but you know ... this issue has been affecting people since 2020 - I doubt that 6.5 was even in the numbering pipeline back then. Not saying that 6.5 is somehow magical, but that would require a double tap - a fix going into 6.5 and then a mess-up after 6.5. You know I like coincidences, but double...
  13. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    And today, right on cue, a second server in the test cluster decided to noop out of the network with exactly the same messages in syslog, exactly the same hardware config and exactly the same interfaces file.
  14. Error I40E_AQ_RC_ENOSPC, forcing overflow promiscuous on PF

    I've had similar problems and I've changed the interfaces file to what you have here, with the only difference being: bridge-vids 2-4094. And today one of the servers decided to spew out plenty of: Dec 24 22:38:52 asdf-3 kernel: i40e 0000:02:00.1: Error I40E_AQ_RC_ENOSPC, forcing overflow...
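
    For context, a hedged sketch of the kind of VLAN-aware bridge stanza in /etc/network/interfaces the post refers to; only the bridge-vids 2-4094 line comes from the snippet, while the bridge name, physical port and addresses are placeholders:

    ```text
    auto vmbr0
    iface vmbr0 inet static
            # placeholder address/gateway and uplink port
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            # the setting the post changed to match the other poster's config
            bridge-vids 2-4094
    ```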
  15. CEPH - one pool crashing can bring down other pools and derail whole cluster.

    That is interesting! I grant you that maybe my test setup did not replicate the original problem and is simply broken, but this is something I can replicate. So for me, if I pull two disks out of pool_2, I get "ceph error" (not ceph warn) and all the VMs go down - which is a bit bizarre to me. I...
  16. CEPH - one pool crashing can bring down other pools and derail whole cluster.

    @bbgeek17 - I've illustrated the problem with the most minimalistic test-cluster setup possible, for anyone interested in testing; the production cluster is slightly different. @itNGO - as kindly as possible: I've replicated on a test cluster the problem that we noticed in production and presented it...
  17. CEPH - one pool crashing can bring down other pools and derail whole cluster.

    Hi, since we've been migrating more and more stuff to Ceph under Proxmox, we've found a quirky behaviour and I've built a test case for it on my test cluster. Create a small cluster with a minimum of 4 nodes. Create one Ceph pool using one disk per node with 4-way mirroring, with minimum...
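
    A rough sketch of how the replicated pool in that test case could be created from the CLI; the pool name, PG count and the min_size value (the snippet is cut off before it) are assumptions, not from the post:

    ```sh
    # Hypothetical pool name and PG count.
    ceph osd pool create pool_1 32 32 replicated

    # "4 times mirroring" = replication size 4 (one copy per node in a 4-node cluster).
    ceph osd pool set pool_1 size 4

    # The intended min_size is truncated in the snippet; 2 here is only an assumption.
    ceph osd pool set pool_1 min_size 2

    # Tag the pool for RBD so Proxmox can consume it as VM storage.
    ceph osd pool application enable pool_1 rbd
    ```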
  18. HEALTH_WARN 1 daemons have recently crashed

    An odd idea - maybe this could be an extension to the UI?
  19. Proxmox 4.4 virtio_scsi regression.

    Hence it's more dangerous - you do your preliminary tests for production, everything is cool, you dump your data in - a few months later you hit a strange behaviour and realise all your data and backups are corrupt because it was slowly creeping in ;)