Recent content by alexskysilk

  1. Curious about kernel updates

    "kernel" being "out" doesnt mean anything directly to a distribution with a support policy. Instead, the maintainers of your linux distribution (in this case, ubuntu) will backport important changes that are deployed to downstream kernels. Translation- dont worry about the new kernels. Unless...
  2. Shared Remote ZFS Storage

    I would retort that calling a solution shady because it's imperfect is an irrelevant argument to begin with. The difference between "first tier" and not ISN'T that they are perfect; it's that they have the engineering capacity and support staff to identify, document, and resolve in a timely...
  3. Shared Remote ZFS Storage

    There are vendors, and there are vendors. NetApp is first tier; the fact that my wife's nephew put together a NAS using gum and baling wire doesn't make him of the same caliber. As for trusting your data... on-prem storage exists precisely so you don't have to. If you meant trust in their...
  4. Proxmox roadmap - future direction of snapshot‑as‑volume‑chain

    Not an endorsement, since I have not used it myself, but Starwind StarLVM (the substrate of Starwind VSAN) looks like it would do what you ask. https://www.starwindsoftware.com/starwind-virtual-san
  5. confused...CEPH delivering same performance on 100G as it did on 1G test

    That explains your observed performance. LACP is your first choice. If that's not possible, use active-backup and MAKE SURE the switches have plenty of bandwidth interconnecting them. balance-xor sounds good on paper but not in practice. Set your expectations: bonding isn't the same as "adding."...
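    For reference, a minimal sketch of an LACP bond in /etc/network/interfaces as Proxmox's ifupdown2 would take it; the interface names are placeholders, and the switch ports must be configured for 802.3ad as well:

```
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1   # placeholder NIC names
    bond-mode 802.3ad                  # LACP; switch side must match
    bond-xmit-hash-policy layer3+4     # hashes per flow across members
    bond-miimon 100                    # link monitoring interval (ms)
```

    Note that layer3+4 hashing spreads flows across members, but any single stream still tops out at one member link's speed, which is the "bonding isn't adding" point above.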
  6. Is there a compatibility matrix for hardware?

    Sure: https://www.proxmox.com/en/services/support-services/support I don't see any issues. Boot storage could pose some specific challenges depending on the HBA model, but they're solvable. See https://pve.proxmox.com/wiki/Storage. It shouldn't pose any issue; you'd just use LVM-thick without snapshot support.
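    As an illustration of the LVM-thick route (a sketch only; the storage ID and volume group name are placeholders), the /etc/pve/storage.cfg entry would look roughly like:

```
lvm: fc-lun0
        vgname vg_fc_lun0
        content images
        shared 1
```

    The "lvm" storage type is thick-provisioned by definition, which is why snapshots aren't on the table here.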
  7. Proxmox with ceph performance

    A network interface MTU mismatch would decimate perceived performance, but there are other possibilities. While I'm not volunteering to check for you, you might want to run "ceph config dump" and "ceph config show-with-defaults osd.x" and go over the output with a fine-toothed comb. Last thing: in a pve...
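    A quick sanity pass for the MTU theory, plus the config dumps mentioned above (the peer address and OSD id are placeholders):

```
# MTU per interface on each node; these must match end to end.
ip -o link show | awk '{print $2, $4, $5}'
# Validate jumbo frames with a do-not-fragment ping:
# 8972 = 9000 minus 28 bytes of IP+ICMP headers.
ping -M do -s 8972 -c 3 10.10.10.2
# Dump cluster-wide overrides, then one OSD's effective config with defaults:
ceph config dump
ceph config show-with-defaults osd.0 | less
```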
  8. confused...CEPH delivering same performance on 100G as it did on 1G test

    This doesn't result in any meaningful benefit vs. just having the same address for public and private traffic. OP, if you have multiple switches, I would create laggs for public and private traffic, and make sure to cross physical NICs (presuming nic4 and nic5 are actually nic1s0p0 and nic1s0p1...
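    A sketch of what "crossing physical NICs" could look like in /etc/network/interfaces, assuming two dual-port cards (interface names are placeholders). Each bond takes one port from each card, so a dead card degrades both networks instead of killing one outright:

```
auto bond0                            # ceph public
iface bond0 inet manual
    bond-slaves enp1s0f0 enp2s0f0     # port 0 of card 1 + port 0 of card 2
    bond-mode 802.3ad

auto bond1                            # ceph private
iface bond1 inet manual
    bond-slaves enp1s0f1 enp2s0f1     # port 1 of card 1 + port 1 of card 2
    bond-mode 802.3ad
```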
  9. IBM Plugin

    Looking at the whitepaper, the author did much of the heavy lifting already; there's enough foundation for you to write the plugin. Having said that, making a supportable solution is still not a trivial task.
  10. Small Datacenter Setup - What is the maximum number of pve servers supported in a cluster

    Read the link @bbgeek17 referenced. When you're done, you should come to the realization that the problem you will run into isn't just how many NODES are in the cluster, but also how many virtual resources. PVE's solution for cluster metadata coordination is clever but does not scale very well; when...
  11. Proxmox (as a company) - what the HELL are you doing? Kernel update to 7 broke networking IN A VM

    Running software at home and running it in production are two completely separate skillsets, mindsets, and realms of responsibility. As others have pointed out, you opted to install an optional kernel and got bit. It happens. If you did that in a production environment without lab testing and approval and I was...
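    For what it's worth, if your version ships it, Proxmox has a tool to pin a known-good kernel before you experiment (the version string below is a placeholder):

```
# List installed kernels, then pin the one you know boots cleanly.
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.8.12-4-pve
# Remove the pin once you trust the new kernel:
proxmox-boot-tool kernel unpin
```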
  12. VM not booting up with LVM FC LUN storage

    I forgot to ask for df output, so we'd know what /mnt/pve/PVE-ISO points to. It looks like you're only using one of your LUNs for virtual disk use; I only see two volume groups, so it's a wonder where it is assigned. Do NOT assign it to PVE-DS01, as it is a shared LUN; it will work with one node, but that's bad practice.
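    For completeness, these are the commands that would show the missing pieces (what backs the ISO mount point, and which LUN belongs to which volume group):

```
df -hT /mnt/pve/PVE-ISO           # what this mount point actually points to
pvs -o pv_name,vg_name,pv_size    # maps each LUN/PV to its volume group
vgs                               # volume group summary
```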
  13. Ceph with 2 Cluster Networks

    Just be sure you do NOT mix other traffic along with these, most especially corosync. If you have more than 4 interfaces, keep the other forms of traffic on different interfaces. If you don't, consider only using two interfaces for ceph and two interfaces for other traffic.
  14. Ceph with 2 Cluster Networks

    Rather than quoting, I'll try to address all possible alternatives. Ceph carries traffic on two separate networks: public (host) and private (OSD-to-OSD). Think of this as the host bus and disk bus on a RAID subsystem. While you can have both commingled, they're technically two separate...
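    For reference, the split lives in ceph.conf under [global]; a minimal sketch with placeholder subnets:

```
[global]
    public_network  = 10.10.10.0/24   # host/client traffic
    cluster_network = 10.10.20.0/24   # OSD-to-OSD replication/recovery
```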
  15. new drive setup, considering RAIDZ1

    Up to you how you manage your models. In my experience, new models are released every week, and I don't bother keeping the old ones. Feel free to keep your hoarded models on the zpool; it's not like it's getting any use ;)