Search results

  1. confused...CEPH delivering same performance on 100G as it did on 1G test

    That explains your observed performance. LACP is your first choice. If that's not possible, use active-backup and MAKE SURE the switches have plenty of bandwidth interconnecting them. balance-xor sounds good on paper but not in practice. Set your expectations: bonding isn't the same as "adding."...
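
    A minimal sketch of that in /etc/network/interfaces on PVE (bond0 and eno1/eno2 are placeholder names, not from the thread):

        auto bond0
        iface bond0 inet manual
                bond-slaves eno1 eno2
                bond-miimon 100
                bond-mode 802.3ad                 # LACP; the switch ports must be in a matching LAG
                bond-xmit-hash-policy layer3+4    # hash on IP+port so parallel flows can spread

    If LACP isn't available, change bond-mode to active-backup and drop the hash-policy line.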
  2. Is there a compatibility matrix for hardware?

    Sure: https://www.proxmox.com/en/services/support-services/support I don't see any issues. Boot storage could pose some specific challenges depending on the HBA model, but they're solvable. See https://pve.proxmox.com/wiki/Storage. It shouldn't pose any issue; you'd just use LVM-thick without snapshot support.
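
    For reference, a shared thick-LVM definition in /etc/pve/storage.cfg looks roughly like this (san-lvm and vg_san are placeholder names):

        lvm: san-lvm
                vgname vg_san
                content images
                shared 1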
  3. Proxmox with ceph performance

    A network interface MTU mismatch would decimate perceived performance, but there are other possibilities. While I'm not volunteering to check for you, you might want to run ceph config dump and ceph config show osd.x --show-with-defaults and go over the output with a fine-toothed comb. Last thing: in a PVE...
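
    One quick way to rule out an MTU mismatch, assuming jumbo frames (MTU 9000) and a peer node at 10.0.0.2 (both assumptions):

        # confirm the configured MTU on the ceph-facing interface (name is an example)
        ip link show eno1
        # send a maximum-size packet with fragmentation forbidden;
        # 8972 = 9000 minus 28 bytes of IP+ICMP headers
        ping -M do -s 8972 10.0.0.2

    If that ping fails or times out, something in the path is running a smaller MTU.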
  4. confused...CEPH delivering same performance on 100G as it did on 1G test

    This doesn't result in any meaningful benefit vs. just having the same address for public and private traffic. OP, if you have multiple switches, I would create LAGGs for public and private traffic, and make sure to cross physical NICs (presuming nic4 and nic5 are actually nic1s0p0 and nic1s0p1...
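
    A sketch of that cross-NIC layout, assuming two dual-port cards whose ports enumerate as nic1s0p0/nic1s0p1 and nic2s0p0/nic2s0p1 (names are illustrative):

        # each bond takes one port from each physical card, so losing
        # a card degrades both networks instead of killing one outright
        auto bond-public
        iface bond-public inet manual
                bond-slaves nic1s0p0 nic2s0p0
                bond-mode 802.3ad
                bond-miimon 100

        auto bond-private
        iface bond-private inet manual
                bond-slaves nic1s0p1 nic2s0p1
                bond-mode 802.3ad
                bond-miimon 100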
  5. IBM Plugin

    Looking at the whitepaper, the author did much of the heavy lifting already. There's enough foundation for you to write the plugin. Having said that, making a supportable solution is still not a trivial task.
  6. Small Datacenter Setup - What is the maximum number of pve servers supported in a cluster

    Read the link @bbgeek17 referenced. When you're done, you should realize that the problem you will run into isn't just how many NODES are in the cluster, but also how many virtual resources they carry. PVE's solution for cluster metadata coordination is clever but does not scale very well; when...
  7. Proxmox (as a company) - what the HELL are you doing? Kernel update to 7 broke networking IN A VM

    Running software at home and for production are two completely separate skillsets, mindsets, and realms of responsibility. As others have pointed out, you opted to install an optional kernel, and got bit. It happens. If you did that in a production environment without lab testing and approval and I was...
  8. VM not booting up with LVM FC LUN storage

    I forgot to ask for df, so we still don't know what /mnt/pve/pVE-ISO points to. It looks like you're only using one of your LUNs for virtual disk use; I only see two volume groups, so it's unclear where that LUN is assigned. Do NOT assign it to PVE-DS01, as it is a shared LUN; it will work with one node, but that's bad practice.
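
    To map LUNs to volume groups, something like this helps (standard LVM tooling; only the mountpoint is from the thread):

        # which physical volume belongs to which volume group
        pvs -o pv_name,vg_name,vg_size
        # block device tree, including multipath members
        lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
        # what the ISO mount actually points to
        df -h /mnt/pve/pVE-ISO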
  9. Ceph with 2 Cluster Networks

    Just be sure you do NOT mix other traffic along with these, most especially corosync. If you have more than 4 interfaces, keep the other forms of traffic on different interfaces. If you don't, consider using only two interfaces for Ceph and two interfaces for other traffic.
  10. Ceph with 2 Cluster Networks

    Rather than quoting, I'll try to address all possible alternatives. Ceph carries traffic on two separate networks: public (host) and private (OSD-to-OSD). Think of this as the host bus and disk bus on a RAID subsystem. While you can have both commingled, they're technically two separate...
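
    In ceph.conf terms, the split looks like this (the subnets are placeholders):

        [global]
                # client/monitor traffic (the "host bus")
                public_network = 10.10.10.0/24
                # OSD replication and heartbeat traffic (the "disk bus")
                cluster_network = 10.10.20.0/24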
  11. new drive setup, considering RAIDZ1

    Up to you how you manage your models. In my experience, new models are released every week and I don't bother keeping the old ones. Feel free to keep your hoarded models on the zpool. It's not like it's getting any use ;)
  12. new drive setup, considering RAIDZ1

    Keep those on the NVMe. They don't need any resilience, and you're likely to be replacing them quite often anyway.
  13. new drive setup, considering RAIDZ1

    So yes :) L2ARC almost never yields useful results; you're better off just using the drive separately. More to the point: what is your use case? In a homelab, it's common that your bulk storage can be slow without any real impact. Put your VMs/CTs on the NVMe and keep your raidz1 for your "iso"...
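
    A sketch of that split, with device and pool names as assumptions:

        # spinners in raidz1 for bulk data that can afford to be slow
        zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
        # register the fast NVMe pool with PVE for guest disks
        pvesm add zfspool nvme-vm --pool nvmepool --content images,rootdir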
  14. IBM Plugin

    IBM Storage arrays can be one of many different solutions/topologies. It would be useful to mention what model/topology of product you're referring to. Having said that, it most likely is a block device, so yes, LVM would still be necessary if you're not direct-mapping LUNs to VMs or using a CAF.
  15. VM not booting up with LVM FC LUN storage

    Post the output of: /etc/pve/storage.cfg, /etc/fstab, lsblk, vgs, lvs
  16. [PVE 9/ZFS-Based VM/LXC Storage] Why Shouldn't I Disable Swap Inside Linux VMs?

    zswap is great, but it is no substitute for RAM. If you are running a production environment (read: for money), just provision sufficient RAM. Yes, it's expensive, but it's worth much more in consistent performance.
  17. Is there a mod repository ? How to make mods ?

    Where, pray tell, have I gotten in the way of such people? All I pointed out was that you could already do what you asked.
  18. Is there a mod repository ? How to make mods ?

    That is as succinct a description of the problem as you or I have posited so far. You want a system in place provided by the devs, but when looking at what it would take, you correctly (if hyperbolically) estimate the work as requiring a "sun burning out" timeframe. The devs, rationally...
  19. First Proxmox homelab build - sanity check on hardware, service layout, and storage

    This is true, but neglects that we're living in 2026.

        git clone my_container
        cd my_container
        docker compose up -d

    In context, the guest disks contain no relevant data and are pointless to back up. When you have multiple guests accessing the same data, it doesn't really make sense to use a...
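
    A minimal sketch of why those guest disks are disposable, assuming a compose file that keeps its state on a shared mount (the paths and image are illustrative):

        # compose.yaml: the repo defines the service; state lives on shared storage
        services:
          app:
            image: nginx:stable
            volumes:
              - /mnt/shared/app-data:/usr/share/nginx/html:ro

    Rebuilding the guest is then just the clone-and-up above; nothing on the guest disk is unique.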