Search results

  1. new drive setup, considering RAIDZ1

    Keep those on the NVMe. They don't need any resilience, and you're likely to be replacing them quite often anyway.
  2. new drive setup, considering RAIDZ1

    So yes :) L2ARC almost never yields useful results; you're better off just using the drive separately. More to the point: what is your use case? In a homelab, it's common that your bulk storage can be slow without any real impact. Put your VMs/CTs on the NVMe and keep your RAIDZ1 for your "iso"...
  3. IBM Plugin

    IBM Storage arrays can be one of many different solutions/topologies, so it would be useful to mention which model/topology you're referring to. Having said that, it is most likely a block device, so yes, LVM would still be necessary if you're not direct-mapping LUNs to VMs or using a CAF.
  4. VM not booting up with LVM FC LUN storage

    Post the output of /etc/pve/storage.cfg, /etc/fstab, lsblk, vgs, and lvs.
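The files and commands requested above can be gathered into one report; a minimal sketch, assuming a stock PVE host (on other machines, missing files or commands are noted rather than fatal):

```shell
# Gather the storage diagnostics requested above into one report string.
# Assumes a stock Proxmox VE host; anything missing is flagged, not fatal.
report=""
for item in "cat /etc/pve/storage.cfg" "cat /etc/fstab" "lsblk" "vgs" "lvs"; do
    out=$($item 2>/dev/null) || out="(unavailable on this machine)"
    report="$report== $item ==
$out
"
done
printf '%s\n' "$report"
```

Pasting one combined block like this into a thread keeps the outputs in a known order and makes it obvious when a command simply wasn't available.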
  5. [PVE 9/ZFS-Based VM/LXC Storage] Why Shouldn't I Disable Swap Inside Linux VMs?

    zswap is great, but it is no substitute for RAM. If you are running a production environment (read: for money), just provision sufficient RAM. Yes it's expensive, but it's worth much more in consistent performance.
  6. Is there a mod repository ? How to make mods ?

    Where, pray tell, have I gotten in the way of such people? All I pointed out was that you could already do what you asked.
  7. Is there a mod repository ? How to make mods ?

    That is as succinct a description of the problem as you or I have posited so far. You want a system in place provided by the devs, but when looking at what it would take, you correctly (if hyperbolically) estimate the work as requiring a "sun burning out" timeframe. The devs, rationally...
  8. First Proxmox homelab build - sanity check on hardware, service layout, and storage

    This is true, but neglects that we're living in 2026: git clone my_container; cd my_container; docker compose up -d. In context, the guest disks contain no relevant data and are pointless to back up. When you have multiple guests accessing the same data, it doesn't really make sense to use a...
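The clone-and-compose one-liner above is the whole recovery story for a dockerized guest: the definition lives in a repo, so only that repo (and any externally mounted data) needs backing up, not the guest disk. A minimal sketch of such a repo's compose file — the service name, image, and paths are illustrative, not from the thread:

```yaml
# docker-compose.yml — hypothetical example service.
# The guest disk is disposable: this file plus the bind-mounted data
# directory are the only state worth keeping a copy of.
services:
  app:
    image: nginx:alpine              # placeholder image, not from the post
    restart: unless-stopped
    volumes:
      - /tank/appdata/app:/usr/share/nginx/html:ro   # shared data lives outside the guest
```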
  9. First Proxmox homelab build - sanity check on hardware, service layout, and storage

    In OP's case, many if not all of the containers are accessing the same data, which would be passed as a mountpoint. It wouldn't really make sense to do that, since there's nothing to be gained by backing up the container itself.
  10. First Proxmox homelab build - sanity check on hardware, service layout, and storage

    When you say "mainstream advice," it's good to note the background, reasoning, and authority of whoever you are quoting. In an enterprise environment with security policies, or on a dirty (shared) hypervisor, the advice certainly applies- but this is a private homelab; I wholeheartedly recommend you...
  11. First Proxmox homelab build - sanity check on hardware, service layout, and storage

    Before answering the question (it's really just the one), I encourage you to really consider the implications of an all-Docker workload. Docker images are immutable and live in a repo; the only things you MIGHT want to keep a copy of are the docker-compose and container config, and those should...
  12. Is there a mod repository ? How to make mods ?

    I don't really understand. What stops you from doing this? Not a single one of your asks is necessary for the operation of a PVE node or cluster, but a lot of what you ask (and some you didn't) is included by my general-purpose post-install script. Where is it and why can't you use it? Because it...
  13. First Proxmox homelab build - sanity check on hardware, service layout, and storage

    You are asking a lot from a relatively modest host. It's doable, but you'll need to temper your "performance" expectations. On that subject- that will multiply the "performance expectations issue" substantially. In addition, doing a passthrough of an iGPU to a VM is not "saving pain," it is...
  14. Ram report issue to multiple Hosts

    Virtualization makes the most sense when you use it to break work down into small chunks. Imagine trying to fit Tetris pieces that are 4 squares each, and then you have a piece that is 48 squares in size. If your use case is really that big, the only reason NOT to run it on metal is if it's a...
  15. Hook Script to disable pbs storage

    I'd consider this a bug. Please report it here: https://bugzilla.proxmox.com/
  16. Hook Script to disable pbs storage

    PBS storage isn't like NFS; if it's not present, it will not hang your host. I would investigate your logs a bit more carefully to see what the actual culprit is- perhaps you have other entries in /etc/fstab and/or /etc/pve/storage.cfg?
  17. Proxmox and Veeam Backup and Replication worker issue

    What @bbgeek17 suggested should have been the first thing Veeam support instructed you to do. That kind of experience with their support (and other things) has led me to abandon using Veeam with PVE.
  18. Joining new nodes simultaneously de-stabilizes and reverts the cluster back to standalone hosts

    vmbr0 and vmbr1 are taken from your existing configuration... if they don't work, you have bigger problems. iface bond0.661 inet static ...etc. You keep using that term. I'm unfamiliar with such a topology- is it Ethernet?
  19. Joining new nodes simultaneously de-stabilizes and reverts the cluster back to standalone hosts

    The physical layer bringing your interfaces to your hosts isn't the relevant factor; it's how you manage your logical networking configuration. I assume the Ethernet interfaces you describe are 2x 25Gbit and 2x 1G, so you have 4 interfaces (the 8Gb are likely FC), so you have two bonds, one per...
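The bond-plus-tagged-VLAN layout described above would look something like the following in /etc/network/interfaces (ifupdown2 syntax). This is a sketch only — the NIC names, VLAN tag 661 (taken from the quoted iface line), and addresses are assumptions, not OP's actual config:

```text
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1    # the 2x 25Gbit pair (names assumed)
    bond-mode 802.3ad

auto bond0.661
iface bond0.661 inet static          # tagged VLAN, e.g. for cluster traffic
    address 10.66.1.11/24

auto vmbr0
iface vmbr0 inet static
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    address 192.168.1.11/24
    gateway 192.168.1.1
```

The point of the post stands either way: whether the links arrive over copper, fiber, or a blade backplane, it's this logical layer (bonds, VLANs, bridges) that determines whether cluster joins behave.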