Search results

  1. A

    Virtual IP

    Context matters. What are you trying to keep alive?
  2. A

    Virtual IP

    Since in a PVE environment any node can serve as the API head, the simplest approach is to use a reverse proxy, typically run on your router, but you can use a VM for this too. nginx or haproxy work fine for this; you can follow any of the million tutorials available (a minimal nginx sketch follows below).
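
    A minimal sketch of the nginx variant, assuming three hypothetical nodes at 192.168.1.11-13, a placeholder certificate for the proxy itself, and the default PVE API/GUI port 8006; the WebSocket headers are what keep the noVNC console working through the proxy:

        # /etc/nginx/conf.d/pve.conf (hypothetical addresses and cert paths)
        upstream pve_api {
            server 192.168.1.11:8006;
            server 192.168.1.12:8006;
            server 192.168.1.13:8006;
        }
        server {
            listen 443 ssl;
            server_name pve.example.lan;
            ssl_certificate     /etc/nginx/ssl/pve.crt;   # placeholder
            ssl_certificate_key /etc/nginx/ssl/pve.key;   # placeholder
            location / {
                proxy_pass https://pve_api;
                proxy_ssl_verify off;              # nodes ship self-signed certs by default
                proxy_http_version 1.1;            # needed for the WebSocket upgrade (noVNC)
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }
        }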
  3. A

    "Best Proxmox Version for Adding a Node to 8.2.2 Cluster?"

    There is no rational reason to run a cluster with nodes on different versions. There is RARELY a reason to run a cluster that isn't fully updated, and even then, fully updated with an older kernel pinned.
  4. A

    Proxmox ZFS Migration

    This is fairly straightforward.
    Step 1: ensure autoexpand is enabled: zpool set autoexpand=on rpool
    Step 2: use the instructions here to replace the first disk: https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration (a rough command sketch follows below)
    Step 3: either wait for the rebuild to complete, OR follow the instructions...
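
    A rough sketch of that replacement, assuming a hypothetical failed mirror member /dev/sdb replaced by /dev/sdc and the default PVE partition layout (partition 2 = ESP, partition 3 = ZFS); the wiki page linked above is the authoritative procedure:

        zpool set autoexpand=on rpool
        sgdisk /dev/sda -R /dev/sdc           # copy the partition table from a healthy member
        sgdisk -G /dev/sdc                    # randomize the GUIDs on the new disk
        zpool replace -f rpool /dev/sdb3 /dev/sdc3
        proxmox-boot-tool format /dev/sdc2    # re-create the ESP on the new disk
        proxmox-boot-tool init /dev/sdc2      # reinstall the bootloader
        zpool status rpool                    # watch the resilver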
  5. A

    Proxmox with HPE 3Par storage & LVM, lun issue

    Yes. Stop doing that. Neither the storage nor your hypervisor is designed for use with constantly changing disk presentation, so there is no tooling for it. If you REALLY want to do this, you'll need to develop your own tooling.
  6. A

    Proxmox newbie, long-time Linux user.

    Ah, understood. So as @bbgeek17 mentioned, this functionality is not present in PVE (at least not yet). HOWEVER, since PVE sits atop Debian, it's possible to home-cook solutions for this using the API; there are a number of projects and discussions on the forum about users' solutions- example...
  7. A

    Struggling to get qcow2 option

    WD_480 is of type lvm. As mentioned above, qcow2 is only an option for file-based (directory) storage types. See https://pve.proxmox.com/wiki/Storage for more information (example config below).
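
    For illustration, a hypothetical /etc/pve/storage.cfg excerpt contrasting the two types (the WD_480 volume group name is assumed): the lvm store can only hold raw images, while the file-based dir store is what exposes the qcow2 option:

        lvm: WD_480
                vgname WD_480
                content images

        dir: local
                path /var/lib/vz
                content images,iso,vztmpl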
  8. A

    Fiberchannel Shared Storage Support

    I see this constantly. "speed" is NOT the only metric by which a storage solution is measured. In truth, you RARELY use whatever "speed" a subsystem is capable of, but you depend on features regularly. If you can get by without integrated filesystem-level checksums, PVE-integrated snapshots...
  9. A

    Help! - Multipathing keeps breaking my root ZFS pool

    Why in god's name are you allowing multipathd to trap your SATA disks? Don't do that. There's literally zero benefit and an added failure domain (a blacklist sketch follows below if multipathd has to stay installed).
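
    If multipathd has to stay installed, a hypothetical /etc/multipath.conf blacklist keeps it off the local SATA disks; the wwid and device-node pattern below are placeholders:

        blacklist {
            # get the real wwid with: /lib/udev/scsi_id -g -u -d /dev/sdX
            wwid "35000c500a1b2c3d4"
            # or, more bluntly, by device node:
            devnode "^sd[ab]$"
        }

    followed by systemctl restart multipathd (and multipath -F to flush any maps it already built).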
  10. A

    Struggling to get qcow2 option

    the "format" your vm's will be written as is a consequence of the store type. Post the content of /etc/pve/storage.cfg to get specific feedback.
  11. A

    vmbr0 and vmbr1 not starting after reboot!

    You have two options.
    Option 1: https://wiki.debian.org/NetworkConfiguration (a minimal bridge example is sketched below)
    Option 2: reinstall, and this time DON'T do whatever you did before that led to your broken network configuration.
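
    A minimal sketch of a working /etc/network/interfaces for a single-NIC PVE host, with a hypothetical NIC name (eno1) and addresses; adjust to match the actual hardware:

        auto lo
        iface lo inet loopback

        auto eno1
        iface eno1 inet manual

        auto vmbr0
        iface vmbr0 inet static
                address 192.168.1.10/24
                gateway 192.168.1.1
                bridge-ports eno1
                bridge-stp off
                bridge-fd 0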
  12. A

    vmbr0 and vmbr1 not starting after reboot!

    I have no idea what I'm looking at, and even with that, I can search for eno3 or 7df06 and get no hits.
  13. A

    vmbr0 and vmbr1 not starting after reboot!

    Unless your interfaces changed names, they would be brought up during boot. Check your interface names and look through your dmesg/journal to see what's going on (quick checks sketched below).
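
    A few quick checks, assuming the stock ifupdown2 networking service and hypothetical interface names:

        ip -br link                        # current NIC names vs. what /etc/network/interfaces expects
        dmesg | grep -i -e renamed -e eno  # did the kernel/udev rename the interfaces?
        journalctl -b -u networking        # why the bridges failed to come up at boot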
  14. A

    create ceph disk, error wiping

    That's not particularly actionable ;) smartctl --test=long /dev/sdc
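
    Once the long self-test has had time to finish, the result can be read back (same device as above):

        smartctl -l selftest /dev/sdc   # self-test log: completed without error vs. read failure + LBA
        smartctl -a /dev/sdc            # full attribute/error dump if more context is needed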
  15. A

    Proxmox newbie, long-time Linux user.

    It could be, especially if it is a vanilla ESXi play. You would need to evaluate your use case in its totality (distributed switches, storage technology, vROps, DRS, etc.); PVE's scope is quite a subset of the entirety of the vSphere stack. You'd need to be more specific. In what context? I guess...
  16. A

    vmbr0 and vmbr1 not starting after reboot!

    post the content of /etc/network/interfaces
  17. A

    How to find the disk with errors from a log with only SATA errors ?

    Run lsscsi --verbose; you're looking for bus id 6.00. But with only 4 drives, just run a full SMART test on ALL of them (loop sketched below).
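
    A hypothetical loop over four SATA drives (adjust the device list to what lsscsi actually reports):

        for d in /dev/sd{a..d}; do
            smartctl --test=long "$d"      # kick off the long self-test on each drive
        done
        smartctl -l selftest /dev/sda      # repeat per drive once the tests finish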
  18. A

    CEPH on multipathed devices - override default behaviour

    It's not. If you really insist on using this hardware, make all disks passthrough (or single-drive RAID0, although that can still have issues when mapped as OSDs) and connect each storage device DIRECTLY to a SINGLE HOST. There's no point or benefit in multipathing for your use case.
  19. A

    Dutch Proxmox Day 2025 - Free Proxmox Community Event

    hmm. I'm due for a Eurotrip. maybe I can swing it if they'll still let me in with an American passport ;)
  20. A

    Migration in failed situation without HA ?

    Umm... I don't understand your network. Why do you need cascading routers? EITHER your WRT or OPNsense can and should manage your entire network. It could work, but this describes a Proxmox cluster, which I thought you wanted to avoid. Seriously, why are you looking for so much convolution? just...