Search results

  1. HA or migration of VMs that are turned off on a node that is shut down or rebooted

    It's set to migrate, obviously, as that's what happens with VMs that are powered on. /etc/pve/datacenter.cfg: console: vv ha: shutdown_policy=migrate keyboard: it migration: insecure,network=192.168.111.222/24 (laid out as a config sketch after this result list). My issue is with VMs that are powered off. Those VMs are already in the HA config...
  2. proxmox 7.0 sdn beta test

    Hi, in my setup I have two nodes in a cluster (and a Qdevice witness), and from what I understand, I need to have spanning tree protocol enabled on the bridge to be able to use a redundant managed switch setup. So I have a vmbr0 configured with openvswitch and spanning tree protocol enabled...
  3. HA or migration of VMs that are turned off on a node that is shut down or rebooted

    Hi, I have a cluster with two Proxmox nodes at version 7.1 and a Raspberry Pi-like device acting as a corosync Qdevice to have quorum for HA. The nodes have replicated ZFS storage and a tiny Ceph cluster with "min size 1" (this Ceph setup is mostly for testing and holding ISOs and stuff). I'm seeing...
  4. Odd Ceph Issues

    some arcane command-line spells I gathered from https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1 and used successfully to remove 4 "PG is incomplete" errors from my test cluster. WARNING: THIS WILL DESTROY DATA. I don't care...
  5. changing hardware settings of a VM with UEFI disk on Ceph breaks UEFI boot

    Yes, I configured Ceph like that because this is a 2-node system; yes, I know I need an odd number of nodes if I want to run HA. I'm not using HA, and I think (hope) it's mostly irrelevant for this issue. This is just a test system to see if I can migrate from my current setup, and if all goes well...
  6. changing hardware settings of a VM with UEFI disk on Ceph breaks UEFI boot

    Thanks for the test. Hm, so it seems the only difference is that my cluster only has 2 nodes; I'll see what happens when I add another node. That would be odd, though, since cluster size shouldn't matter. Can you test with a Windows VM as well? I did mention I used Windows in my test above.
  7. changing hardware settings of a VM with UEFI disk on Ceph breaks UEFI boot

    Using a couple of PCs as a test cluster, everything updated to the latest version on both. I set up Ceph from the GUI to create the storage for this cluster using drives inside these two PCs. Creating new VMs with UEFI and putting the UEFI disk on a Ceph pool works; I can install Windows and reboot... (a CLI sketch of this kind of VM setup follows the result list)
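
Configuration sketches

The datacenter.cfg options quoted in result 1 are flattened onto one line by the search excerpt. Laid out the way /etc/pve/datacenter.cfg actually reads (one key per line), the poster's settings would look roughly like this; the values are reconstructed from the excerpt, not recommendations:

    console: vv
    ha: shutdown_policy=migrate
    keyboard: it
    migration: insecure,network=192.168.111.222/24

The shutdown_policy=migrate line is what makes running HA-managed guests migrate away when a node shuts down; the open question in results 1 and 3 is what happens to guests that are powered off but already in the HA configuration.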
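
Result 2 describes a vmbr0 built on Open vSwitch with spanning tree enabled so a redundant pair of managed switches can be used. A minimal sketch of that kind of setup is below, assuming the ifupdown integration shipped with the openvswitch-switch package; the uplink name eno1 and the address are placeholders, and enabling (R)STP via ovs-vsctl is one common approach, not necessarily what the poster did:

    # /etc/network/interfaces fragment: OVS bridge with a single uplink
    auto eno1
    iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.111.10/24
        ovs_type OVSBridge
        ovs_ports eno1

    # Enable Rapid Spanning Tree on the bridge (stored persistently in the OVS database):
    #   ovs-vsctl set bridge vmbr0 rstp_enable=true
    # or classic STP:
    #   ovs-vsctl set bridge vmbr0 stp_enable=true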
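
Result 3's topology (two nodes plus an external vote for quorum, and a Ceph pool that keeps serving I/O with a single replica) is typically assembled with commands along these lines; the QDevice address and pool name are hypothetical, and size 2 / min_size 1 is the deliberately risky, test-only setting the poster describes:

    # Add an external corosync QDevice so a 2-node cluster keeps quorum
    # (needs corosync-qnetd on the external host and corosync-qdevice on the nodes)
    pvecm qdevice setup 192.168.111.50

    # Let the test pool stay writable with only one replica available
    ceph osd pool set testpool size 2
    ceph osd pool set testpool min_size 1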
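
Result 7 creates UEFI VMs whose EFI disk sits on a Ceph pool. The poster used the GUI, but a rough CLI equivalent, with a hypothetical VM ID, storage name (ceph-vm) and disk sizes, would look like this:

    # UEFI (OVMF) VM with the EFI vars disk and system disk on Ceph-backed storage
    qm create 100 --name uefi-test --machine q35 --bios ovmf \
        --efidisk0 ceph-vm:1,efitype=4m,pre-enrolled-keys=1 \
        --scsihw virtio-scsi-pci --scsi0 ceph-vm:32 \
        --net0 virtio,bridge=vmbr0 --memory 4096 --cores 2 --ostype win10

Changing hardware settings afterwards, which is what results 5 to 7 discuss, is done with qm set on the same VM ID.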
