Recent content by starshipeleven

  1. Root ZFS on LUKS | Last questions (hopefully :) | Delaying the zfs import during boot

     Probably missing an "initramfs" option in the crypttab. You want this volume to be opened during the initramfs stage so it can be used to boot the system. The options must be luks,discard,initramfs; then update the initramfs with update-initramfs -u -k all. See...
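     A minimal sketch of the crypttab entry being described (the mapping name and UUID here are placeholders, not values from the post):

     ```
     # /etc/crypttab — "cryptroot" and the UUID are placeholder values
     cryptroot UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks,discard,initramfs
     ```

     After editing, run update-initramfs -u -k all so the change is baked into every installed initramfs and the volume gets unlocked early enough to import the root pool.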
  2. HA or migration of VMs that are turned off on a node that is shut down or rebooted

     It's set to migrate, obviously, as that's what happens with VMs that are powered on. /etc/pve/datacenter.cfg: console: vv ha: shutdown_policy=migrate keyboard: it migration: insecure,network=192.168.111.222/24 My issue is with VMs that are powered off. Those VMs are already in the HA config...
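     The /etc/pve/datacenter.cfg fragment quoted above, laid out as it appears in the file:

     ```
     console: vv
     ha: shutdown_policy=migrate
     keyboard: it
     migration: insecure,network=192.168.111.222/24
     ```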
  3. proxmox 7.0 sdn beta test

    Hi, in my setup I have two nodes in a cluster (and a Qdevice witness), and from what I understand, I need to have spanning tree protocol enabled on the bridge to be able to use a redundant managed switch setup. So I have a vmbr0 configured with openvswitch and spanning tree protocol enabled...
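     A sketch of what such an Open vSwitch bridge stanza might look like in /etc/network/interfaces (the interface names, address, and the exact RSTP option here are assumptions, not taken from the post; check the Proxmox and Open vSwitch documentation for the authoritative syntax):

     ```
     # vmbr0 as an OVS bridge with two uplinks and (R)STP enabled
     auto vmbr0
     iface vmbr0 inet static
         address 192.168.1.10/24
         ovs_type OVSBridge
         ovs_ports eno1 eno2
         ovs_options other_config:rstp-enable=true
     ```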
  4. HA or migration of VMs that are turned off on a node that is shut down or rebooted

    Hi, I have a cluster with two proxmox nodes at version 7.1 and a raspberry-like thing acting as corosync Qdevice to have quorum for HA. The nodes have replicated ZFS storage and a tiny Ceph cluster with "min size 1" (this Ceph setup is mostly for testing and holding ISOs and stuff). I'm seeing...
  5. Odd Ceph Issues

     Some arcane command-line spells I gathered from https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1 and used successfully to remove 4 "PG is incomplete" errors from my test cluster. WARNING: THIS WILL DESTROY DATA. I don't care...
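     For context, the destructive recovery is of this general kind (the PG id is a placeholder; the linked article has the actual step-by-step procedure):

     ```
     # find the stuck placement groups
     ceph health detail
     ceph pg dump_stuck inactive

     # DESTROYS whatever data the PG held: recreate it empty
     ceph osd force-create-pg 2.5 --yes-i-really-mean-it
     ```

     This is acceptable on a throwaway test cluster, as the post says, and nowhere else.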
  6. changing hardware settings of a VM with UEFI disk on Ceph breaks UEFI boot

     Yes, I configured Ceph like that because this is a 2-node system; yes, I know I need an odd number of nodes if I want to run HA. I'm not using HA, and I think (hope) it's mostly irrelevant to this issue. This is just a test system to see if I can migrate from my current setup, and if all goes well...
  7. changing hardware settings of a VM with UEFI disk on Ceph breaks UEFI boot

     Thanks for the test. Hm, so it seems the only difference is that my cluster only has 2 nodes; I'll see what happens when I add another node. Although it's weird: cluster size shouldn't matter. Can you test with a Windows VM as well? I did mention I used Windows in my test above.
  8. changing hardware settings of a VM with UEFI disk on Ceph breaks UEFI boot

     Using a couple of PCs as a test cluster, everything updated to the latest version on both. I set up Ceph from the GUI to create the storage for this cluster, using drives inside these two PCs. Creating new VMs with UEFI and setting the UEFI disk on a Ceph pool works; I can install Windows and reboot...
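     The VM setup being tested can be sketched with qm (the VM id, storage name "ceph-pool", and disk sizes are assumptions for illustration, not from the post):

     ```
     # create a UEFI (OVMF) VM whose EFI vars disk lives on the Ceph pool
     qm create 100 --name uefi-test --memory 4096 --bios ovmf --net0 virtio,bridge=vmbr0
     qm set 100 --efidisk0 ceph-pool:1,efitype=4m,pre-enrolled-keys=1
     qm set 100 --scsi0 ceph-pool:32 --scsihw virtio-scsi-pci
     ```

     The reported breakage is that after installing the OS, changing the VM's hardware settings leaves the UEFI boot entry unusable when the EFI disk is on Ceph.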