Recent content by freebeerz

  1. Clean shutdown of whole cluster

    I recently had to move a cluster to a new physical location and tested a "naive" shutdown of the running 3-node cluster with some HA VMs (shutdown policy = migrate) plus Ceph. I just ran "shutdown now" via SSH on all the nodes (within one second) and, not surprisingly, this failed quite badly...
  2. Adding a Second Public Network to Proxmox VE with Ceph Cluster

    I'll take the opportunity to thank you for your gist files! They were really useful when I set up my 3-node Proxmox/Ceph cluster with a Thunderbolt mesh network a few months ago (I went with a simple mesh without dynamic routing) :)
  3. Adding a Second Public Network to Proxmox VE with Ceph Cluster

    I also have a private mesh network over Thunderbolt for my 3-node Ceph cluster and I'm wondering if it's possible to mount CephFS on external clients. It doesn't look like it's possible to have monitors listening on multiple interfaces, even with multiple subnets in "public_network". UPDATE: Ok so...
  4. MS-01 HCI HA Ceph Sanity Check

    Are you still running this after more than a year? I also have 3 MS-01s, but only a single enterprise Micron 7300 Pro 3.84TB M.2 drive in each. I'm thinking about buying some 22110 3.84TB 7400 Pro drives to run 2 Ceph OSDs per server, but I'm worried about excessive heat... What are your enterprise drives...
  5. Ceph network failover

    Hi, I just set up a Ceph ring mesh network on 3 nodes using the "simple routed setup" from the guide you posted (2 Thunderbolt ports on each node, connected to the other 2 nodes), and I was wondering the same thing about using a spare Ethernet interface connected to a dedicated switch as failover, or even...
  6. detect if VM/CT migration is running

    Hi, is there a way to detect whether a VM or CT migration is running, using the CLI or API? I'm writing an Ansible playbook to do rolling updates for a Proxmox cluster. The idea is to loop through each node in sequence: - move a node to maintenance - wait for VM/CT evictions to another node -...
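One possible approach is to poll the cluster task list and look for unfinished migration tasks. A minimal sketch, assuming the JSON shape of the Proxmox `/cluster/tasks` API response (running tasks have no `endtime`; VM migrations have type `qmigrate`, CT migrations `vzmigrate`):

```python
# Sketch: detect running VM/CT migrations from a Proxmox /cluster/tasks
# response. Assumptions (verify against your API version): running tasks
# lack an "endtime" key, and migration tasks have type "qmigrate" (VM)
# or "vzmigrate" (CT).
MIGRATION_TYPES = {"qmigrate", "vzmigrate"}

def running_migrations(tasks):
    """Return the subset of tasks that are migrations still in progress."""
    return [t for t in tasks
            if t.get("type") in MIGRATION_TYPES and "endtime" not in t]

# Example with mock task data shaped like the API output:
tasks = [
    {"type": "qmigrate", "id": "101", "node": "pve1"},                 # still running
    {"type": "vzmigrate", "id": "200", "node": "pve2", "endtime": 1},  # finished
    {"type": "vzdump", "id": "102", "node": "pve1"},                   # not a migration
]
active = running_migrations(tasks)
print(len(active))  # -> 1
```

In an Ansible playbook this kind of check could back an `until` retry loop that fetches the task list (e.g. via `pvesh get /cluster/tasks`) and keeps waiting while the filtered list is non-empty.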
  7. migrate VMs in cluster on planned reboot only

    I tried that, and nothing migrates automatically unless the VMs are in an HA group, but in that case they also restart automatically on another node if a node fails (which is what I want to avoid because of the lack of shared storage). My cluster HA setting is "migrate"; it would be nice to have something...
  8. migrate VMs in cluster on planned reboot only

    Yes, and that's perfectly fine with what I want to achieve: only live migrate on planned maintenance, and otherwise do not restart the VMs on another node if a node fails (and ideally, if that failed node is still running but just can't connect to the cluster, it should leave its VMs running, but I...
  9. migrate VMs in cluster on planned reboot only

    It doesn't seem to be the case: if I reboot the node without any VMs in an HA group, they just get shut down automatically. If I put them in an HA group and reboot the node from Proxmox, they gracefully migrate to another node (with ZFS replication) without downtime, but if I yank the network...
  10. migrate VMs in cluster on planned reboot only

    Hi, I want to use HA safely in a Proxmox cluster with no shared storage (only ZFS replication). The behaviour I'd like for some VMs is this: - for planned node reboots (`reboot` command in the shell or the UI reboot button): automatically live migrate the VMs to another node before the node reboots -...
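The rolling-maintenance loop described in this thread could be sketched roughly like this. All the helper callables are hypothetical stand-ins for the real CLI/API calls (for example `ha-manager crm-command node-maintenance enable <node>` to drain a node); this is a sketch of the control flow, not a ready-made playbook:

```python
import time

def rolling_update(nodes, enable_maintenance, count_guests,
                   update_and_reboot, disable_maintenance,
                   poll=5, timeout=600):
    """Drain, update, and reboot each node in sequence.

    Hypothetical helpers: enable_maintenance(node) puts the node in HA
    maintenance mode, count_guests(node) returns how many VMs/CTs are
    still on it, update_and_reboot(node) does the actual work, and
    disable_maintenance(node) brings the node back into rotation.
    """
    for node in nodes:
        enable_maintenance(node)
        deadline = time.monotonic() + timeout
        while count_guests(node) > 0:           # wait for evictions to finish
            if time.monotonic() > deadline:
                raise TimeoutError(f"guests still on {node} after {timeout}s")
            time.sleep(poll)
        update_and_reboot(node)
        disable_maintenance(node)

# Demo with fake helpers recording the call order:
calls = []
guests = {"pve1": 0, "pve2": 0}
rolling_update(
    ["pve1", "pve2"],
    enable_maintenance=lambda n: calls.append(("maint", n)),
    count_guests=lambda n: guests[n],
    update_and_reboot=lambda n: calls.append(("update", n)),
    disable_maintenance=lambda n: calls.append(("done", n)),
    poll=0,
)
print(calls)
```

The important property is that each node is fully drained before it is touched, and only one node is out of the cluster at any time, so quorum is kept on a 3-node cluster.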
  11. [SOLVED] help!!! grub rescue - proxmox 8.1

    I had the same issue (stuck at initramfs after booting from the USB stick with the EFI partition). My guess is that the boot cmdline is copied from the Proxmox rescue boot when setting up the USB boot stick, so you need to change it to match your existing Proxmox ZFS root. I fixed it like this...
  12. Install Proxmox in an OVH Vrack

    Hi, resurrecting this thread... So what's the best way to configure a Proxmox cluster with an OVH vRack on bare metal hosts? I have 3 bare metal OVH servers, all in the same vRack, and a public IP subnet assigned to the vRack. So far I've done this: ... (vmbr0 has the OVH public IP, do not use...
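For context, a common layout on such hosts splits the public uplink and the vRack onto two bridges in `/etc/network/interfaces`. A generic sketch with placeholder NIC names and documentation addresses, not the poster's actual (truncated) config:

```
auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/24      # OVH public IP on the primary NIC
    gateway 203.0.113.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.0.2.1/24         # private subnet carried over the vRack
    bridge-ports eno2            # NIC attached to the vRack
    bridge-stp off
    bridge-fd 0
```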
  13. VM with GPU passthrough freezes when turning off monitor after proxmox 6.2 upgrade

    I think you're right! I haven't tried it, but it's probably that same issue with the USB hub disappearing, and not the GPU... I'm still on QEMU 4.2, which doesn't have this problem, but hopefully they'll fix it eventually.
  14. VM with GPU passthrough freezes when turning off monitor after proxmox 6.2 upgrade

    Only the guest instantly freezes. The host is fine, but it takes a SIGKILL to kill the KVM process. As I said, the monitor has a USB hub that I use for my mouse/keyboard, and it is passed through with USB redirection. When I turn off the monitor at the power socket, the hub obviously...
  15. VM with GPU passthrough freezes when turning off monitor after proxmox 6.2 upgrade

    Hi, my setup: - upgraded from Proxmox 6.1 to 6.2 - Linux VM: Fedora 32 with GPU passthrough (AMD Sapphire 280X) => worked fine with Proxmox 6.1 - one monitor with an integrated USB hub (keyboard + mouse) plugged into the GPU, with the USB devices passed through to the VM. I have a very weird bug with the...