Search results

  1. Sp00nman

    Option for "group" gone in edit ct/vm machine HA setting in Prox 9

    As far as I can tell, if I want to move a VM to another HA group I now need to delete the existing VM from Resources, migrate the VM to a host associated with the new group, and then re-add the VM to Resources? Is this correct?
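    (For contrast, a minimal sketch of the old CLI route; vm:100 and prod-group are hypothetical names. Proxmox 9 reworks groups into HA rules, so verify against your release.)

        # PVE 8.x: reassign an HA resource to a different group in one step
        ha-manager set vm:100 --group prod-group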
  2. Sp00nman

    Option for "group" gone in edit ct/vm machine HA setting in Prox 9

    Hi all, as above: does anyone know where the option to change HA groups has disappeared to in the Proxmox 9 GUI?
  3. Sp00nman

    Help! - Multipathing keeps breaking my root ZFS pool

    @bbgeek17 - Yes, I think you are onto something - nodes 1-7 are all using enterprise disks, nodes 8-18 use cheap commercial SSDs (got lots of spares to accommodate failures). Will revert to using the RAID controller on these hosts - this way the disk presented by the RAID controller will have...
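    A hedged sketch of the usual guard for this failure mode: blacklist the local boot SSDs in /etc/multipath.conf so multipathd never claims the ZFS root vdevs (the WWIDs below are placeholders, not values from this thread):

        # /etc/multipath.conf - keep multipathd off the local SATA boot SSDs
        blacklist {
            wwid "355cd2e404b7a0001"   # hypothetical WWID, first root SSD
            wwid "355cd2e404b7a0002"   # hypothetical WWID, second root SSD
        }
        # look up the real WWIDs with: /lib/udev/scsi_id -g -u -d /dev/sda
        # then apply: systemctl reload multipathd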
  4. Sp00nman

    Help! - Multipathing keeps breaking my root ZFS pool

    Hi there, let me try to show you: below is a working node without the issue, and here is one that has the problem:
  5. Sp00nman

    Help! - Multipathing keeps breaking my root ZFS pool

    Literally all nodes are otherwise exactly the same - same HW, same setup and config, same multipath config, all patched and updated to the latest version in the enterprise repo. The only difference is when they were deployed and which version of the Proxmox ISO was used.
  6. Sp00nman

    Help! - Multipathing keeps breaking my root ZFS pool

    Morning Proxmoxers! I have a serious problem and am at my wits' end as to how to resolve it: I have an 18-node cluster, each node set up with 2 x SATA SSD in ZFS RAID1 for the OS, and various storage appliances deployed using iSCSI and LVM thick provisioning for shared storage for the cluster. Nodes 1-7 were...
  7. Sp00nman

    Acme plugin not working suddenly

    Morning all, I have had an ACME plugin set up via GoDaddy to manage my Let's Encrypt cert for a few years and it has suddenly stopped working:

        Loading ACME account details
        Placing ACME order
        Order URL: https://acme-v02.api.letsencrypt.org/acme/order/850590517/276691096917
        Getting authorization...
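    A hedged first-pass debugging sketch, assuming the stock pvenode CLI (no thread-specific values):

        pvenode acme plugin list          # confirm the GoDaddy DNS plugin still exists
        pvenode acme account list         # confirm the ACME account is still registered
        pvenode acme cert order --force   # re-run the order and watch where it stalls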
  8. Sp00nman

    New vIOMMU setting in machines menu

    Found it: https://michael2012z.medium.com/virtio-iommu-789369049443
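    In short, per that link, virtio-iommu gives the guest a paravirtualized IOMMU so it can manage DMA mappings for its devices. A hedged sketch of enabling it (VMID 100 is a placeholder; the option is part of the machine setting):

        # expose a virtio-based vIOMMU to guest 100 (requires the q35 machine type)
        qm set 100 --machine q35,viommu=virtio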
  9. Sp00nman

    New vIOMMU setting in machines menu

    Morning all, what does this new setting do? TIA
  10. Sp00nman

    lvm-thin over iscsi on the cards any time soon?

    Hi bbgeek - thanks for the prompt response. Yes, that is correct - we need thin-provisioned shared storage for our use case - snapshots aren't essential, as we can do incremental backups using PBS.
  11. Sp00nman

    lvm-thin over iscsi on the cards any time soon?

    Hi Proxmoxers, The only issue preventing our organisation from adopting Proxmox to replace our legacy VMware stack is that lvm-thin provisioning currently doesn't work over iSCSI. Is this something that is being looked at on the development roadmap? Looking forward to your responses.
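    For reference, the setup that does work today is thick LVM on top of the iSCSI LUN, shared across the cluster. A hedged /etc/pve/storage.cfg sketch, assuming the volume group vg_shared was already created on the LUN (portal, target, and names are placeholders):

        iscsi: san-portal
            portal 192.168.10.50
            target iqn.2001-05.com.example:storage.lun1
            content none

        lvm: san-lvm
            vgname vg_shared
            shared 1
            content images,rootdir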
  12. Sp00nman

    Proxmox Ceph Cluster for production service

    Hi Proxmoxers, I'm looking at setting up a storage solution for a production service to store archival video data for a managed service. I am looking to start this service using 3 x PowerEdge R740/750s with later-model dual-socket Xeon CPUs (64 vCPU) and 128GB RAM each. For storage each...
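    A hedged sketch of the Ceph bootstrap with the stock pveceph tooling (network and device names are placeholders):

        pveceph install --repository no-subscription   # on every node
        pveceph init --network 10.10.10.0/24           # once, for the Ceph network
        pveceph mon create                             # on the first three nodes
        pveceph osd create /dev/sdb                    # per data disk, per node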
  13. Sp00nman

    PBS on prem sync to PBS in azure

    Morning all, I want to set up a PBS VM inside Azure and a twice-daily sync job from my on-prem PBS to the Azure PBS so I have a complete off-site backup in place. Can I get away with using Standard HDD attached disks (2000 IOPS for 10TB) for the VM storage for the datastore or do I need...
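    A hedged sketch of the pull-style setup, run on the Azure PBS (host, auth, and datastore names are placeholders; the schedule is a systemd-style calendar event):

        # register the on-prem PBS as a remote
        proxmox-backup-manager remote create onprem \
            --host pbs.example.lan --auth-id 'sync@pbs' \
            --password 'xxxx' --fingerprint '<on-prem certificate fingerprint>'
        # pull into the local datastore at 06:00 and 18:00
        proxmox-backup-manager sync-job create onprem-pull \
            --remote onprem --remote-store store1 \
            --store azure-store --schedule '06,18:00'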
  14. Sp00nman

    NVIDIA a16 card GPU passthrough multiple VMs

    Bumping this old thread to find out whether this is possible.
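    For context, the A16 shares a GPU across VMs via NVIDIA vGPU mediated devices (mdev) rather than plain passthrough, and needs the NVIDIA vGPU host driver installed. A hedged sketch (PCI address, VMID, node name, and mdev profile are placeholders):

        # list the mdev profiles the card offers, then assign one per VM
        pvesh get /nodes/<node>/hardware/pci/0000:c1:00.0/mdev
        qm set 101 --hostpci0 0000:c1:00.0,mdev=nvidia-742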
  15. Sp00nman

    VM shutdown, KVM: entry failed, hardware error 0x80000021

    Hi Stoiko - You are 100% correct - the kvm.tdp_mmu=N flag wasn't set correctly in /etc/kernel/cmdline. I have now rectified the problem and will monitor:

        root@prox-lab-host-02:~# cat /sys/module/kvm/parameters/tdp_mmu
        N

    Thanks for your assistance.
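    A hedged recap of those steps for a ZFS-booted host (paths as in the thread; the flag must stay on the single cmdline line):

        # append the flag, sync it to the boot partitions, reboot
        sed -i 's/$/ kvm.tdp_mmu=N/' /etc/kernel/cmdline
        proxmox-boot-tool refresh
        reboot
        # verify after reboot (expect: N)
        cat /sys/module/kvm/parameters/tdp_mmu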
  16. Sp00nman

    VM shutdown, KVM: entry failed, hardware error 0x80000021

    My lab hosts are all single-socket Intel Xeon E-2186G (12) @ 4.700GHz.
  17. Sp00nman

    VM shutdown, KVM: entry failed, hardware error 0x80000021

    Apologies, I should have been more specific: it's definitely crashing.

        root@prox-lab-host-01:~# uname -a
        Linux prox-lab-host-01 5.15.39-1-pve #1 SMP PVE 5.15.39-1 (Wed, 22 Jun 2022 17:22:00 +0200) x86_64 GNU/Linux
        root@prox-lab-host-01:~# cat /etc/kernel/cmdline
        root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet...
  18. Sp00nman

    VM shutdown, KVM: entry failed, hardware error 0x80000021

    Bad news - over the last few days I have had 3 separate instances of Windows 2022 VMs and 1 Windows 11 VM shutting themselves down with tdp_mmu=N configured on my 3-node lab cluster running kernel 5.15.39-1. Going to re-pin kernel 5.13 for now.