Search results

  1. Unable to enable specific CPU flag using cpu-models.conf

    Using qm showcmd when the CPU is set to default kvm64, I see that the lahf_lm flag is actually configured already: -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \ lscpu inside VM fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht...
  2. Unable to enable specific CPU flag using cpu-models.conf

    There is no error. The VM can be started up, but when I check the output of lscpu, the lahf-lm flag is missing. The rest of the additional flags appear as expected.
  3. Where can I find a basic OVF template XML for importing VMs into Proxmox?

    As I had to get that task done quickly back then, I simply extended my script to generate the commands to create the VM and attach the disks.
  4. Unable to enable specific CPU flag using cpu-models.conf

    Using PVE 7.2. For max live migration compatibility I had been sticking to kvm64 but some requirements now need some VMs to use x86-64v2 spec. The flags used for the cpu model inside cpu-models.conf are: flags: +cx16;+lahf-lm;+popcnt;+sse4.1;+sse4.2;+ssse3 I was able to enable all the other...
  5. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    Overlooked reporting back on this issue. The root cause was actually in the message but misunderstood, due to the context of me using ZFS only on that node. Proxmox complaining about being unable to activate local_zfs was because the disk for ZFS was dead. Once the disk issue was fixed, the migration...
  6. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    @fabian Thanks for the confirmation. I was able to move the VM back to OLD_NODE, then delete the snapshots/disks. To verify that everything is now OK, I used the GUI to migrate the shut-down VM to NEW_NODE and then tried to migrate it back to OLD_NODE. Since there are no longer any snapshot or...
  7. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    I shut down the VM cleanly. Executed the command on the old node as well as on the new node. No output on the command line, but from the web UI I can see the task appear in the log and fail with the same reason. Can I simply move a conf file or something from <CURRENT_NODE> to <OLD_NODE>? Tried to...
  8. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    I have a PVE 7.2-7 HA cluster with Ceph. Some VMs were originally on local ZFS storage on Node A which was our first node before Ceph was installed. They were migrated to Ceph storage eventually but unfortunately I overlooked that there were some snapshots and unused disks still on Node A local...
  9. VM Shared Disk Between Guests

    Thanks, would you happen to have measured the difference in performance, or be able to estimate how big a difference it made?
  10. VM Shared Disk Between Guests

    Hi, did you ever test the performance difference between sharing the disk directly vs going through an intermediate iSCSI host VM?
  11. Same VM on server and multiple simultaneous clients

    If the users are just going to use the browser, then you can always drop Windows 10 and use a Linux desktop VM. That should allow multiple users.
  12. Updating network settings in GUI deletes bond second IP

    Thanks, that worked, and the interface shows up in Proxmox. Can't remember why I didn't use this form when I first set it up. While it is not recognized and is uneditable in the GUI, at least Proxmox does not remove it when updating the network configuration.
  13. Updating network settings in GUI deletes bond second IP

    I have a bond interface configured with two network adapters. This is on all my nodes. The bond interface carries two networks 10.1.1.x/24 and 10.1.2.x/24 auto bond0 iface bond0 inet static address 10.1.1.2/24 bond-slaves enp1s0f0np0 enp1s0f1np1 bond-miimon 100...
  14. PVE Cluster - One node `pveproxy` crashing seemingly randomly

    I am encountering pretty much the same problem. Please share how you resolved this in the end.
  15. Enable HA on all VM's

    The awk syntax doesn't appear to work properly in current Proxmox; it gives me just one particular vmid out of the entire list for some reason. Adjusting it to match 0-9 appears to work properly: qm list 2>/dev/null | awk '/[0-9]+/ {print "vm:", $1, "\n"}' >> /etc/pve/ha/resources.cfg
  16. Turn on "Discard" option?

    write-back is unsafe; the chance of losing data is much higher. This needs to be mentioned before recommending it for performance.
  17. No network after reboot - dependency issue

    I encountered the same problem after a node crash: networking for VMs on the node was not working until they were migrated to another node. However, I was unable to reinstall ifupdown2 when I tried. Did you not get the following error? apt-get install --reinstall ifupdown2 Reading package lists... Done...
  18. Cephfs - MDS all up:standby, not becoming up:active

    @jw6677 How did you get your mds active in the end?
  19. Where can I find a basic OVF template XML for importing VMs into Proxmox?

    XML from QNAP Virtualization Station, and also from virsh dumpxml, generates a similar error with --dryrun: warning: unable to parse the VM name in this OVF manifest, generating a default value { "disks" : [], "qm" : { "cores" : "", "memory" : "" } } Without --dryrun, the error is...
  20. Where can I find a basic OVF template XML for importing VMs into Proxmox?

    XML from standard KVM virsh dumpxml, which produces the following error with --dryrun: warning: unable to parse the VM name in this OVF manifest, generating a default value { "disks" : [], "qm" : { "cores" : "", "memory" : "" } } Without --dryrun warning: unable to parse...
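A minimal sketch of the one-liner discussed in result 15 above, assuming the standard `qm list` output where VMIDs appear as numbers in the first column; anchoring the pattern on the first field (rather than an escaped-bracket regex like `/\[0-9]+/`, which would look for a literal `[`) skips the header line and matches every VM:

```shell
# Append an HA resource entry for every VM listed by `qm list`.
# Assumes `qm list` prints a header row followed by one line per VM
# whose first column is the numeric VMID; the trailing "\n" leaves a
# blank line between entries, matching resources.cfg's section style.
qm list 2>/dev/null \
  | awk '$1 ~ /^[0-9]+$/ {print "vm:", $1, "\n"}' \
  >> /etc/pve/ha/resources.cfg
```

Note this blindly appends; if an entry for a VMID already exists in `/etc/pve/ha/resources.cfg`, it would be duplicated, so check the file (or use `ha-manager add`) on a cluster that already has HA resources configured.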