Search results

  1. Unable to enable specific CPU flag using cpu-models.conf

    Using PVE 7.2. For maximum live-migration compatibility I had been sticking to kvm64, but some requirements now need certain VMs to use the x86-64-v2 spec. The flags used for the CPU model inside cpu-models.conf are: flags: +cx16;+lahf-lm;+popcnt;+sse4.1;+sse4.2;+ssse3 I was able to enable all the other...
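For reference, custom models in /etc/pve/virtual-guest/cpu-models.conf use Proxmox's section-config syntax. A minimal sketch using the flag list from the post above; the model name `x86-64-v2` and the `reported-model` value are assumptions, not taken from the post:

```
# /etc/pve/virtual-guest/cpu-models.conf (sketch)
cpu-model: x86-64-v2
    flags +cx16;+lahf-lm;+popcnt;+sse4.1;+sse4.2;+ssse3
    reported-model kvm64
```

A VM would then reference the model as `cpu: custom-x86-64-v2` in its config (custom models are addressed with a `custom-` prefix).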
  2. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    Overlooked reporting back on this issue. The root cause was actually in the message, but it was misunderstood due to the context of me using ZFS only on that node. Proxmox complaining about being unable to activate local_zfs was because the disk backing ZFS was dead. Once the disk issue was fixed, the migration...
  3. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    @fabian Thanks for the confirmation. I was able to move the VM back to OLD_NODE and then delete the snapshots/disks. To verify that everything is now OK, I used the GUI to migrate the shut-down VM to NEW_NODE and then tried to migrate it back to OLD_NODE. Since there are no longer any snapshots or...
  4. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    I shut down the VM cleanly and executed the command on the old node as well as on the new node. There was no output on the command line, but from the web UI I can see the task appear in the log and fail with the same reason. Can I simply move a conf file or something from <CURRENT_NODE> to <OLD_NODE>? Tried to...
  5. [SOLVED] Unable to move VM back to failed node after recovery because local storage not available on current node

    I have a PVE 7.2-7 HA cluster with Ceph. Some VMs were originally on local ZFS storage on Node A which was our first node before Ceph was installed. They were migrated to Ceph storage eventually but unfortunately I overlooked that there were some snapshots and unused disks still on Node A local...
  6. VM Shared Disk Between Guests

    Thanks, would you happen to have measured the difference in performance, or are you able to estimate how big a difference it made?
  7. VM Shared Disk Between Guests

    Hi, did you ever test the performance difference between sharing the disk directly vs going through an intermediate iSCSI host VM?
  8. Same VM on server and multiple simultaneous clients

    If the users are just going to use the browser then you can always drop Windows 10 and use a Linux desktop VM. That should allow multiple users.
  9. Updating network settings in GUI deletes bond second IP

    Thanks, that worked and the interface shows up in Proxmox. Can't remember why I didn't use this form when I first set it up. While it is not recognized and is uneditable in the GUI, at least Proxmox does not remove it when updating the network configuration.
  10. Updating network settings in GUI deletes bond second IP

    I have a bond interface configured with two network adapters. This is on all my nodes. The bond interface carries two networks 10.1.1.x/24 and 10.1.2.x/24 auto bond0 iface bond0 inet static address 10.1.1.2/24 bond-slaves enp1s0f0np0 enp1s0f1np1 bond-miimon 100...
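As a sketch of the setup this thread describes, with the interface names and addresses taken from the snippet above. The separate-stanza form for the second address is an assumption about what the follow-up reply settled on (ifupdown2 accepts multiple `iface` stanzas per interface), not something stated verbatim in the posts:

```
auto bond0
iface bond0 inet static
    address 10.1.1.2/24
    bond-slaves enp1s0f0np0 enp1s0f1np1
    bond-miimon 100

# Second network carried on the same bond, kept in its own stanza so a
# GUI rewrite of the primary stanza leaves it untouched (assumption
# based on the follow-up post; the GUI neither shows nor edits it).
iface bond0 inet static
    address 10.1.2.2/24
```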
  11. PVE Cluster - One node `pveproxy` crashing seemingly randomly

    I am encountering pretty much the same problem. Please share how you resolved this in the end.
  12. Enable HA on all VM's

    The awk syntax doesn't appear to work properly in current Proxmox; it gives me just one particular VMID out of the entire list for some reason. Adjusting it to match 0-9 appears to work properly: qm list 2>/dev/null | awk '/\[0-9]+/ {print "vm:", $1, "\n"}' >> /etc/pve/ha/resources.cfg
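The pipeline above can be sketched against canned `qm list` output, so the field matching is checkable without a live node. The sample VMIDs and names are made up, and the anchored first-field match (`$1 ~ /^[0-9]+$/`) is my adjustment rather than the escaped-bracket regex from the post:

```shell
#!/bin/sh
# Canned stand-in for `qm list` output: a header line plus one row per VM.
sample_qm_list() {
  printf '%s\n' \
    '      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID' \
    '       100 web01                running    2048              32.00 1234' \
    '       101 db01                 stopped    4096              64.00 0'
}

# Keep only rows whose first field is purely numeric (a VMID); the header
# row's first field is "VMID" and is skipped. Emit one "vm: <vmid>" HA
# resource stanza per VM, separated by blank lines.
sample_qm_list | awk '$1 ~ /^[0-9]+$/ {print "vm:", $1, "\n"}'

# On a real node this would be:
#   qm list 2>/dev/null | awk '$1 ~ /^[0-9]+$/ {print "vm:", $1, "\n"}' >> /etc/pve/ha/resources.cfg
```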
  13. Turn on "Discard" option?

    write-back is unsafe; the chance of losing data is much higher. This needs to be mentioned before recommending it for performance.
  14. No network after reboot - dependency issue

    I encountered the same problem after a node crash: networking for VMs on the node was not working until they were migrated to another node. However, I was unable to reinstall ifupdown2 when I tried. Did you not get the following error? apt-get install --reinstall ifupdown2 Reading package lists... Done...
  15. Cephfs - MDS all up:standby, not becoming up:active

    @jw6677 How did you get your mds active in the end?
  16. Where can I find a basic OVF template XML for importing VMs into Proxmox?

    XML from QNAP Virtualization Station, also from virsh dumpxml. Generates a similar error with --dryrun: warning: unable to parse the VM name in this OVF manifest, generating a default value { "disks" : [], "qm" : { "cores" : "", "memory" : "" } } Without --dryrun, the error is...
  17. Where can I find a basic OVF template XML for importing VMs into Proxmox?

    XML from a standard KVM virsh dumpxml, which produces the following error with --dryrun: warning: unable to parse the VM name in this OVF manifest, generating a default value { "disks" : [], "qm" : { "cores" : "", "memory" : "" } } Without --dryrun: warning: unable to parse...
  18. Where can I find a basic OVF template XML for importing VMs into Proxmox?

    I am trying to migrate VMs from various existing platforms into Proxmox and they commonly allow exporting VMs either in OVF or OVA format. Unfortunately, there are various problems when trying to qm importovf because, depending on the platform, they have custom tags which seem to confuse the...
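For orientation, the bare layout an OVF descriptor follows comes from the DMTF OVF 1.0 specification. The skeleton below is an illustration of that layout, not a Proxmox-tested template; all names, sizes, and IDs are placeholders, and libvirt `virsh dumpxml` output is a different (domain XML) format that would still need conversion:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">
  <References>
    <File ovf:id="file1" ovf:href="disk0.vmdk"/>
  </References>
  <DiskSection>
    <Info>Virtual disks</Info>
    <Disk ovf:diskId="vmdisk1" ovf:capacity="34359738368" ovf:fileRef="file1"/>
  </DiskSection>
  <VirtualSystem ovf:id="vm">
    <Info>A virtual machine</Info>
    <Name>example-vm</Name>
    <VirtualHardwareSection>
      <Info>Hardware</Info>
      <Item>
        <rasd:ResourceType>3</rasd:ResourceType><!-- 3 = CPU -->
        <rasd:VirtualQuantity>2</rasd:VirtualQuantity>
      </Item>
      <Item>
        <rasd:ResourceType>4</rasd:ResourceType><!-- 4 = memory -->
        <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
        <rasd:VirtualQuantity>2048</rasd:VirtualQuantity>
      </Item>
      <Item>
        <rasd:ResourceType>17</rasd:ResourceType><!-- 17 = disk -->
        <rasd:HostResource>ovf:/disk/vmdisk1</rasd:HostResource>
      </Item>
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
```

The "unable to parse the VM name" errors in this thread suggest exporters that omit or rename elements such as <Name>; custom vendor tags around these sections are what the posts report confusing qm importovf.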
  19. Unable to bond FC interfaces

    I have two 25G fibre interfaces on each server. I want to bond these in active-backup mode. They are connected to different physical switches. There are additional 10G interfaces for users to access VMs and ProxMox itself so these are not part of the usage for the 25G interfaces. I plan to...
  20. Importing OVA assistance

    Thanks for this suggestion. Unfortunately, it has the same problems (need to untar, unable to parse VM name, invalid host resource /disk/vmdisk1) as mentioned in several old threads from as early as mid-2018. Fortunately, some of the threads had a more complete procedure, such as in this...
