Search results

  1. io-error

    After the out-of-disk-space issue is resolved (perhaps by deleting snapshots from available VMs), you can run "qm resume <vmid>" to get the VM running again and flush whatever writes were pending to disk. The VM should resume without issue.
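
    For example, a minimal recovery session on the Proxmox host might look like the following, assuming a paused VM with the hypothetical ID 100:

        # check the VM's current state on the host
        qm status 100 --verbose
        # once space has been freed on the affected storage, resume the VM
        qm resume 100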
  2. Proxmox V4 bugs

    I run primarily CentOS/RH installations, so all I do is "yum install qemu-guest-agent". You also have to enable the Agent option in the Proxmox settings for the VM; otherwise qemu-guest-agent will exit shortly after the VM boots.
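
    A minimal sketch of both steps, assuming a systemd-based CentOS guest and the hypothetical VM ID 100:

        # inside the guest: install and start the agent
        yum install qemu-guest-agent
        systemctl enable --now qemu-guest-agent

        # on the Proxmox host: enable the Agent option for the VM
        qm set 100 --agent enabled=1

    Note that the Agent option only takes effect after the VM has been fully stopped and started again, since it adds a virtio serial device to the VM.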
  3. Proxmox V4 bugs

    Actually, this is done with the qemu-guest-agent in the VM: the agent sends a heartbeat back to the QEMU process so the VM can be rebooted if it stops responding.
  4. Proxmox V4 bugs

    And so there is! Good find. Thanks for the heads up.
  5. Proxmox V4 bugs

    Yes, that's a usable workaround. However, if I already manage the VMs in the web interface, I expect to have that capability there.
  6. Proxmox V4 bugs

    Tom, I concede that I did not have the "restricted" option checked for that group. As I understand it now, HA groups are affinity groups identifying the preferred hosts for running a particular VM, but the VM is not limited to those hosts unless the "restricted" option is checked. I did not know this. My...
  7. Proxmox V4 bugs

    Sigh.... Tom, yes, I did restrict it to just one host. That is the problem and the reason I call it a bug. Do you need a screenshot to prove it?
  8. Proxmox V4 bugs

    I should point out that the VM db1 is now stuck at this point. There is no way to migrate db1 back to host m5 using the web interface; I have to manually copy the 105.conf file from host m6 back to host m5.
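
    For reference, the manual workaround relies on VM configs living in the cluster-wide /etc/pve filesystem, keyed by node name; moving the file reassigns the VM to the other node. A sketch, assuming the VM is stopped and not currently under HA control:

        # run on any cluster node: move the config from m6 back to m5
        mv /etc/pve/nodes/m6/qemu-server/105.conf \
           /etc/pve/nodes/m5/qemu-server/105.conf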
  9. Proxmox V4 bugs

    Take a look at this test setup. I have an HA VM called db1 running on host m5. You can see that it is part of an HA group called M5, which contains only a single host (m5). This means it should never migrate to any other host. The VM called db1 uses only non-shared storage local-lvm...
  10. Proxmox V4 bugs

    I just wanted to clarify that you need to look at this from a production-environment standpoint. I have 30-40 VMs running on a host, and I need to restart that host because I am doing repo updates. Why on earth would I want to also shut down all the VMs and disrupt service? My job is to make sure...
  11. Proxmox V4 bugs

    Thanks for your response, Dietmar. Please check this behaviour against VMware ESXi with vCenter Server. I'm confused as to why you think this is unexpected behaviour. If the VMs are all running on a host and I shut down that host, why would I not want to keep these VMs running? I think this is more common...
  12. Proxmox V4 bugs

    Tom, that is why it is called a bug. The HA manager shouldn't migrate it, but it does! Please try this for yourself.
  13. Proxmox V4 bugs

    Thanks for providing the link. I will submit it as a feature request. Concerning your second point, I still consider this a bug because I want the VM to be HA. What if the QEMU process for that VM crashes? What if the VM's OS crashes and the qemu-guest-agent is no longer sending heartbeats to the host...
  14. Proxmox V4 bugs

    Thanks for your response, Tom. Whether this is a new feature or not is irrelevant. If I have running VMs on a node that I want to restart (because of repo updates), I expect those VMs to be transferred automatically to other nodes when I click the "restart" button in the web interface. It is...
  15. Proxmox V4 bugs

    Hello, We've been using Proxmox for years now and really like the latest version. A few bugs are proving annoying, though: (1) When restarting a node using the web interface in a multi-node cluster, any running non-HA VMs are not automatically live-migrated from that node to any other node. I would...
  16. Multi-node HA cluster with iSCSI only storage

    Thanks for the feedback. This is a crippled solution. I need thin provisioning, linked clones, and snapshots, plus the ability to store ISOs and backups. I'm going to have to look for NFS storage from a NetApp solution instead of EqualLogic's PS series.
  17. Multi-node HA cluster with iSCSI only storage

    Wonderful. How about the details? What version of Proxmox are you running and how are the images stored?
  18. Multi-node HA cluster with iSCSI only storage

    Hello Tom, Thanks for the feedback. Can you suggest a Wiki that outlines how a multi-node cluster can access iSCSI storage where I don't have any limitations (as with NFS)? I want to be able to run both containers and KVM images and be able to do live snapshots and migrations. Thanks in advance.
  19. Multi-node HA cluster with iSCSI only storage

    What this tells me is that Proxmox with iSCSI as the backend is not "quite there" as a replacement for VMWare ESXi. The methods described in your links all have limitations (can't take snapshots, offline migration only / copying of containers when migrating, etc.) It's a bummer that Proxmox...
  20. Multi-node HA cluster with iSCSI only storage

    Hello, What are my storage options for creating a 3-node Proxmox HA cluster using an EqualLogic PS6000X as the storage backend? The PS6000X only supports iSCSI. I tried to enable OCFS2 as well as GFS2 on the nodes, but both Debian packages clash with PVE. This leads me to ask what options...
