I have a strange issue with the balloon driver on a machine of mine.
With the balloon driver enabled and its service running, memory usage inside the VM climbs until it hits the 98% ceiling that Windows puts on memory usage. None of the processes inside the VM show any significant...
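If anyone wants to see what the balloon is actually doing while this happens, the QEMU monitor will show it. A minimal sketch, assuming a VM with ID 100 (the VMID is made up):

    # Open the QEMU monitor for VM 100, then ask for the balloon state
    qm monitor 100
    # at the qm> prompt:
    info balloon

    # To rule the driver out, disable ballooning for the VM entirely
    qm set 100 --balloon 0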
SSDs can write many TB before they fail. It's very unlikely, bordering on impossible, that the disk Proxmox lives on has written that much data in such a short space of time.
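It's easy enough to check rather than guess, too. A quick sketch, assuming a SATA SSD at /dev/sda whose firmware reports lifetime writes (attribute names vary by vendor):

    # Dump SMART data and pick out the write/wear counters
    smartctl -a /dev/sda | grep -iE 'Total_LBAs_Written|Wear_Leveling|Percentage Used'
    # Total_LBAs_Written x logical sector size (usually 512 bytes) = bytes written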
I solved this myself by doing what fireon suggested. On the node that has the NFS shares, I mount them as folders instead. It stops me being able to live migrate, but that node is only for data storage anyway, so it has no running VMs besides a DC.
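For anyone wanting to do the same, it's roughly two steps: mount the export like any other folder, then register that folder as directory storage. A sketch, with the server address, export path, and storage ID all invented:

    # /etc/fstab on the node: mount the NFS export as a plain folder
    192.168.1.10:/tank/vmdata  /mnt/vmdata  nfs  defaults,_netdev  0  0

    # Register the folder as directory storage, restricted to this node
    pvesm add dir vmdata-local --path /mnt/vmdata --content images,rootdir --nodes node2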
Part C can be achieved by marking some of the VMs as HA. If the node dies, then as long as their storage is shared across all nodes, they will (I think) automatically be restarted somewhere else. Any non-HA VMs will stay offline.
However, you cannot mark VM A as being unable to run on Host A. For...
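Marking a VM as HA is a one-liner if you want to try it; a sketch, assuming VMID 100:

    # Manage VM 100 with the HA stack so it restarts elsewhere if its node dies
    ha-manager add vm:100 --state started
    # See what the HA manager currently tracks
    ha-manager status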
All of what you want to do is possible, but it involves writing the code to handle it yourself. The API exposes all the functions needed to achieve your result.
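As a starting point, pvesh exposes the same API from the shell. A minimal sketch (the node names and VMID are placeholders):

    # List all cluster resources (VMs, storage, nodes) as JSON
    pvesh get /cluster/resources --output-format json
    # Migrate VM 100 from node1 to node2 via the same call the GUI uses
    pvesh create /nodes/node1/qemu/100/migrate --target node2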
You seem suitably mad, so I'll stop offering advice. The response from a Proxmox staff member above said that no one was working on it, nor is there much demand for such a feature. Make of that what you will. If your business/customers have high performance requirements yet aren't willing to...
Except your use case is very unusual. You have a cluster, yet no shared storage? Could you not repurpose one of your cluster nodes and make that the shared storage?
From a business perspective, it would probably take a good length of time to satisfy your very specific edge case when solutions are already...
Ceph is distributed storage. Shared storage would be something like a NAS/SAN exporting the images over NFS, which is a very common setup. Then you can live migrate, and all that has to move is the RAM.
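Hooking up a NAS that way is a single command; a sketch, with the server address and export path invented:

    # Add an NFS export as shared storage, visible to every node in the cluster
    pvesm add nfs nas-images --server 192.168.1.20 --export /volume1/proxmox --content images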
If you want live migration, redesign your cluster. Local storage isn't designed...
I'm experiencing the following problem.
Node1 - normal node, uses NFS storage for VM images
Node2 - hosts that NFS storage on local disks: one is a ZFS array, the other an ext4 disk mounted via fstab
Shutting down Node1 while Node2 is running works fine. System powers off.
Shutting down Node2 waits for 90...