After updating a two-node cluster to 4.3, I rebooted the nodes one by one (not at the same time). After the reboot none of the VMs were running, and trying to start them on any node gave a cluster error:
root@proxmox2:~# qm start 111
cluster not ready - no quorum?
Checking the cluster showed...
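For anyone hitting the same thing: quorum can be checked, and (as a stop-gap on a two-node cluster) forced, roughly like this; the commands below are illustrative, not the actual output I got:
root@proxmox2:~# pvecm status
root@proxmox2:~# pvecm expected 1
root@proxmox2:~# qm start 111
pvecm expected 1 tells corosync to treat a single vote as quorate, so only use it while the other node is down.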
@dietmar is this what you were talking about?
Live disk migration with libvirt blockcopy
https://kashyapc.com/2014/07/06/live-disk-migration-with-libvirt-blockcopy/
According to this document, all the facilities needed for non-shared-storage migration are already included in QEMU / libvirt.
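The core of it is a single virsh call, roughly like this (domain name and destination path are just examples):
virsh blockcopy guest01 vda /mnt/target/guest01.qcow2 --wait --verbose --pivot
QEMU mirrors the disk to the destination while the guest keeps running, and --pivot switches the guest over to the copy once it is in sync.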
Looks like there is no stopping you; you can't even keep your own promise of not offering any more "advice". Dude, seriously:
1. No one asked your opinion on the price of a 10G backbone related to our business. Asking you would be stupid, since you do not possess the facts, the benchmarks, the...
You have no idea what's usual or what's unusual. You have no information about what's an edge case regarding storage decisions of Proxmox users. And how could you? Are you a Proxmox developer? Do you have any hard data or talked to that many different people running Proxmox clusters?
You are...
I understand that, but my original post was also about suspend migration... the problem is, unless you use shared or distributed storage, migrating a KVM guest means considerable downtime.
And since QCOW2, LVM and ZFS all support snapshots, I reckon it would be trivial to have at least a two-phase...
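A two-phase approach could look roughly like this, sketched with ZFS (dataset and VM names are made up):
# phase 1: bulk copy while the guest keeps running
zfs snapshot rpool/data/vm-111-disk-1@mig1
zfs send rpool/data/vm-111-disk-1@mig1 | ssh node2 zfs recv rpool/data/vm-111-disk-1
# phase 2: suspend, send only the delta, then resume on the target
qm suspend 111
zfs snapshot rpool/data/vm-111-disk-1@mig2
zfs send -i @mig1 rpool/data/vm-111-disk-1@mig2 | ssh node2 zfs recv rpool/data/vm-111-disk-1
The downtime is then only the time needed for the incremental send, not for the whole disk.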
Could you share some numbers from your extensive testing?
Because the test results I saw point to cache=writeback being the winner, even on ZFS:
http://jrs-s.net/2013/05/17/kvm-io-benchmarking/
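For anyone wanting to try it, the cache mode is just a per-disk option (disk and storage names below are examples only):
qm set 111 --virtio0 local-zfs:vm-111-disk-1,cache=writeback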
I seriously doubt that "most people" use distributed / shared storage, since that requires 10G networking to reach acceptable performance, and it's still on the expensive side. We use bonded 1G interfaces (tried 2x and 3x), and Ceph is sluggish even for backup purposes (both latency and...
According to the LXC 2.0 / LXD documentation, it looks possible to see and use only a limited number of load-balanced CPU cores inside a container.
https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
Hopefully we get to see these features (already available in LXC 2.0 with LXD)...
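The relevant knobs from that page look like this (container name is just an example):
lxc config set mycontainer limits.cpu 2
lxc config set mycontainer limits.cpu.allowance 50%
i.e. the container sees and is scheduled on two load-balanced cores, and can optionally be throttled to a share of CPU time on top of that.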
Well, even if true live migration is not possible from local storage, an almost-live (suspended) migration is badly needed. We have KVM guests several hundred gigabytes in size, and turning them off for migration causes so much downtime that our users simply cannot accept it.
Since QEMU/QCOW2, LVM...
Upon upgrading our cluster to PVE 4, I just realized that live migration of KVM guests on ZFS local storage (zvol) still does not work. Since vzdump live backups do work (presumably using ZFS snapshots), I wonder why this is not implemented for migration, and when it is expected. Is it on the...
I have an idea for an enhancement of vzdump: when creating a backup job, it would be great to have an option to store the guest's NAME in the backup filename (in addition to the VM ID).
So with the option disabled the filenames would look unchanged:
vzdump-qemu-240-2016_09_02-01_27_32.log...
Your statement is false, and contributes to the stupid but unkillable myth of ZFS using up all your RAM.
Fact 1: ZFS only uses RAM that the system doesn't need. And no, it won't use half of your RAM by default.
Fact 2: ZFS does not use a lot of RAM at all even if there is a lot of free memory...
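And for anyone who is still worried: the ARC size can simply be capped. On Proxmox/Debian that is one module option (the 8 GiB value is just an example):
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
# takes effect after a reboot; it can also be changed live via /sys/module/zfs/parameters/zfs_arc_max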
Upon reading the LXD / LXC 2.0 documentation, I have come across some new CPU limit features on this page:
https://www.stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
According to @dietmar Proxmox can do anything LXD does:
So it looks possible to see and use only a limited number of...
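If that is true, I would expect the Proxmox equivalents to be roughly these pct options (values are just examples):
pct set 101 -cpulimit 2
pct set 101 -cpuunits 512
where cpulimit caps the container at the equivalent of two cores and cpuunits sets its relative scheduling weight.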
Are there any features of qcow2 in a directory that are not supported by zvol-based raw virtual disks?
Also, can I create directory type storage on ZFS?
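What I have in mind is roughly this (names are made up): create a plain dataset and register it as a directory storage, which would then hold qcow2 files:
zfs create rpool/qcowdir
pvesm add dir zfs-qcow --path /rpool/qcowdir --content images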
I have just installed Proxmox VE 4.2 (on ZFS) and tried to restore a few VMs to it (from NFS). Even though all the VMs used the qcow2 disk format when they were backed up, Proxmox creates ZFS zvols for them instead of qcow2 files.
Is there any way to restore VMs directly to qcow2 on...
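What I expected to work was forcing the target storage on restore, something like the following (storage name is just an example, and I am not sure whether the disk would actually come back as qcow2 or just as that storage's default format):
qmrestore <backup.vma.lzo> 111 --storage local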
Thank you for this. Two questions:
- is this supposed to work on Proxmox 3.x or only 4.x?
- is it possible to adapt this for OpenVZ migration (rsync) somehow?
In our experience, CFQ performed much worse under high IO load than deadline (on Adaptec HW RAID10); especially during LVM snapshot backups the whole server ground to a halt.
Deadline or NOOP are the two best candidates if you are using a RAID controller that has its own IO queue, so requests...
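Switching is easy to test at runtime and to make permanent afterwards (device name is just an example):
echo deadline > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler
# permanent: add elevator=deadline to GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub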