Sorry, I don't quite understand the question, but the problem I mentioned was fixed in a later version of pve-qemu-kvm. I don't know/remember the exact version numbers anymore.
I noticed pvetest had a new pve-qemu-kvm package; I tested it, and it looks like it fixes the error above.
Good job, devs, and I hope GlusterFS will get even more love from you. IMO it looks very promising on smaller clusters.
Updated to the latest kernel:
proxmox-ve: 5.1-30 (running kernel: 4.13.8-3-pve)
Now the VM dies after starting a storage migration.
create full clone of drive scsi0 (gl_ssd:10303/vm-10303-disk-1.qcow2)
drive mirror is starting for drive-scsi0
drive-scsi0: Cancelling block job
TASK ERROR: storage...
Hi,
I've been testing Gluster as a storage backend for my Proxmox cluster. Everything looks good, except that VM images can't be moved from Gluster to another storage.
Here is the error message:
create full clone of drive scsi0 (gl_ssd:10303/vm-10303-disk-1.qcow2)
transferred: 0 bytes remaining...
Hello,
I've been testing the new Sheepdog and it's looking good so far. I noticed a problem, though: disk resize is not working as it should.
Resize from the web UI:
- New size won't show up inside the VM
- The VDI is resized
- OK after stopping and starting the VM
- Size in the web UI is OK
Resize using the command line:
- qm...
Hello,
I'm testing Ceph on my small home cluster and ran into a couple of problems.
1) Snapshot with RAM doesn't work. I get task error: "VM 10302 qmp command 'savevm-start' failed - failed to open '/dev/rbd/rbd/vm-10302-state-snap2ram'"
2) Resizing a disk of running VM gives another error: "VM 10302...
Wait, I have to take that back. After installing your latest package, the VM won't boot beyond the F12 prompt. I had no "machine: q35" parameter before I made the post above. On the other hand, the previous qemu package did work, because I already had an LSI SAS adapter passed through and it worked OK. Could you...
I had the same problem, and this seems to solve it. The VM boots with the machine: q35 parameter now.
I had another problem with this package, though. It depends on libjpeg8, which is no longer in Jessie (some info here: https://github.com/hhvm/packaging/issues/96). I used this deb...
I was thinking about ZFS incremental snapshots. The initial sync, which could take quite a long time, could be done while the VM is still running on the source host. The VM would be down only during the incremental send/receive. Or could this be achieved using, for example, rsync?
SR
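The incremental idea above could be sketched like this. All names are made up for illustration (dataset rpool/data/vm-100-disk-0, VM id 100, target host "pve2"), and the script only prints the commands it would run; swap the `printf` in `run` for real execution:

```shell
SRC="rpool/data/vm-100-disk-0"   # hypothetical zvol backing the VM disk
DST="pve2"                        # hypothetical target host

run() { printf '+ %s\n' "$*"; }   # dry-run: print instead of execute

# Initial full send while the VM is still running (this is the slow part)
run "zfs snapshot ${SRC}@base"
run "zfs send ${SRC}@base | ssh ${DST} zfs receive -F ${SRC}"

# Short downtime: only the blocks changed since @base are transferred
run "qm shutdown 100"
run "zfs snapshot ${SRC}@final"
run "zfs send -i @base ${SRC}@final | ssh ${DST} zfs receive ${SRC}"
```

Compared with rsync over raw image files, `zfs send -i` ships block-level deltas directly without having to scan the whole image, so the final transfer should stay proportional to what changed.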
Hello,
Now that we have ZFS in Proxmox (and the just-announced ZFS sync tool), I've been toying with the idea of migrating VMs using zfs send/receive.
Here's the basic idea:
1. Snapshot VM
2. Do initial send to the target host
3. Suspend VM
4. Do final send/receive
5. Transfer VM config
6. Start VM...
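The steps above might look roughly like this as commands. Everything here is a placeholder assumption (VM id 100, one disk on rpool/data, target host "pve2"), and the script only echoes each command instead of running it:

```shell
VMID=100
DISK="rpool/data/vm-${VMID}-disk-0"
TARGET="pve2"

say() { printf '%s\n' "$1"; }   # dry-run: print each command instead of running it

say "zfs snapshot ${DISK}@mig1"                                             # 1. snapshot VM
say "zfs send ${DISK}@mig1 | ssh ${TARGET} zfs receive -F ${DISK}"          # 2. initial send to target host
say "qm suspend ${VMID}"                                                    # 3. suspend VM
say "zfs snapshot ${DISK}@mig2"
say "zfs send -i @mig1 ${DISK}@mig2 | ssh ${TARGET} zfs receive ${DISK}"    # 4. final incremental send/receive
say "scp /etc/pve/qemu-server/${VMID}.conf ${TARGET}:/etc/pve/qemu-server/" # 5. transfer VM config
say "ssh ${TARGET} qm start ${VMID}"                                        # 6. start VM on the target
```

Note that in a real cluster /etc/pve is the shared pmxcfs, so step 5 would mean moving the config file under the target node's directory rather than copying it with scp; the scp line is just a stand-in for "transfer the config".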
Ok, so this is getting clearer now. Could you then clarify how I, as a paying customer, can get the source of a certain package from the git repository? I mean the exact version of the package, so that after compiling it I get the same version of the binary. There are no branches or tags in...