Search results

  1. Storage migration from Glusterfs

    Sorry, I don't quite understand the question, but the problem I mentioned was fixed in a later version of pve-qemu-kvm. I don't know/remember the exact version numbers anymore.
  2. Storage migration from Glusterfs

    I noticed pvetest had a new pve-qemu-kvm package, tested it, and it looks like it fixes the error above. Good job, devs, and I hope GlusterFS will get even more love from you. IMO it looks very promising on smaller clusters.
  3. Storage migration from Glusterfs

    Updated to the latest kernel: proxmox-ve: 5.1-30 (running kernel: 4.13.8-3-pve). Now the VM dies after starting storage migration: create full clone of drive scsi0 (gl_ssd:10303/vm-10303-disk-1.qcow2) drive mirror is starting for drive-scsi0 drive-scsi0: Cancelling block job TASK ERROR: storage...
  4. Storage migration from Glusterfs

    Migrating to local ZFS storage, so choosing a target format is not an option. The source disk is qcow2.
  5. Storage migration from Glusterfs

    Hi, I've been testing Gluster as a storage backend for my Proxmox cluster. Everything looks good, except VM images can't be moved from Gluster to another storage. Here is the error message: create full clone of drive scsi0 (gl_ssd:10303/vm-10303-disk-1.qcow2) transferred: 0 bytes remaining...
  6. Backup NFS Server

    Here you allow only server2 to access the NFS share. You have to include server1 too. And, as mentioned, no_root_squash is also needed.
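
    For reference, an /etc/exports entry along those lines might look like this. The share path is illustrative and not taken from the thread; server1 and server2 stand in for the actual client hostnames:

    ```
    # /etc/exports on the NFS backup server
    # /srv/backup is a placeholder path; list every client that needs access
    /srv/backup  server1(rw,sync,no_root_squash,no_subtree_check)
    /srv/backup  server2(rw,sync,no_root_squash,no_subtree_check)
    ```

    After editing, re-export the table with `exportfs -ra` so the changes take effect.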
  7. Sheepdog disk resize not working

    Yes, that fixes it. Now the VM (while running) immediately sees the new size.
  8. Sheepdog disk resize not working

    Hello, I've been testing the new Sheepdog and it's looking good so far. I noticed a problem, though: disk resize is not working like it should. Resize from the web UI: new size won't show up in the VM; vdi is resized; OK after stopping and starting the VM; size in the web UI is OK. Resize using the command line: qm...
  9. Snapshots with RAM, disk resize and Ceph

    Ah yes, that was it, thanks. I remember reading that krbd is a bit faster, so I turned it on.
  10. Snapshots with RAM, disk resize and Ceph

    Hello, I'm testing Ceph on my small home cluster and ran into some problems. 1) Snapshot with RAM doesn't work. I get a task error: "VM 10302 qmp command 'savevm-start' failed - failed to open '/dev/rbd/rbd/vm-10302-state-snap2ram'" 2) Resizing a disk of a running VM gives another error: "VM 10302...
  11. [Proxmox 4b1] q35 machines failing to reboot, problems with PCI passthrough.

    It is OK. I have a Debian 8 VM with an LSI SAS card passed through, booted up and working. Thanks.
  12. [Proxmox 4b1] q35 machines failing to reboot, problems with PCI passthrough.

    Wait, I have to take that back. After installing your latest package the VM won't boot beyond the F12 prompt. I had no "machine: q35" parameter before I made the post above. On the other hand, the previous qemu package did work, because I already had the LSI SAS adapter passed through and it worked OK. Could you...
  13. [Proxmox 4b1] q35 machines failing to reboot, problems with PCI passthrough.

    Works now. After installing the updated package I was able to remove the libjpeg8 package.
  14. [Proxmox 4b1] q35 machines failing to reboot, problems with PCI passthrough.

    I had the same problem, and this seems to solve it. The VM boots with the machine: q35 parameter now. I had another problem with this package, though: it depends on libjpeg8, which is not in jessie anymore (some info here: https://github.com/hhvm/packaging/issues/96). I used this deb...
  15. "Live" migration with zfs

    Oh, now I think I get it. So QEMU can do the migration while the VM is running...
  16. "Live" migration with zfs

    I was thinking about ZFS incremental snapshots. The initial sync, which could take quite a long time, could be done while the VM is still running on the source host. The VM would be down only during the incremental send/receive. Or could this be achieved using, for example, rsync? SR
  17. "Live" migration with zfs

    Hello, now that we have ZFS in Proxmox (and the just-announced ZFS sync tool) I've been toying with the idea of migrating VMs using zfs send/receive. Here's the basic idea: 1. Snapshot the VM 2. Do an initial send to the target host 3. Suspend the VM 4. Do a final send/receive 5. Transfer the VM config 6. Start the VM...
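
    The six steps above can be sketched as a shell script. This is a dry-run sketch that only echoes the commands it would run; the dataset name, VMID, target host, and snapshot names are all illustrative assumptions, not Proxmox internals:

    ```shell
    #!/bin/sh
    # Dry-run sketch of zfs send/receive VM migration (echoes commands, does not run them).
    # VMID, SRC_DATASET, and TARGET are placeholders; adjust for your setup.
    VMID=100
    SRC_DATASET="rpool/vm-${VMID}-disk-0"
    TARGET="root@target-host"

    migrate_vm() {
      # 1. Snapshot the running VM's dataset
      echo "zfs snapshot ${SRC_DATASET}@migrate-initial"
      # 2. Initial full send while the VM keeps running (this is the slow part)
      echo "zfs send ${SRC_DATASET}@migrate-initial | ssh ${TARGET} zfs receive ${SRC_DATASET}"
      # 3. Suspend the VM so the disk stops changing
      echo "qm suspend ${VMID}"
      # 4. Final incremental send: only blocks changed since the initial snapshot
      echo "zfs snapshot ${SRC_DATASET}@migrate-final"
      echo "zfs send -i @migrate-initial ${SRC_DATASET}@migrate-final | ssh ${TARGET} zfs receive ${SRC_DATASET}"
      # 5. Transfer the VM config
      echo "scp /etc/pve/qemu-server/${VMID}.conf ${TARGET}:/etc/pve/qemu-server/"
      # 6. Start the VM on the target
      echo "ssh ${TARGET} qm start ${VMID}"
    }

    migrate_vm
    ```

    The VM is down only between steps 3 and 6, which is exactly the window the incremental send keeps short.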
  18. Details about the new pve-no-subscription repository

    I was under the impression that the nag screen is gone if you disable the enterprise repo and use only the no-subscription one. Isn't this the case?
  19. Details about the new pve-no-subscription repository

    Excellent. This and the clarification about the new repos were good news. Thanks for that.
  20. ProxmoxVE will change LICENCE?

    OK, so this is getting clearer now. Could you then clarify how I can get the source (as a paying customer) of a certain package from the git repository? I mean the exact version of the package, so that after compiling it I get the same version of the binary. There are no branches or tags in...
