Search results

  1. unable to online migrate to host with local ZFS

    I don't think I was clear then. There are no disks on local storage. If you look at my repro instructions, you can see you don't even *need* a disk! I created a VM with no disk at all, and it fails this way. I did come up with an ugly workaround - create an 8GB nvme pool on a /var/foo file and...
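
    For reference, a minimal sketch of that file-backed workaround (the pool name 'nvme', the 8GB size and the /var/foo path come from the post; the exact zpool invocation is an assumption):

        truncate -s 8G /var/foo        # sparse 8 GB backing file
        zpool create nvme /var/foo     # file-backed pool matching the name the migration expects
        zfs list nvme                  # confirm it exists before retrying the migration
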
  2. unable to online migrate to host with local ZFS

    Okay, 100% reproducible test case here. Node 1 has a local ZFS pool, Node 2 doesn't. Create a VM with no disk storage at all on Node 1. Offline migrate to Node 2. Works fine. Offline migrate back to Node 1. Fails with the error about the nvme pool.
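
    A hedged sketch of that repro with the qm CLI (VMID 100 and the node names are placeholders, not from the post):

        # on node 1 (has the local ZFS pool): a VM with no disk at all
        qm create 100 --name repro --memory 512 --net0 virtio,bridge=vmbr0
        qm migrate 100 node2           # offline migrate to node 2: works
        # run from node 2 once the VM is there:
        qm migrate 100 node1           # offline migrate back: fails with the nvme-pool error
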
  3. unable to online migrate to host with local ZFS

    Okay, I worked around this with a horrible hack. Since the nvme pool was totally empty, I removed it from the storage view. Thus, I was able to migrate all guests from pve2 => pve. I then re-added the nvme ZFS storage, and began migrating all of the disks back to it. This sounds like a bug?
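
    That workaround translates to something like the following pvesm/qm calls (the VMID and disk key are placeholders; 'nvme' is the storage/pool name from the thread):

        pvesm remove nvme                      # temporarily drop the empty ZFS storage definition
        qm migrate <vmid> pve                  # guests now migrate from pve2 to pve
        pvesm add zfspool nvme --pool nvme     # re-add the storage afterwards
        qm move_disk <vmid> scsi0 nvme         # move each disk back onto it
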
  4. unable to online migrate to host with local ZFS

    I did test one emergency method. On host2, do a backup, then stop and remove the guest. Then go to host1, restore that backup and power on. This cannot be the right way though?
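
    Spelled out with the stock tools, that emergency path looks roughly like this (VMID, storage name and archive path are placeholders):

        # on host2
        vzdump <vmid> --storage <backup-storage>
        qm stop <vmid>
        qm destroy <vmid>
        # on host1
        qmrestore /path/to/vzdump-qemu-<vmid>-<timestamp>.vma.lzo <vmid>
        qm start <vmid>
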
  5. unable to online migrate to host with local ZFS

    Well, that *highly* sucks. I shut down a guest that is not critical, and I still can't migrate it. So far, it looks like all of my guests are stuck on the 2nd node (which is a lower-powered node, so I do not want to leave them there...)
  6. unable to online migrate to host with local ZFS

    I have two hosts; one has a ZFS RAID1 with two NVMe drives. They are both connected to a JBOD via NFS. To upgrade the first host, I did this: move all disks from the nvme pool to the jbod (shared) pool, migrate all guests from host1 to host2, then upgrade and reboot host1. Unfortunately, if I try to...
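
    That upgrade procedure, roughly, in CLI form (VMID and disk key are placeholders; --online applies only to running guests):

        qm move_disk <vmid> scsi0 jbod --delete   # move each disk off the local nvme pool
        qm migrate <vmid> host2 --online          # migrate each guest to the other node
        apt-get dist-upgrade && reboot            # then upgrade and reboot host1
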
  7. Speedy ZFS Backups

    Sounds interesting, please do update us!
  8. Backup hangs after 'creating archive ...' ?

    Could have sworn I installed qemu guest agent. Apparently not. All good :)
  9. Backup hangs after 'creating archive ...' ?

    Ah, a hint. It finally ran, and I saw:
        INFO: creating archive '/mnt/pve/Daily_Backup/dump/vzdump-qemu-110-2017_03_25-22_45_21.vma.lzo'
        ERROR: VM 110 qmp command 'guest-fsfreeze-freeze' failed - unable to connect to VM 110 qga socket - timeout after 35914 retries
        ERROR: VM 110 qmp command...
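
    That qga timeout usually means the VM config has the agent option enabled but no working agent inside the guest (for a Windows guest the agent service ships on the virtio-win ISO). A host-side sketch, assuming that diagnosis:

        qm set 110 --agent 1    # only if the guest agent service is actually installed and running
        qm set 110 --agent 0    # otherwise disable it so vzdump skips guest-fsfreeze-freeze
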
  10. Backup hangs after 'creating archive ...' ?

    Proxmox 4.4. I have a Windows Server 2012 R2 VM cloned from a newly created template. It is on ZFS storage. If I try to back it up, it proceeds to the step where it says 'INFO: creating archive ...', and then sits there. If I stop it, it seems to leave the VM locked, as I can't do anything...
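
    If an aborted backup leaves the guest locked, one way to clear the leftover lock is:

        qm unlock 110    # removes the stale 'backup' lock so the VM can be managed again
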
  11. How does vzdump+snapshot work with zfs storage?

    1. Okay, thanks. 2. I wasn't clear. I used the standard zfspool plugin, so the virtual disks are being created as zvols. I didn't intend to refer to qcow2 at all, sorry... I am in fact running raw on the zvols. I was just curious how they get snapshotted for backups.
  12. How does vzdump+snapshot work with zfs storage?

    So I created a storage of type 'ZFS'. I have confirmed that virtual disks created there (raw or qcow2) are being created as zvols. If I manually take a snapshot of such a guest, it creates a ZFS snapshot on that zvol. That makes sense. If I then back up this guest, and specify 'snapshot'...
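
    One way to inspect what the zfspool plugin actually created, and whether backups leave ZFS snapshots behind (the pool/dataset name is an assumption):

        zfs list -t volume -r rpool/data      # the zvols backing the guest disks
        zfs list -t snapshot -r rpool/data    # any ZFS snapshots that exist on them
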
  13. [SOLVED] POLL -> what backupformat is used? GZIP or LZO?

    Sorry to necro this thread :) I am doing nightly vzdump to a mirrored 2TB pool, so there is plenty of space. I also created an account on rsync.net, so I can monthly send a couple of important VMs to them for offsite backup. I use rsync+ssh for that, and I note that that has a '-z' option...
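
    Since the nightly dumps are already LZO- or gzip-compressed, recompressing them in transit gains little; a sketch of both ends (storage name, paths and remote account are placeholders):

        vzdump <vmid> --storage <backup-storage> --compress lzo    # nightly, compressed at dump time
        rsync -av /path/to/dump/ <user>@rsync.net:backups/         # offsite copy; -z would only recompress .lzo archives
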
  14. Proxmox 4.4 performance with ZFS+NVME

    Yes, I meant to mention that. That's fine, from my POV. My point was that the other two hypervisor solutions sucked for reads as well as writes, due to hypervisor I/O stack limitations, as well as being limited by the LAN connection between the hypervisor and storage appliance. I did try...
  15. Proxmox 4.4 performance with ZFS+NVME

    So I got a couple of Samsung 1TB 960 PRO drives. I tried to use them with ESXi and Xenserver, but performance in both cases sucked. I had created a simple mirror using them. In both cases, I tried using a virtual storage appliance and exporting the ZFS datastore via iSCSI or NFS. I was lucky...
  16. ZFS Mounting Problems

    I think I ran into that when I was trying to use ZFS with Proxmox. The problem (as I recall) is that Proxmox insists on populating the directory with the requisite subdirectories, but then the ZFS mount fails because the directory is not empty :( I seem to recall working around this by doing a...
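
    A sketch of that workaround as recalled (dataset name and mountpoint are placeholders; double-check the paths before deleting anything):

        rm -r /mypool/backup/{dump,images,private,template}   # the subdirectories Proxmox pre-created
        zfs mount mypool/backup                               # now the dataset can mount
        zfs set overlay=on mypool/backup                      # or, if your ZFS-on-Linux version has it, allow mounting over a non-empty dir
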
  17. Poor performance with ZFS

    If you have a raid controller with writeback cache, this is not surprising.
  18. Quorum Windows !

    If you have such a Windows host around, you could install the free StarWind iSCSI target software and use a target there as a quorum disk.
  19. Poor performance with ZFS

    The most dangerous aspect of sync=disabled is that the client guest can issue a write barrier if it is doing metadata-related things on that FS, and won't continue until the write is ACK'ed. But here we are lying and saying 'we wrote it!'. It's entirely possible to cause filesystem corruption...
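
    The property being discussed, for reference (the dataset name is a placeholder):

        zfs get sync tank/vmdata            # see what the dataset currently does
        zfs set sync=standard tank/vmdata   # honor guest flush/barrier requests (default, safe)
        zfs set sync=disabled tank/vmdata   # fast, but ACKs writes before they reach stable storage
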