Hi Wolfgang,
aha, I didn't know that.
I thought it would be possible this way:
take an initial snapshot -> zfs send the snapshot -> suspend the machine -> take a final snapshot -> zfs send the final snapshot incrementally -> resume the machine on the new node.
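A minimal sketch of that sequence, assuming a local disk rpool/data/vm-100-disk-0 and a target node reachable as "target" (snapshot names are just examples):

zfs snapshot rpool/data/vm-100-disk-0@initial
zfs send rpool/data/vm-100-disk-0@initial | ssh target zfs recv rpool/data/vm-100-disk-0
# suspend the machine, then snapshot again and send only the delta
zfs snapshot rpool/data/vm-100-disk-0@final
zfs send -i rpool/data/vm-100-disk-0@initial rpool/data/vm-100-disk-0@final | ssh target zfs recv rpool/data/vm-100-disk-0
# resume the machine on the new node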
Hello @ all,
I am wondering if it is really necessary to use block-level copying on "qm migrate --online --with-local-disks"?
With a 400GB volume, for example, of which only a couple of gigs are actually used (thin provisioning), a block-level copy of course takes much longer than copying only the used data...
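For reference, the command in question would be invoked like this (the VM ID and target node name are placeholders):

qm migrate 100 <targetnode> --online --with-local-disks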
Well, what can I say. Runs flawlessly and is super easy to set up.
If only I had had these two pieces of info a few days ago.
I searched myself silly, from Ubuntu to FreeBSD, until I found the threads mentioned and dcsapak answered, which cleared everything up...
Apparently something is in the works here, so that "zfs over iscsi" via LIO (and thus also on Debian) will be possible:
https://forum.proxmox.com/threads/what-open-source-solutions-are-available-to-use-zfs-over-iscsi-with-proxmox.42461/page-2
Any idea for a "distributed storage" with "thin provisioning"?
An NFS share won't work, since "trim" doesn't work there, so "thin provisioning" isn't really usable.
@mailinglists
Hmm... what is the difference between pve-zsync and znapzend? As I understand it, it also uses ZFS snapshot "send" and "receive". So it would hit the same problem: the live migration would just delete your snapshots.
Hello @ all,
we are experimenting with ZFS live migration and have noticed that migrating to another node deletes all the existing pve-zsync ZFS snapshots, so the next pve-zsync backup will fail.
Any idea if there is an option to keep the snapshots on live migration...
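For reference, a quick way to list the snapshots on a disk before and after a migration to see what got deleted (the dataset name is just an example):

zfs list -t snapshot -o name,creation rpool/data/vm-100-disk-0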
Wolfgang, you are totally right. This really seems to work. THX very much for that!!
Used it like this and it works like a charm:
ssh <user>@<hostip> -o 'BatchMode=yes' -- zfs send rpool/data/vm-100-disk-0@<snapshot> | pv | ssh recvserver zfs recv rpool/data/vm-100-disk-0
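(BatchMode=yes just makes ssh fail instead of prompting for a password, and pv only shows transfer progress; both are optional.)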
Hi Wolfgang, thx for the answer. I think this will not work, because I forgot to mention something.
I need to run the transfer from the recvserver because it is not reachable directly from the sendserver.
sendserver = publicIP
recvserver = internal IP only, with internet access via a router
Hello @all, I would like to do a
root@sendserver:~# zfs send rpool/data/vm-100-disk-0@<snapshot> | ssh recvserver zfs recv rpool/data/vm-100-disk-0
But I need to run that from the receiver side, so I tried
root@recvserver:~# zfs recv rpool/data/vm-100-disk-0 | ssh sendserver zfs send...
@fireon Hi, did you find a solution? Is it now possible to set up "ZFS over iscsi" on Debian 9?
PS: I had a look at your iSCSI guide, but as far as I understand it, that one is for plain iSCSI.