Yes, it is possible using zfs send/receive. Proxmox also has a nice utility to do it:
https://pve.proxmox.com/wiki/PVE-zsync#Sync_a_VM_or_ZFS_dataset_one_time
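For reference, a one-time manual sync with plain zfs send/receive might look roughly like this (the dataset names and target host are just examples, not taken from the original post):

  # take a snapshot of the source dataset
  zfs snapshot rpool/data/vm-100-disk-0@migrate
  # stream the snapshot to the other node and receive it into a pool there
  zfs send rpool/data/vm-100-disk-0@migrate | ssh root@target-node zfs receive tank/data/vm-100-disk-0

pve-zsync basically automates this snapshot/send/receive cycle and adds scheduling and retention on top.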
Hi all.
I have a 3-node Proxmox 5.2 cluster with a Ceph Luminous backend. I use KRBD devices as storage.
In some circumstances (after high load on the server), some KVM machines freeze, regardless of the guest OS.
After stopping the machine from the interface, the RBD device remains mapped. Even...
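In case it helps anyone hitting the same hang, the stale mapping can be inspected and released from the node's shell (the device name is only an example):

  rbd showmapped          # list RBD images still mapped on this node
  rbd unmap /dev/rbd0     # release the mapping left behind by the stopped VM

This is just a cleanup sketch, not a fix for the underlying freeze.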
Hi All.
I confirm that disabling Hardware Offload is the correct solution for this case.
I have been running many pfSense instances with this setting for a while, and everything is OK.
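For anyone looking for the concrete knobs: in pfSense this lives under System > Advanced > Networking (the hardware offload disable checkboxes); on a plain FreeBSD guest roughly the same effect can be had from the shell (interface name is an example):

  # disable checksum/TSO/LRO offloading on the virtio NIC
  ifconfig vtnet0 -rxcsum -txcsum -tso -lro

Treat this as a sketch; the exact options available depend on the guest version.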
Update.
iperf tests from the guest to the host machine show the following:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 5.79.104.162 port...
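(The exact invocation is not in the post; output like the above is typically produced by a plain iperf run along these lines, with <host-ip> as a placeholder:)

  iperf -s                   # on the host: listen on the default TCP port 5001
  iperf -c <host-ip> -t 30   # on the guest: run a 30-second TCP test towards the host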
1) All advice regarding network performance with virtio is to turn off hardware TSO (TCP segmentation offload); see the host-side ethtool sketch below this list.
2) I use e1000 now, but it also delivers only 1/5 of full performance. Also, e1000 is a driver with large overhead; virtio was designed to eliminate that, so it is better to use it, right?
3) I...
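As a host-side counterpart to point 1, offloading can be switched off on the VM's tap interface with ethtool (the interface name tap100i0 is only an example):

  ethtool -K tap100i0 tso off gso off

This is a sketch of the usual approach, not something verified in this particular setup.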
Hi all!
I am using the latest Proxmox 4.1 with all updates installed.
I have several VMs with FreeBSD guests and one VM with Ubuntu 14 (all KVM).
Host system file download speed: 60 MBps.
FreeBSD guest download speed: 2 MBps on a virtio network with TSO enabled, 5-9 MBps with TSO disabled; 12 MBps...
I have the same problem with recent FreeBSD 10.2 guests.
The e1000 driver works well, but it also does not deliver full speed.
Downloading a file from the host system: 60 MBps; from the guest: 5-10 MBps, depending on whether TSO is on or off.
There is no problem with the Ubuntu 14 guest; it gives full speed.