VZDUMP stuck - can't kill vzdump task

Alex24

Hello,

I launched a backup yesterday.
The transfer has finished, but the task is still running.
INFO: status: 100% (88046829568/88046829568), sparse 39% (34989350912), duration 9145, 12/0 MB/s
INFO: transferred 88046 MB in 9145 seconds (9 MB/s)

I tried to stop the job from the GUI.

I also ran "ps aux | grep vzdump" and killed 2 of the 3 processes, but I can't "kill -9" the last one:
root@proxmox-1-3:/var/lib/vz# ps aux | grep vzdump
root 360203 0.0 0.0 202840 36388 ? Ds Jan20 0:17 task UPID:proxmox-1-3:00057F0B:270FDD6C:569F9539:vzdump::root@pam:

I stopped the VM, ran "qm unlock VMID", and tried to kill it again, but I still can't cancel this backup job.

How can I kill this backup job without restarting the hypervisor, please?

Thanks.
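The `Ds` state in the `ps` output above means the task is in uninterruptible sleep ("D"): it is blocked inside the kernel, usually on I/O against a hung backup target, and it ignores all signals, including SIGKILL, until that I/O returns. A minimal sketch of how to confirm this (the PID below is a placeholder; substitute the stuck task's PID, e.g. 360203):

```shell
# A process in "D" (uninterruptible sleep) is blocked inside the kernel,
# typically on I/O to a hung mount; signals, including SIGKILL, are only
# delivered once the blocking syscall returns.
PID=$$                      # placeholder: use the stuck task's PID instead
ps -o pid,stat,wchan:32,comm -p "$PID"
# Field 3 of /proc/<pid>/stat is the one-letter state (R, S, D, Z, ...):
awk '{print $3}' /proc/"$PID"/stat
```

If the state is `D` and `wchan` points at an NFS/CIFS wait, the only non-reboot options are usually to bring the backup target back online or, for NFS, attempt a lazy/forced unmount of the hung mount.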
 
Code:
root@proxmox-1-X:/var# lsof +p 360203
COMMAND    PID USER   FD   TYPE             DEVICE SIZE/OFF     NODE NAME
task    360203 root  cwd    DIR              253,0     4096   974849 /root
task    360203 root  rtd    DIR              253,0     4096        2 /
task    360203 root  txt    REG              253,0    10456  1918075 /usr/bin/perl
task    360203 root  mem    REG               0,19  1048576 60751215 /run/shm/qb-pve2-event-360203-30-data
task    360203 root  mem    REG               0,19  1048576 60751213 /run/shm/qb-pve2-response-360203-30-data
task    360203 root  mem    REG               0,19  1048576 60751211 /run/shm/qb-pve2-request-360203-30-data
task    360203 root  mem    REG               0,19     8252 60751210 /run/shm/qb-pve2-request-360203-30-header
task    360203 root  mem    REG               0,19  1048576 60751148 /run/shm/qb-pve2-event-360202-29-data
task    360203 root  mem    REG               0,19  1048576 60751146 /run/shm/qb-pve2-response-360202-29-data
task    360203 root  mem    REG               0,19  1048576 60751144 /run/shm/qb-pve2-request-360202-29-data
task    360203 root  mem    REG              253,0    10264  1975845 /usr/lib/perl/5.14.2/auto/Sys/Hostname/Hostname.so
task    360203 root  mem    REG              253,0    18680  1975900 /usr/lib/perl/5.14.2/auto/Digest/MD5/MD5.so
task    360203 root  mem    REG              253,0    22952  1884481 /usr/lib/perl/5.14.2/auto/File/Glob/Glob.so
task    360203 root  mem    REG              253,0    55808  2859266 /lib/x86_64-linux-gnu/libpam.so.0.83.0
task    360203 root  mem    REG              253,0    30184  1950232 /usr/lib/perl5/auto/Authen/PAM/PAM.so
task    360203 root  mem    REG              253,0   396184  1884612 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
task    360203 root  mem    REG              253,0   468936  1951522 /usr/lib/perl5/auto/Net/SSLeay/SSLeay.so
task    360203 root  mem    REG              253,0    38272  1950360 /usr/lib/perl5/auto/Crypt/OpenSSL/RSA/RSA.so
task    360203 root  mem    REG              253,0    39208  1950334 /usr/lib/perl5/auto/Crypt/OpenSSL/Bignum/Bignum.so
task    360203 root  mem    REG              253,0  2048512  1884240 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0
task    360203 root  mem    REG              253,0     9616  1950348 /usr/lib/perl5/auto/Crypt/OpenSSL/Random/Random.so
task    360203 root  mem    REG              253,0    35344  1975881 /usr/lib/perl/5.14.2/auto/Data/Dumper/Dumper.so
...
task    360203 root  mem    REG              253,0  1436984  1886405 /usr/lib/x86_64-linux-gnu/libxml2.so.2.8.0
task    360203 root  mem    REG              253,0   249520  1886523 /usr/lib/librrd.so.4.2.0
task    360203 root  mem    REG              253,0    28080  1958104 /usr/lib/perl5/auto/RRDs/RRDs.so
task    360203 root  mem    REG              253,0   136200  1886511 /usr/lib/libqb.so.0.11.1
task    360203 root  mem    REG              253,0    11224  1982620 /usr/lib/perl5/auto/PVE/IPCC/IPCC.so
task    360203 root  mem    REG              253,0   169992  2859239 /lib/x86_64-linux-gnu/libexpat.so.1.6.0
task    360203 root  mem    REG              253,0    85656  1958885 /usr/lib/perl5/auto/XML/Parser/Expat/Expat.so
task    360203 root  mem    REG              253,0    10328  1884478 /usr/lib/perl/5.14.2/auto/attributes/attributes.so
task    360203 root  mem    REG              253,0    56936  1951026 /usr/lib/perl5/auto/JSON/XS/XS.so
task    360203 root  mem    REG              253,0    11856  1951073 /usr/lib/perl5/auto/Linux/Inotify2/Inotify2.so
task    360203 root  mem    REG              253,0    26984  1884476 /usr/lib/perl/5.14.2/auto/List/Util/Util.so
task    360203 root  mem    REG              253,0    31744  2859789 /lib/x86_64-linux-gnu/librt-2.13.so
task    360203 root  mem    REG              253,0    22784  1975889 /usr/lib/perl/5.14.2/auto/Time/HiRes/HiRes.so
task    360203 root  mem    REG              253,0    51992  1975899 /usr/lib/perl/5.14.2/auto/Digest/SHA/SHA.so
task    360203 root  mem    REG              253,0    14448  1975869 /usr/lib/perl/5.14.2/auto/MIME/Base64/Base64.so
task    360203 root  mem    REG              253,0    14472  1884483 /usr/lib/perl/5.14.2/auto/Cwd/Cwd.so
task    360203 root  mem    REG              253,0    84744  1975846 /usr/lib/perl/5.14.2/auto/Storable/Storable.so
task    360203 root  mem    REG              253,0    18704  1884477 /usr/lib/perl/5.14.2/auto/IO/IO.so
task    360203 root  mem    REG              253,0    39432  1975872 /usr/lib/perl/5.14.2/auto/Encode/Encode.so
task    360203 root  mem    REG              253,0    18672  1975844 /usr/lib/perl/5.14.2/auto/Sys/Syslog/Syslog.so
task    360203 root  mem    REG              253,0    35256  1884472 /usr/lib/perl/5.14.2/auto/Socket/Socket.so
task    360203 root  mem    REG              253,0   109888  1884475 /usr/lib/perl/5.14.2/auto/POSIX/POSIX.so
task    360203 root  mem    REG              253,0    18672  1884482 /usr/lib/perl/5.14.2/auto/Fcntl/Fcntl.so
task    360203 root  mem    REG              253,0   363904  1884479 /usr/lib/perl/5.14.2/auto/re/re.so
task    360203 root  mem    REG              253,0    35104  2859782 /lib/x86_64-linux-gnu/libcrypt-2.13.so
task    360203 root  mem    REG              253,0  1599504  2859775 /lib/x86_64-linux-gnu/libc-2.13.so
task    360203 root  mem    REG              253,0   131107  2859790 /lib/x86_64-linux-gnu/libpthread-2.13.so
task    360203 root  mem    REG              253,0   530736  2859769 /lib/x86_64-linux-gnu/libm-2.13.so
task    360203 root  mem    REG              253,0    14768  2859781 /lib/x86_64-linux-gnu/libdl-2.13.so
task    360203 root  mem    REG              253,0  1574680  1884460 /usr/lib/libperl.so.5.14.2
task    360203 root  mem    REG              253,0   136936  2859787 /lib/x86_64-linux-gnu/ld-2.13.so
task    360203 root  mem    REG               0,19     8248 60751214 /run/shm/qb-pve2-event-360203-30-header
task    360203 root  mem    REG               0,19     8248 60751212 /run/shm/qb-pve2-response-360203-30-header
task    360203 root  mem    REG               0,19     8248 60751147 /run/shm/qb-pve2-event-360202-29-header
task    360203 root  mem    REG              253,0  1534672  1884202 /usr/lib/locale/locale-archive
task    360203 root  mem    REG               0,19     8248 60751145 /run/shm/qb-pve2-response-360202-29-header
task    360203 root  mem    REG               0,19     8252 60751143 /run/shm/qb-pve2-request-360202-29-header
task    360203 root    0r  FIFO                0,8      0t0 60751134 pipe
task    360203 root    1w  FIFO                0,8      0t0 60751149 pipe
task    360203 root    2w  FIFO                0,8      0t0 60751149 pipe
task    360203 root    3r   REG              253,0     4537  1919192 /usr/bin/vzdump
task    360203 root    4wW  REG               0,17        0  7334617 /run/vzdump.lock
task    360203 root    5u  unix 0xffff8802d12e0bc0      0t0 60751141 socket
task    360203 root    6u  unix 0xffff881074a140c0      0t0 60751164 socket
task    360203 root    7w  FIFO                0,8      0t0 60751149 pipe
task    360203 root    8r  FIFO                0,8      0t0 60751150 pipe
task    360203 root    9r   REG              253,0    15068  1951343 /usr/share/perl5/Net/LDAP/Constant.pm
task    360203 root   10w   REG              253,0     8572   139837 /var/log/vzdump/qemu-124.log
task    360203 root   11u  unix 0xffff880376fe3800      0t0 60751208 socket
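The `4wW` entry above shows the task holds an exclusive write lock on /run/vzdump.lock, which is why no new backup can start while it hangs. A sketch of how to identify the lock holder (assumes `fuser` and/or `lsof` are installed; both calls are guarded so the snippet degrades gracefully where they are not):

```shell
# Find which PID holds the vzdump lock file (path as in the lsof output).
LOCK=/run/vzdump.lock
command -v fuser >/dev/null && fuser -v "$LOCK" 2>&1 || true
command -v lsof  >/dev/null && lsof "$LOCK"      || true
```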
 
The issue is caused by the backup filesystem/target. I changed my backup target from FTP/Samba/CIFS to NFS, and everything works again without any issue.
 
We had this issue with NFS too, when the NFS server froze or had a network problem.

A possible fix is to change the NFS mount options from "hard" to "soft", like this (in /etc/pve/storage.cfg):
options vers=3,soft,retrans=10
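For context, a sketch of how that line fits into a complete NFS storage definition in /etc/pve/storage.cfg (the storage name, server address, and export path here are hypothetical):

```
nfs: backup-nfs
        export /export/backup
        path /mnt/pve/backup-nfs
        server 192.168.1.50
        content backup
        options vers=3,soft,retrans=10
```

With `soft`, requests that exceed the retransmit limit fail with an I/O error instead of blocking forever; the trade-off is that a transient outage can abort a backup rather than merely pause it.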
 
It works for me with the "hard" option too. Maybe you have a slow network?

These are the options for my NFS mount point:

rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=x.x.x.x,mountvers=3,mountport=37202,mountproto=udp,local_lock=none,addr=x.x.x.x
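A quick way to check which options a mounted NFS export is actually using (negotiated values such as rsize/wsize can differ from what was requested):

```shell
# List live NFS mounts with their effective options; prints a notice
# instead of failing when no NFS mount exists on this machine.
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS || echo "no NFS mounts found"
# nfsstat -m shows the same per-mount detail where available:
command -v nfsstat >/dev/null && nfsstat -m || true
```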
 
Network speed is not the problem. The problem with "hard" is that the kernel waits forever if, for example, the NFS host is down.
 
After every restore, all VMs and even the host GUI hang.
I tried on local-lvm (RAID1) and on an external RAID5 LVM-thin pool; always the same.
pve-manager/4.2-2/725d76f0 (running kernel: 4.4.6-1-pve)
Linux prox01 4.4.6-1-pve #1 SMP Thu Apr 21 11:25:40 CEST 2016 x86_64 GNU/Linux

prox01:~# megaclisas-status
-- Controller information --
-- ID | H/W Model | RAM | Temp | BBU | Firmware
c0 | ServeRAID M5014 SAS/SATA Controller | 256MB | N/A | Good | FW: 12.15.0-0199

-- Array information --
-- ID | Type | Size | Strpsz | Flags | DskCache | Status | OS Path | CacheCade |InProgress
c0u0 | RAID-1 | 278G | 128 KB | RA,WB | Default | Optimal | /dev/sda | None |None
c0u1 | RAID-5 | 4087G | 128 KB | RA,WB | Default | Optimal | /dev/sdb | None |None

-- Disk information --
-- ID | Type | Drive Model | Size | Status | Speed | Temp | Slot ID | LSI Device ID
c0u0p0 | HDD | IBM-ESXSST300MM0006 B56MS0K3SK0L0221B5C5 | 278.4 Gb | Online, Spun Up | 6.0Gb/s | 32C | [252:1] | 8
c0u0p1 | HDD | IBM-ESXSST300MM0006 B56MS0K3SJ4D0221B5C5 | 278.4 Gb | Online, Spun Up | 6.0Gb/s | 32C | [252:0] | 9
c0u1p0 | HDD | IBM-ESXSAL13SEB900 SB35Y4I01EH3SB35SB35SB35 | 837.2 Gb | Online, Spun Up | 6.0Gb/s | 34C | [252:4] | 10
c0u1p1 | HDD | IBM-ESXSAL13SEB900 SB35Y4I033H3SB35SB35SB35 | 837.2 Gb | Online, Spun Up | 6.0Gb/s | 35C | [252:5] | 11
c0u1p2 | HDD | IBM-ESXSAL13SEB900 SB35Y4I01XH3SB35SB35SB35 | 837.2 Gb | Online, Spun Up | 6.0Gb/s | 34C | [252:6] | 12
c0u1p3 | HDD | IBM-ESXSAL13SEB900 SB35Y4I03BH3SB35SB35SB35 | 837.2 Gb | Online, Spun Up | 6.0Gb/s | 34C | [252:7] | 13
c0u1p4 | HDD | IBM-ESXSAL13SEB900 SB35Y4I02JH3SB35SB35SB35 | 837.2 Gb | Online, Spun Up | 6.0Gb/s | 35C | [252:3] | 14
c0u1p5 | HDD | IBM-ESXSAL13SEB900 SB35Y4I02VH3SB35SB35SB35 | 837.2 Gb | Online, Spun Up | 6.0Gb/s | 34C | [252:2] | 15

pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 pve lvm2 a-- 278.34g 15.81g
/dev/sdb1 pve2 lvm2 a-- 4.09t 418.39g

vgs
VG #PV #LV #SN Attr VSize VFree
pve 1 5 0 wz--n- 278.34g 15.81g
pve2 1 14 0 wz--n- 4.09t 418.39g

lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 184.84g 41.34 20.79
root pve -wi-ao---- 69.50g
swap pve -wi-ao---- 8.00g
vm-107-disk-1 pve Vwi-a-tz-- 110.00g data 69.07
vm-107-disk-2 pve Vwi-a-tz-- 30.00g data 1.47
data2 pve2 twi-aotz-- 3.68t 10.15 5.51
snap_vm-113-disk-1_AfterAdmin pve2 Vri---tz-k 40.00g data2 vm-113-disk-1
vm-105-disk-1 pve2 Vwi-a-tz-- 50.00g data2 78.32
vm-105-disk-2 pve2 Vwi-a-tz-- 80.00g data2 74.83
vm-106-disk-1 pve2 Vwi-a-tz-- 70.00g data2 38.60
vm-106-disk-2 pve2 Vwi-a-tz-- 30.00g data2 1.56
vm-113-disk-1 pve2 Vwi-aotz-- 40.00g data2 8.58
vm-625-disk-1 pve2 Vwi-a-tz-- 40.00g data2 27.90
vm-637-disk-1 pve2 Vwi-a-tz-- 40.00g data2 100.00
vm-638-disk-1 pve2 Vwi-a-tz-- 52.00g data2 85.66
vm-724-disk-1 pve2 Vwi-a-tz-- 20.00g data2 40.29
vm-724-disk-2 pve2 Vwi-a-tz-- 100.00g data2 99.62
vm-726-disk-1 pve2 Vwi-a-tz-- 40.00g data2 37.51
vm-990-disk-1 pve2 Vwi-a-tz-- 45.00g data2 74.41
 
It seems LVM-thin hangs when the restore creates the virtual disk. I will try another type of storage.
[..]
It worked with no hang!
It is really related to lvm-thin, and maybe the particular RAID underneath.

data3 is a "directory" type storage on LVM, on a non-thin LV.
/dev/mapper/pve2-data3 on /mnt/data3 type ext4 (rw,relatime,data=ordered)

sdb 8:16 0 4.1T 0 disk
└─sdb1 8:17 0 4.1T 0 part
├─pve2-data2_tmeta 251:1 0 120M 0 lvm
│ └─pve2-data2-tpool 251:3 0 3.7T 0 lvm
│ ├─pve2-data2 251:4 0 3.7T 0 lvm
│ ├─pve2-vm--637--disk--1 251:5 0 40G 0 lvm
│ ├─pve2-vm--113--disk--1 251:6 0 40G 0 lvm
│ ├─pve2-vm--990--disk--1 251:7 0 45G 0 lvm
│ ├─pve2-vm--105--disk--1 251:8 0 50G 0 lvm
│ ├─pve2-vm--105--disk--2 251:9 0 80G 0 lvm
│ ├─pve2-vm--106--disk--1 251:10 0 70G 0 lvm
│ ├─pve2-vm--106--disk--2 251:11 0 30G 0 lvm
│ ├─pve2-vm--625--disk--1 251:12 0 40G 0 lvm
│ ├─pve2-vm--638--disk--1 251:13 0 52G 0 lvm
│ ├─pve2-vm--726--disk--1 251:14 0 40G 0 lvm
│ ├─pve2-vm--724--disk--1 251:20 0 20G 0 lvm
│ ├─pve2-vm--724--disk--2 251:21 0 100G 0 lvm
│ ├─pve2-vm--108--disk--1 251:24 0 12G 0 lvm
│ └─pve2-vm--720--disk--1 251:26 0 46G 0 lvm
├─pve2-data2_tdata 251:2 0 3.7T 0 lvm
│ └─pve2-data2-tpool 251:3 0 3.7T 0 lvm
│ ├─pve2-data2 251:4 0 3.7T 0 lvm
│ ├─pve2-vm--637--disk--1 251:5 0 40G 0 lvm
│ ├─pve2-vm--113--disk--1 251:6 0 40G 0 lvm
│ ├─pve2-vm--990--disk--1 251:7 0 45G 0 lvm
│ ├─pve2-vm--105--disk--1 251:8 0 50G 0 lvm
│ ├─pve2-vm--105--disk--2 251:9 0 80G 0 lvm
│ ├─pve2-vm--106--disk--1 251:10 0 70G 0 lvm
│ ├─pve2-vm--106--disk--2 251:11 0 30G 0 lvm
│ ├─pve2-vm--625--disk--1 251:12 0 40G 0 lvm
│ ├─pve2-vm--638--disk--1 251:13 0 52G 0 lvm
│ ├─pve2-vm--726--disk--1 251:14 0 40G 0 lvm
│ ├─pve2-vm--724--disk--1 251:20 0 20G 0 lvm
│ ├─pve2-vm--724--disk--2 251:21 0 100G 0 lvm
│ ├─pve2-vm--108--disk--1 251:24 0 12G 0 lvm
│ └─pve2-vm--720--disk--1 251:26 0 46G 0 lvm
└─pve2-data3 251:25 0 83.7G 0 lvm /mnt/data3


I restored there and it did not hang!
Then I moved the disk to local-lvm or my new storage-lvm, and it moved with no hang...

Only the combination of vzdump and lvm-thin causes problems!
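A minimal sketch of diagnostics that help confirm an lvm-thin hang like this (commands are guarded so they degrade gracefully on machines without LVM):

```shell
# 1) Any process stuck in uninterruptible sleep (D state) is a strong
#    sign the thin pool or the RAID underneath is blocking I/O:
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
# 2) Watch whether the pool's Data%/Meta% still change between runs;
#    a full thin-pool metadata volume can also freeze the pool:
command -v lvs >/dev/null && \
  lvs -o lv_name,pool_lv,data_percent,metadata_percent || true
```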

Proxmox Virtual Environment 4.2-2/725d76f0
Virtual Machine 720 ('adc' ) on node '-prox01'

restore vma archive: vma extract -v -r /var/tmp/vzdumptmp50170.fifo /mnt/netapp/dump/vzdump-qemu-107-2016_07_28-16_33_28.vma /var/tmp/vzdumptmp50170
CFG: size: 339 name: qemu-server.conf
DEV: dev_id=1 size: 118111600640 devname: drive-ide0
DEV: dev_id=2 size: 32212254720 devname: drive-virtio0
CTIME: Thu Jul 28 16:33:30 2016
Logical volume "vm-107-disk-1" created.
new volume ID is 'local-lvm:vm-107-disk-1'
map 'drive-ide0' to '/dev/pve/vm-107-disk-1' (write zeros = 0)
Logical volume "vm-107-disk-2" created.
new volume ID is 'local-lvm:vm-107-disk-2'
map 'drive-virtio0' to '/dev/pve/vm-107-disk-2' (write zeros = 0)
progress 1% (read 1503264768 bytes, duration 2 sec)
progress 2% (read 3006529536 bytes, duration 10 sec)
[..]
progress 100% (read 150323855360 bytes, duration 537 sec)
total bytes read 150323855360, sparse bytes 68828028928 (45.8%)
space reduction due to 4K zero blocks 0.0568%
TASK OK

Then it hangs.
 
I had the same problem as @bizzarrone, and it cost me a lot of time and headaches.
Restoring a qcow2 image onto lvm-thin hangs and takes forever. That's bad, because the default storage on PVE 4.2 is local-lvm (which is thin), and for new VMs the GUI suggests qcow2.
 
I think you misunderstood me, @robhost. I had the same problem as @bizzarrone: the restore process hangs when I restore a qcow2 KVM backup onto an lvm-thin datastore. No network share involved, just local to local.

Other users seem to have the same problem: https://forum.proxmox.com/threads/lvm-thin-and-migrate.26680/
 

I have the same problem here. All my Linux VMs back up fine, but the one Windows Server VM has the same issue as yours: after 100% the task hangs and the process can't be killed. No matter which kill command I try, the vzdump process simply ignores it. I have to hard-reset the server; even the restart function in the PVE interface doesn't complete.

INFO: starting new backup job: vzdump 105 --storage b002 --remove 0 --compress lzo --mode snapshot --node sv-main
INFO: Starting Backup of VM 105 (qemu)
INFO: Backup started at 2020-01-13 23:58:58
INFO: status = running
INFO: update VM 105: -lock backup
INFO: VM Name: vs-winmain
INFO: include disk 'sata0' 'local-lvm:vm-105-disk-1' 128G
/dev/sdb: open failed: No medium found
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/b002/dump/vzdump-qemu-105-2020_01_13-23_58_58.vma.lzo'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '17c67f33-8b72-4c6f-8ba1-3ff4cc200666'
INFO: status: 0% (737476608/137438953472), sparse 0% (145170432), duration 3, read/write 245/197 MB/s
INFO: status: 1% (1440677888/137438953472), sparse 0% (174292992), duration 14, read/write 63/61 MB/s
INFO: status: 2% (2758148096/137438953472), sparse 0% (190173184), duration 36, read/write 59/59 MB/s
INFO: status: 3% (4130275328/137438953472), sparse 0% (290803712), duration 62, read/write 52/48 MB/s
INFO: status: 4% (5503057920/137438953472), sparse 0% (302788608), duration 86, read/write 57/56 MB/s
INFO: status: 5% (6903627776/137438953472), sparse 0% (311508992), duration 105, read/write 73/73 MB/s
INFO: status: 6% (8253145088/137438953472), sparse 0% (324284416), duration 127, read/write 61/60 MB/s
INFO: status: 7% (9628221440/137438953472), sparse 0% (349765632), duration 152, read/write 55/53 MB/s
INFO: status: 8% (11540168704/137438953472), sparse 0% (1150672896), duration 166, read/write 136/79 MB/s
INFO: status: 9% (12467830784/137438953472), sparse 0% (1208578048), duration 174, read/write 115/108 MB/s
INFO: status: 10% (13881376768/137438953472), sparse 0% (1275293696), duration 188, read/write 100/96 MB/s
INFO: status: 11% (15146287104/137438953472), sparse 0% (1362329600), duration 201, read/write 97/90 MB/s
INFO: status: 12% (16527654912/137438953472), sparse 1% (1398587392), duration 211, read/write 138/134 MB/s
INFO: status: 13% (17881235456/137438953472), sparse 1% (1398833152), duration 219, read/write 169/169 MB/s
INFO: status: 14% (19274399744/137438953472), sparse 1% (1399894016), duration 227, read/write 174/174 MB/s
INFO: status: 15% (20630798336/137438953472), sparse 1% (1399914496), duration 242, read/write 90/90 MB/s
INFO: status: 16% (22027239424/137438953472), sparse 1% (1399926784), duration 263, read/write 66/66 MB/s
INFO: status: 17% (23402774528/137438953472), sparse 1% (1399926784), duration 285, read/write 62/62 MB/s
INFO: status: 18% (24764153856/137438953472), sparse 1% (1457885184), duration 518, read/write 5/5 MB/s
INFO: status: 19% (26172850176/137438953472), sparse 1% (2471567360), duration 541, read/write 61/17 MB/s
INFO: status: 22% (30324293632/137438953472), sparse 4% (6623010816), duration 544, read/write 1383/0 MB/s
INFO: status: 25% (34415575040/137438953472), sparse 7% (10714292224), duration 547, read/write 1363/0 MB/s
INFO: status: 27% (38437650432/137438953472), sparse 10% (14736367616), duration 550, read/write 1340/0 MB/s
INFO: status: 31% (42749263872/137438953472), sparse 13% (19047981056), duration 553, read/write 1437/0 MB/s
INFO: status: 34% (46805483520/137438953472), sparse 16% (23104200704), duration 556, read/write 1352/0 MB/s
INFO: status: 37% (50900566016/137438953472), sparse 19% (27199283200), duration 559, read/write 1365/0 MB/s
INFO: status: 40% (55154835456/137438953472), sparse 22% (31453552640), duration 562, read/write 1418/0 MB/s
INFO: status: 43% (59408908288/137438953472), sparse 25% (35707625472), duration 565, read/write 1418/0 MB/s
INFO: status: 46% (63592202240/137438953472), sparse 29% (39890919424), duration 568, read/write 1394/0 MB/s
INFO: status: 49% (67898638336/137438953472), sparse 32% (44197355520), duration 571, read/write 1435/0 MB/s
INFO: status: 52% (71888338944/137438953472), sparse 35% (48187056128), duration 574, read/write 1329/0 MB/s
INFO: status: 55% (75894030336/137438953472), sparse 37% (52192747520), duration 577, read/write 1335/0 MB/s
INFO: status: 58% (79891660800/137438953472), sparse 40% (56190377984), duration 580, read/write 1332/0 MB/s
INFO: status: 61% (84009353216/137438953472), sparse 43% (60308070400), duration 583, read/write 1372/0 MB/s
INFO: status: 64% (88300781568/137438953472), sparse 47% (64599498752), duration 586, read/write 1430/0 MB/s
INFO: status: 67% (92413624320/137438953472), sparse 49% (68712341504), duration 589, read/write 1370/0 MB/s
INFO: status: 70% (96612319232/137438953472), sparse 53% (72911036416), duration 592, read/write 1399/0 MB/s
INFO: status: 73% (100745216000/137438953472), sparse 56% (77043933184), duration 595, read/write 1377/0 MB/s
INFO: status: 76% (104715780096/137438953472), sparse 58% (81014497280), duration 598, read/write 1323/0 MB/s
INFO: status: 79% (108946718720/137438953472), sparse 62% (85245435904), duration 601, read/write 1410/0 MB/s
INFO: status: 82% (113136369664/137438953472), sparse 65% (89435086848), duration 604, read/write 1396/0 MB/s
INFO: status: 85% (117172273152/137438953472), sparse 68% (93470990336), duration 607, read/write 1345/0 MB/s
INFO: status: 88% (121047351296/137438953472), sparse 70% (97346068480), duration 610, read/write 1291/0 MB/s
INFO: status: 91% (125221404672/137438953472), sparse 73% (101520121856), duration 613, read/write 1391/0 MB/s
INFO: status: 94% (129479278592/137438953472), sparse 76% (105777995776), duration 616, read/write 1419/0 MB/s
INFO: status: 97% (133371789312/137438953472), sparse 79% (109670506496), duration 619, read/write 1297/0 MB/s
INFO: status: 99% (137437839360/137438953472), sparse 82% (113736556544), duration 622, read/write 1355/0 MB/s
INFO: status: 100% (137438953472/137438953472), sparse 82% (113737666560), duration 623, read/write 1/0 MB/s
INFO: transferred 137438 MB in 623 seconds (220 MB/s)



I'm running a PVE 6.1-5 host. The backup volume is a Windows PC with an SMB share. I don't know why this Windows VM won't back up cleanly.
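Since the log shows the guest-agent fs-freeze/fs-thaw commands being issued, one thing worth checking is whether the agent inside the Windows VM is still responsive; a guest stuck around freeze/thaw can keep the backup task from finishing. A sketch (VMID 105 as in the log above; requires the QEMU guest agent installed in the guest, and must run on the PVE host itself):

```shell
# Guarded so the snippet only runs where the PVE CLI exists.
VMID=105   # VMID from the backup log above
if command -v qm >/dev/null; then
  qm agent "$VMID" ping             # is the agent reachable at all?
  qm agent "$VMID" fsfreeze-status  # is the guest stuck in "frozen"?
else
  echo "qm not available on this machine"
fi
```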
 
