Problem with backups: ever-growing .dat file

alain

Hi all,

For the past few days, I have had a problem with my KVM VM backups on an NFS server. For one VM or another, I see an ever-growing .dat file that fills my storage (up to 800 GB in the preceding days).

For example, this morning I see this:
-rw-r--r-- 1 nobody nogroup 1.1K 2010-06-21 03:40 vzdump-qemu-107-2010_06_21-02_01_56.log
-rw-r--r-- 1 nobody nogroup 58G 2010-06-21 03:40 vzdump-qemu-107-2010_06_21-02_01_56.tar
-rw-r--r-- 1 nobody nogroup 1.6K 2010-06-22 18:21 vzdump-qemu-107-2010_06_22-02_01_27.log
-rw-r--r-- 1 nobody nogroup 1.6K 2010-06-23 17:36 vzdump-qemu-107-2010_06_23-02_01_28.log
-rw-r--r-- 1 nobody nogroup 1.7K 2010-06-24 22:23 vzdump-qemu-107-2010_06_24-02_01_57.log
-rw-r--r-- 1 nobody nogroup 232G 2010-06-25 08:37 vzdump-qemu-107-2010_06_25-02_01_58.dat

The .dat file for VM 107 is 232 GB and continuously growing. What I did in the preceding days was to remove the .dat file (otherwise it would end up filling the storage), kill the vmtar processes on the host server where the backup is configured, and even reboot the server last evening, but it is the same this morning.
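
For reference, the cleanup described above amounts to roughly the following (a sketch only, assuming the backup storage is mounted at /mnt/pve/hertz_backup as in the listing above):

Code:
# stop the stuck backup pipeline (vmtar and the cstream rate limiter)
pkill -f vmtar
pkill -f cstream

# remove the partial .dat file and the leftover temporary directory
rm /mnt/pve/hertz_backup/vzdump-qemu-107-*.dat
rm -r /mnt/pve/hertz_backup/vzdump-qemu-107-*.tmp

# if an LVM snapshot (vzsnap-*) was left behind, it may also need to be removed with lvremove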

I am using Proxmox VE 1.5 with kernel 2.6.32-2. Could anyone give me advice on how to fix this problem?

Further information about when this problem occurred: I had tried to improve the efficiency of my backups after adding some more VMs to back up, as they were taking quite some time (more than 2 hours each night) to complete. I tried selecting 'compress files' in the web interface, but was not satisfied (it took at least as much time), tried modifying the network topology (connecting the NFS server to the same switch as the host server), etc. The problem appeared after this change, but I reverted to the previous setup with no result.

Alain
 
Before someone asks, I looked in syslog and saw only this for VM 107:
Jun 25 02:01:58 srv-kvm1 vzdump[8631]: INFO: Starting Backup of VM 107 (qemu)

And the vzdump processes I see are these:
# ps aux |grep vzdump
root 8631 0.0 0.0 49392 13668 ? Ss 02:00 0:00 /usr/bin/perl -w /usr/sbin/vzdump --quiet --node 3 --snapshot --storage hertz_backup --mailto xxxx.xxxx@xxxxx.xx 106 107 109 110 112
root 8906 0.0 0.0 8836 1128 ? S 02:01 0:00 sh -c /usr/lib/qemu-server/vmtar '/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_25-02_01_58.tmp/qemu-server.conf' 'qemu-server.conf' '/mnt/vzsnap0/images/107/vm-107-disk-1.raw' 'vm-disk-scsi0.raw' '/mnt/vzsnap0/images/107/vm-107-disk-3.raw' 'vm-disk-scsi1.raw' |cstream -t 10485760 >/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_25-02_01_58.dat
root 8907 7.8 0.0 7084 3892 ? S 02:01 33:56 /usr/lib/qemu-server/vmtar /mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_25-02_01_58.tmp/qemu-server.conf qemu-server.conf /mnt/vzsnap0/images/107/vm-107-disk-1.raw vm-disk-scsi0.raw /mnt/vzsnap0/images/107/vm-107-disk-3.raw vm-disk-scsi1.raw

The vzdump version is:
vzdump: 1.2-5

Alain
 
I have uploaded a fix for vmtar (it is in package qemu-server_1.1-15_amd64.deb) - just update with:

# apt-get update
# apt-get install

Does that solve the problem?
 
I think it should be apt-get upgrade (not install). I now have:
qemu-server: 1.1-15
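
In other words, the update sequence that applies the vmtar fix looks roughly like this (a sketch, assuming the standard Proxmox VE 1.x package repositories are configured):

Code:
# refresh the package lists and pull in the updated packages,
# including qemu-server 1.1-15 which contains the vmtar fix
apt-get update
apt-get upgrade

# confirm that the new qemu-server version is installed
pveversion -v | grep qemu-server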

To test, as I did not want to wait for tonight's backup, I copied part of the command from /etc/cron.d/vzdump and executed it on the host:
vzdump --quiet --node 3 --snapshot --storage hertz_backup 107

But what I currently see on the NFS server looks much the same:
-rw-r--r-- 1 nobody nogroup 58G 2010-06-21 03:40 vzdump-qemu-107-2010_06_21-02_01_56.tar
-rw-r--r-- 1 nobody nogroup 1.6K 2010-06-22 18:21 vzdump-qemu-107-2010_06_22-02_01_27.log
-rw-r--r-- 1 nobody nogroup 1.6K 2010-06-23 17:36 vzdump-qemu-107-2010_06_23-02_01_28.log
-rw-r--r-- 1 nobody nogroup 1.7K 2010-06-24 22:23 vzdump-qemu-107-2010_06_24-02_01_57.log
-rw-r--r-- 1 nobody nogroup 1.6K 2010-06-25 10:48 vzdump-qemu-107-2010_06_25-02_01_58.log
-rw-r--r-- 1 nobody nogroup 8.2G 2010-06-25 14:10 vzdump-qemu-107-2010_06_25-13_55_50.dat
drwxr-xr-x 2 nobody nogroup 29 2010-06-25 13:55 vzdump-qemu-107-2010_06_25-13_55_50.tmp

I don't know whether it is normal for it to be writing a .dat file (I don't watch the backup process at night), or whether it will end up producing a tar file.
Perhaps I need to restart some service before executing the vzdump command?

So, I am not sure the problem is solved...

Alain
 
OK, problem solved. In the end, it produced a tar archive:

# cat vzdump-qemu-107-2010_06_25-13_55_50.log
jun 25 13:55:50 INFO: Starting Backup of VM 107 (qemu)
jun 25 13:55:50 INFO: running
jun 25 13:55:50 INFO: status = running
jun 25 13:55:50 INFO: backup mode: snapshot
jun 25 13:55:50 INFO: bandwidth limit: 10240 KB/s
jun 25 13:55:51 INFO: Logical volume "vzsnap-srv-kvm1-0" created
jun 25 13:55:51 INFO: creating archive '/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_25-13_55_50.tar'
jun 25 13:55:51 INFO: adding '/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_25-13_55_50.tmp/qemu-server.conf' to archive ('qemu-server.conf')
jun 25 13:55:51 INFO: adding '/mnt/vzsnap0/images/107/vm-107-disk-1.raw' to archive ('vm-disk-scsi0.raw')
jun 25 14:09:46 INFO: adding '/mnt/vzsnap0/images/107/vm-107-disk-3.raw' to archive ('vm-disk-scsi1.raw')
jun 25 15:33:59 INFO: Total bytes written: 61743682048 (10.00 MiB/s)
jun 25 15:34:26 INFO: archive file size: 57.50GB
jun 25 15:34:26 INFO: delete old backup '/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_21-02_01_56.tar'
jun 25 15:34:29 INFO: Logical volume "vzsnap-srv-kvm1-0" successfully removed
jun 25 15:34:29 INFO: Finished Backup of VM 107 (01:38:39)

Thanks a lot!

Alain
 
Update: last night, the backup of VM 107 seems to have failed a few hundred MB short of completion:

-rw-r--r-- 1 nobody nogroup 997 2010-06-26 02:01 vzdump-qemu-106-2010_06_26-02_00_02.log
-rw-r--r-- 1 nobody nogroup 864076288 2010-06-26 02:01 vzdump-qemu-106-2010_06_26-02_00_02.tar
-rw-r--r-- 1 nobody nogroup 1103 2010-06-25 15:34 vzdump-qemu-107-2010_06_25-13_55_50.log
-rw-r--r-- 1 nobody nogroup 61743682048 2010-06-25 15:34 vzdump-qemu-107-2010_06_25-13_55_50.tar
-rw-r--r-- 1 nobody nogroup 61376987136 2010-06-26 03:39 vzdump-qemu-107-2010_06_26-02_01_57.dat
drwxr-xr-x 2 nobody nogroup 29 2010-06-26 02:01 vzdump-qemu-107-2010_06_26-02_01_57.tmp

The .dat file does not seem to be growing anymore; perhaps the stream has stopped?

As a result, other backups have not been performed.

On the host, I see:
# ps aux |grep vzdump
root 14504 0.0 0.0 49392 13664 ? Ss 02:00 0:00 /usr/bin/perl -w /usr/sbin/vzdump --quiet --node 3 --snapshot --storage hertz_backup --mailto xxxx.xxxx@xxxx.xx 106 107 109 110 112
root 14766 0.0 0.0 8836 1124 ? S 02:01 0:00 sh -c /usr/lib/qemu-server/vmtar '/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_26-02_01_57.tmp/qemu-server.conf' 'qemu-server.conf' '/mnt/vzsnap0/images/107/vm-107-disk-1.raw' 'vm-disk-scsi0.raw' '/mnt/vzsnap0/images/107/vm-107-disk-3.raw' 'vm-disk-scsi1.raw' |cstream -t 10485760 >/mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_26-02_01_57.dat
root 14767 79.5 0.0 7084 3888 ? R 02:01 363:21 /usr/lib/qemu-server/vmtar /mnt/pve/hertz_backup/vzdump-qemu-107-2010_06_26-02_01_57.tmp/qemu-server.conf qemu-server.conf /mnt/vzsnap0/images/107/vm-107-disk-1.raw vm-disk-scsi0.raw /mnt/vzsnap0/images/107/vm-107-disk-3.raw vm-disk-scsi1.raw

Is there still something I need to do to fix this problem?

Alain
 
What are your pveperf results? And please post detailed info about your disk/RAID setup.
 
The NFS server is a Dell PE2900, which is currently also a cluster node (I plan to migrate its VMs to another server and use it only for backups). The backup storage itself is four 1 TB SATA drives in RAID 5.

Proxmox itself is installed on this node on a RAID 10 volume:
hertz:/backup/vm# pveperf
CPU BOGOMIPS: 12712.69
REGEX/SECOND: 532189
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 146.92 MB/sec
AVERAGE SEEK TIME: 10.12 ms
FSYNCS/SECOND: 2472.45
DNS EXT: 32.06 ms

The master node, from which I run the backups, is a Dell PE R710 with six 500 GB Near Line SATA drives in RAID 10:
srv-kvm1:~# pveperf
CPU BOGOMIPS: 72351.86
REGEX/SECOND: 862080
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 293.95 MB/sec
AVERAGE SEEK TIME: 7.76 ms
FSYNCS/SECOND: 2717.89
DNS EXT: 33.45 ms
DNS INT: 2.30 ms (xxx.xxxx.xx)

Alain
 
These numbers look fine.
 
Yes, it is a compromise between performance and price. You could achieve better performance with fast SAS drives, but with less storage available and at a higher cost. With a good RAID controller (PERC 5/i and PERC 6/i in these cases) and SATA or Near Line SAS drives (SATA with a SAS interface) in RAID 10, you get rather good results, better capacity and a reasonable cost (and I think it is fairly reliable).

I don't think the problem lies there.

Is it normal that the backup takes so long? For other backups, it takes a minute or so. I had imagined that vzdump used rsync internally and copied only the differences between the raw image files and the last backup?

Alain
 
This evening, at 19:42, I killed the vzdump process on the host. In the log file I see this:
Jun 26 02:15:53 INFO: adding '/mnt/vzsnap0/images/107/vm-107-disk-3.raw' to archive ('vm-disk-scsi1.raw')
Jun 26 19:42:49 INFO: received signal - terminate process

So it began adding disk-3.raw to the backup archive at 2 AM and still had not finished at 19:42.
The raw file is big, 50 GB. I wonder if there could be a problem with NFS. I backed up the remaining VMs manually and successfully, but their raw files are not that big...

Alain
 
Last night, all backups went fine. We'll have to see how the next night goes...

What I noticed when doing the backups manually for each VM is that it takes a very long time to add each raw file to the backup archive (about 1 hour for a 60 GB file). At the end, it replaces the old archive. So rsync is not used in this process.

The backups took almost 3 hours last night. That is annoying, as I would like to add more VMs to the backups, including big ones (more than 100 GB...).

I also noticed that the bandwidth is limited to 10 MB/s, which I understand, but I wonder if it could be increased, as I have a gigabit network.

In the end, I would prefer to have backups done with rsync, as it would take much less time to complete. I think that was the case (it took a minute or two) with single raw files, or qcow2 files...

Alain
 
I also noticed that the bandwidth is limited to 10 MB/s, which I understand, but I wonder if it could be increased, as I have a gigabit network.

Just define your own defaults: create a vzdump.conf file with the needed settings (use the forum search and read the vzdump man pages).
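
For example, a minimal vzdump.conf raising that limit might look like the following (a sketch only; the path /etc/vzdump.conf and the chosen value are assumptions, and bwlimit is given in KB/s):

Code:
# /etc/vzdump.conf - site-wide defaults for vzdump
# raise the bandwidth limit from the 10240 KB/s seen in the logs above to ~50 MB/s
bwlimit: 51200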

In the end, I would prefer to have backups done with rsync, as it would take much less time to complete. I think that was the case (it took a minute or two) with single raw files, or qcow2 files...

If you want backups with rsync, use rsync. vzdump is probably not the right tool for you; maybe a tool like BackupPC is better suited to your requirements.
 
Just define your own defaults: create a vzdump.conf file with the needed settings (use the forum search and read the vzdump man pages).

Thanks for the hint, I will look at that.

If you want backups with rsync, use rsync. vzdump is probably not the right tool for you; maybe a tool like BackupPC is better suited to your requirements.
Yes, I can perhaps set up my own backups using rsync, even if they are not handled via the web GUI. I don't know if BackupPC is the right tool for that.

The good news is that all backups also went fine last night, so we can consider the original problem solved. The problem I had a few days ago with the .dat file not being completed and converted into a tar archive was perhaps due to an NFS/network problem?

Thanks for your help.

Alain
 
I have uploaded a fix for vmtar (it is in package qemu-server_1.1-15_amd64.deb) - just update with:

# apt-get update
# apt-get install

Does that solve the problem?

Hi Dietmar,

After this update, the backup of one of my VMs fails. This is the only VM with two raw disks.
Backups of my other VMs are running fine (a Linux VM and a Windows 7 VM).

VM:
Windows SBS 2008 R2 with two raw drives (60 GB and 200 GB).
Storage is an NFS share on a Windows server.

backup log:
Code:
Jun 28 12:47:42 INFO: Starting Backup of VM 102 (qemu)
Jun 28 12:47:42 INFO: running
Jun 28 12:47:42 INFO: status = running
Jun 28 12:47:43 INFO: backup mode: snapshot
Jun 28 12:47:43 INFO: bandwidth limit: 20000 KB/s
Jun 28 12:47:43 INFO:   Logical volume "vzsnap-BigOne-0" created
Jun 28 12:47:43 INFO: creating archive '/mnt/pve/VMBackup/vzdump-qemu-102-2010_06_28-12_47_42.tgz'
Jun 28 12:47:43 INFO: adding '/mnt/temp/vzdumptmp7682/qemu-server.conf' to archive ('qemu-server.conf')
Jun 28 12:47:43 INFO: adding '/mnt/vzsnap0/images/102/vm-102-disk-1.raw' to archive ('vm-disk-virtio0.raw')
Jun 28 13:38:11 INFO: adding '/mnt/vzsnap0/images/102/vm-102-disk-2.raw' to archive ('vm-disk-virtio1.raw')
Jun 28 13:55:50 INFO:   Logical volume "vzsnap-BigOne-0" successfully removed
Jun 28 13:55:50 ERROR: Backup of VM 102 failed - interrupted by signal

pveversion
Code:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.32-2-pve
proxmox-ve-2.6.32: 1.5-7
pve-kernel-2.6.24-8-pve: 2.6.24-16
pve-kernel-2.6.32-1-pve: 2.6.32-4
pve-kernel-2.6.32-2-pve: 2.6.32-7
qemu-server: 1.1-15
pve-firmware: 1.0-5
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-1pve11
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.4-1
ksm-control-daemon: 1.0-3

pveperf
Code:
CPU BOGOMIPS:      21333.96
REGEX/SECOND:      888157
HD SIZE:           49.22 GB (/dev/mapper/pve-root)
BUFFERED READS:    207.08 MB/sec
AVERAGE SEEK TIME: 7.96 ms
FSYNCS/SECOND:     3135.45
DNS EXT:           1055.52 ms
DNS INT:           1045.24 ms (local)

vzdump.conf
Code:
dumpdir: /mnt/pve/VMBackup
tmpdir: /mnt/temp
mode: snapshot
bwlimit: 20000
maxfiles: 52

How can I get more detail about what might be causing this problem?

Kind regards
B.
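
One way to get more detail — a sketch, not an official procedure — is to run the backup in the foreground (without --quiet) and follow syslog in a second terminal, which is essentially what is done in the follow-up posts below:

Code:
# run the backup interactively so vzdump's INFO/ERROR lines appear on the terminal
vzdump --compress 102

# in a second shell, watch for NFS, LVM and kernel messages while the backup runs
tail -f /var/log/syslog

# after a failure, check for kernel-level I/O or hung-task messages
dmesg | tail -n 50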
 
That error message looks quite strange:

Code:
Jun 28 13:55:50 ERROR: Backup of VM 102 failed - interrupted by signal

Especially because the interrupt is received after creating the archive.
Please test again - is that error reproducible?
 
That error message looks quite strange:

Code:
Jun 28 13:55:50 ERROR: Backup of VM 102 failed - interrupted by signal
Especially because the interrupt is received after creating the archive.
Please test again - is that error reproducible?

Thx Dietmar,

It is reproducible. I tried it many times yesterday but always received the same error. I even rebooted the whole server.
I usually use the following command in a terminal:
vzdump --compress <VMID>

I will run some more tests and report back.
In the meantime: how can I back up this VM without vzdump? (It is my production system and I do not have a valid backup right now.)
Just copy the raw files and the config file?

Kind regards
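
Regarding the question above about backing up without vzdump: a rough sketch of such a manual copy, assuming the Proxmox VE 1.x default paths (/etc/qemu-server/<vmid>.conf for the configuration and /var/lib/vz/images/<vmid>/ for the disk images) and that the VM is shut down first so the raw images are consistent:

Code:
# shut the VM down so the disk images do not change while they are copied
qm shutdown 102

# copy the VM configuration and both raw disks to the backup share
cp /etc/qemu-server/102.conf /mnt/pve/VMBackup/
cp /var/lib/vz/images/102/vm-102-disk-1.raw /mnt/pve/VMBackup/
cp /var/lib/vz/images/102/vm-102-disk-2.raw /mnt/pve/VMBackup/

# start the VM again once the copy has finished
qm start 102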
 
Hi Dietmar,

This is the output from syslog.
This is the backup of my Windows 7 VM (40 GB raw disk) without the --compress option.


Code:
Jun 29 11:43:22  vzdump 24575  INFO: starting new backup job: vzdump 105 
Jun 29 11:43:22  vzdump 24575  INFO: Starting Backup of VM 105 (qemu) 
Jun 29 11:43:22  kernel   kjournald starting. Commit interval 5 seconds 
Jun 29 11:43:22  kernel   EXT3 FS on dm-3, internal journal 
Jun 29 11:43:22  kernel   EXT3-fs: mounted filesystem with ordered data mode. 
Jun 29 11:43:33  pvedaemon 24424  WARNING: Cannot encode 'meminfo' element as 'hash'. Will be encoded as 'map' instead 
Jun 29 11:43:37  proxwww 24629  Starting new child 24629 
Jun 29 11:43:49  proxwww 24632  Starting new child 24632 
Jun 29 11:44:59  proxwww 24701  Starting new child 24701 
Jun 29 11:45:11  proxwww 24710  Starting new child 24710 
Jun 29 11:46:30  proxwww 24791  Starting new child 24791 
Jun 29 11:46:36  proxwww 24797  Starting new child 24797 
Jun 29 11:48:03  proxwww 24867  Starting new child 24867 
Jun 29 11:48:04  proxwww 24869  Starting new child 24869 
Jun 29 11:49:30  proxwww 24950  Starting new child 24950 
Jun 29 11:49:31  proxwww 24953  Starting new child 24953 
Jun 29 11:49:31  proxwww 24954  Starting new child 24954 
Jun 29 11:49:40  kernel   INFO: task kswapd0:52 blocked for more than 120 seconds. 
Jun 29 11:49:40  kernel   "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
Jun 29 11:49:40  kernel   kswapd0 D 0000000000000000 0 52 2 0x00000000 
Jun 29 11:49:40  kernel   ffff88021541d750 0000000000000046 0000000000000000 0000000000000000 
Jun 29 11:49:40  kernel   0000000000000004 ffff88021347b000 ffff8802136afc00 0000000000000010 
Jun 29 11:49:40  kernel   ffff88021541d700 000000000000fb08 ffff88021541dfd8 ffff88021651dbc0 
Jun 29 11:49:40  kernel   Call Trace: 
Jun 29 11:49:40  kernel   [<ffffffff8144290a>] ? __map_bio+0xda/0x150 
Jun 29 11:49:40  kernel   [<ffffffff8156de02>] io_schedule+0x52/0x70 
Jun 29 11:49:40  kernel   [<ffffffffa02e949e>] nfs_wait_bit_uninterruptible+0xe/0x20 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffff8156e662>] __wait_on_bit+0x62/0x90 
Jun 29 11:49:40  kernel   [<ffffffffa02e9490>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffffa02e9490>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffff8156e709>] out_of_line_wait_on_bit+0x79/0x90 
Jun 29 11:49:40  kernel   [<ffffffff81085be0>] ? wake_bit_function+0x0/0x50 
Jun 29 11:49:40  kernel   [<ffffffffa02e947f>] nfs_wait_on_request+0x2f/0x40 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffffa02eea93>] nfs_sync_mapping_wait+0x113/0x260 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffffa02eec6b>] nfs_wb_page+0x8b/0xf0 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffffa02ddb00>] nfs_release_page+0x60/0x80 [nfs] 
Jun 29 11:49:40  kernel   [<ffffffff810f3592>] try_to_release_page+0x32/0x60 
Jun 29 11:49:40  kernel   [<ffffffff81101b4d>] shrink_page_list+0x57d/0x840 
Jun 29 11:49:40  kernel   [<ffffffff8113d2d3>] ? mem_cgroup_del_lru_list+0x23/0xb0 
Jun 29 11:49:40  kernel   [<ffffffff8113d3d9>] ? mem_cgroup_del_lru+0x39/0x40 
Jun 29 11:49:40  kernel   [<ffffffff811010a8>] ? isolate_pages_global+0x198/0x290 
Jun 29 11:49:40  kernel   [<ffffffff8110247b>] shrink_list+0x2fb/0x8d0 
Jun 29 11:49:40  kernel   [<ffffffff81102dfa>] shrink_zone+0x3aa/0x550 
Jun 29 11:49:40  kernel   [<ffffffff81103cbd>] kswapd+0x70d/0x800 
Jun 29 11:49:40  kernel   [<ffffffff81100f10>] ? isolate_pages_global+0x0/0x290 
Jun 29 11:49:40  kernel   [<ffffffff81085ba0>] ? autoremove_wake_function+0x0/0x40 
Jun 29 11:49:40  kernel   [<ffffffff811035b0>] ? kswapd+0x0/0x800 
Jun 29 11:49:40  kernel   [<ffffffff811035b0>] ? kswapd+0x0/0x800 
Jun 29 11:49:40  kernel   [<ffffffff810857f6>] kthread+0x96/0xb0 
Jun 29 11:49:40  kernel   [<ffffffff8101422a>] child_rip+0xa/0x20 
Jun 29 11:49:40  kernel   [<ffffffff81085760>] ? kthread+0x0/0xb0 
Jun 29 11:49:40  kernel   [<ffffffff81014220>] ? child_rip+0x0/0x20 
Jun 29 11:50:01  cron 24983  (root) CMD (test -x /usr/lib/atsar/atsa1 && /usr/lib/atsar/atsa1) 
Jun 29 11:50:56  proxwww 25038  Starting new child 25038 
Jun 29 11:51:01  proxwww 25043  Starting new child 25043 
Jun 29 11:53:58  proxwww 25138  Starting new child 25138 
Jun 29 11:54:36  proxwww 25145  Starting new child 25145 
Jun 29 11:57:17  proxwww 25234  Starting new child 25234 
Jun 29 11:57:20  proxwww 25238  Starting new child 25238 
Jun 29 11:57:26  proxwww 25244  Starting new child 25244 
Jun 29 11:59:16  proxwww 25321  Starting new child 25321 
Jun 29 12:00:01  cron 25354  (root) CMD (test -x /usr/lib/atsar/atsa1 && /usr/lib/atsar/atsa1) 
Jun 29 12:00:02  proxwww 25361  Starting new child 25361 
Jun 29 12:00:41  proxwww 25399  Starting new child 25399 
Jun 29 12:01:26  proxwww 25450  Starting new child 25450 
Jun 29 12:03:00  proxwww 25498  Starting new child 25498 
Jun 29 12:04:33  proxwww 25549  Starting new child 25549 
Jun 29 12:06:58  proxwww 25605  Starting new child 25605 
Jun 29 12:08:05  vzdump 24575  INFO: Finished Backup of VM 105 (00:24:43) 
Jun 29 12:08:05  vzdump 24575  INFO: Backup job finished successfuly 
Jun 29 12:08:14  proxwww 25668  Starting new child 25668 
Jun 29 12:09:06  proxwww 25716  Starting new child 25716 
Jun 29 12:09:34  proxwww 25739  Starting new child 25739 
Jun 29 12:10:01  cron 25785  (root) CMD (test -x /usr/lib/atsar/atsa1 && /usr/lib/atsar/atsa1) 
Jun 29 12:10:30  proxwww 25815  Starting new child 25815 
Jun 29 12:10:58  proxwww 25846  Starting new child 25846

This is the output from my SBS 2008 VM: I noticed that the VM crashed and restarted during the backup. The VM was not accessible anymore - I had to stop it and start it again.

See the attached syslog output.

I'll keep testing...
 

Attachments

  • Syslog Output.txt (7.7 KB)
