backup error with Input/output error

Discussion in 'Proxmox VE: Installation and configuration' started by informant, Mar 3, 2013.

  1. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi, when we start a backup of this CT we get the following errors:

    Code:
    INFO: starting new backup job: vzdump 5132 --remove 0 --mode snapshot --compress lzo --storage temp --node phoenix
    INFO: Starting Backup of VM 5132 (openvz)
    INFO: CTID 5132 exist mounted running
    INFO: status = running
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: trying to remove stale snapshot '/dev/pve/vzsnap-phoenix-0'
    INFO: umount: /mnt/vzsnap0: not mounted
    ERROR: command 'umount /mnt/vzsnap0' failed: exit code 1
    INFO: /dev/pve/vzsnap-phoenix-0: read failed after 0 of 4096 at 4846774059008: Input/output error
    INFO: /dev/pve/vzsnap-phoenix-0: read failed after 0 of 4096 at 4846774116352: Input/output error
    INFO: /dev/pve/vzsnap-phoenix-0: read failed after 0 of 4096 at 0: Input/output error
    INFO: /dev/pve/vzsnap-phoenix-0: read failed after 0 of 4096 at 4096: Input/output error
    INFO: Logical volume "vzsnap-phoenix-0" successfully removed
    INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-phoenix-0')
    INFO: Logical volume "vzsnap-phoenix-0" created
    INFO: creating archive '/mnt/pve/temp/dump/vzdump-openvz-5132-2013_03_02-19_50_00.tar.lzo'
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940_1cb17af5dd9ad44.avi: File shrank by 271046656 bytes; padding with zeros
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940thm_wm_b79ad9bc3c3d2be.jpg: Read error at byte 0, while reading 3254 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940thm_83e3d6731f55d71.jpg: Read error at byte 0, while reading 3254 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940pre_wm_e23eeccba1f0884.jpg: Read error at byte 0, while reading 10240 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940pre_4bd214234ea59ff.jpg: Read error at byte 0, while reading 8192 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940col_841c4fec2cbf7ee.jpg: Read error at byte 0, while reading 1475 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940col_wm_4d45cc711f32ed0.jpg: Read error at byte 0, while reading 1475 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940pre_4bd214234ea59ff.flv: Read error at byte 0, while reading 3072 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/0_dc8a8dbfd082887/940_1cb17af5dd9ad44.jpg: Read error at byte 0, while reading 8192 bytes: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/metadump.xml: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944thm_4421d71605a255a.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944pre_c0239a085844c7c.flv: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944pre_c0239a085844c7c.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944col_fc4d56896a0971e.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944pre_wm_f29ede4b5514eef.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944_3ae19d743b255de.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944col_wm_3c851660037f328.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944_3ae19d743b255de.avi: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/4/4_5d66f8c28e12e6d/944thm_wm_9011163fb07213d.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/3_54c0a5b7ff58935/: Cannot savedir: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/3_54c0a5b7ff58935: Cannot close: Bad file descriptor
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/3_54c0a5b7ff58935/93col_wm_e4b318eaecfba34.jpg: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/3_54c0a5b7ff58935/93pre_d90a0ed06cb7e6a.flv: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/3_54c0a5b7ff58935/93pre_wm_006577719bf309a.jpg: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/3_54c0a5b7ff58935/93thm_wm_89d7bf0218ba186.jpg: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_4_76d3b1b2e5ae26c.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_3_wm_8cf57289697ffae.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/metadump.xml: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_3_314e893428ba8b0.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999col_wm_d156374bf7c8fb5.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_7_b697dbd14e4f0a8.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_a8264436127e941.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_6_wm_b38f7c1f56c901a.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999thm_44c9f5785830c45.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_wm_59299e3f5300c05.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_2_b282adefc6eed56.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999col_9c8abff0c4bfb79.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_5_wm_e8d62810ffd90ce.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_5_2791021937c5225.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999_e240761a5beba8a.pdf: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_2_wm_5a6e36de2a9117f.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_4_wm_7076f6e1e974dd7.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999_e240761a5beba8a.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_6_48109bb59b96850.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999pre_b8d585468d285f6.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999pre_wm_8d1c08086385610.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999scr_7_wm_e39f2d7c39e2c4b.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/9_15b844db2c9a489/999thm_wm_e8d77f559e5c33c.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994scr_wm_2f935acad0f5755.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994thm_36561e500cb64e2.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994thm_wm_5e29ab4c2cc5b92.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994pre_wm_ecc850dff62a49a.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994scr_16e639ab99f187b.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/metadump.xml: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994scr_6_wm_35b75e19cf037ae.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994scr_3_0b171c3652977a4.jpg: Cannot stat: Input/output error
    INFO: tar: ./var/www/cbm1/htdocs/mediathek/filestore/9/9/4_b9cb97182da5dfa/994pre_262f80b63d1f615.jpg: Cannot stat: Input/output error
    ...
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_9085 (Kopie).JPG: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_0465 (Kopie).JPG: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_9733 (Kopie).JPG: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_1775 (Kopie).JPG: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_1138 (Kopie).JPG: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_8544 (Kopie).JPG: Cannot stat: No such file or directory
    INFO: tar: ./var/www/cbm23/htdocs/_gal/import/2012/Drift Days Hockenheim/IMG_8947 (Kopie).JPG: Cannot stat: No such file or directory
    ...
    INFO: tar: ./dev/.udev: Cannot close: Bad file descriptor
    INFO: tar: ./dev/.udev/db/: Cannot savedir: Input/output error
    INFO: tar: ./dev/.udev/db: Cannot close: Bad file descriptor
    INFO: tar: ./dev/.udev/db/net\:eth0: Read error at byte 0, while reading 163 bytes: Input/output error
    INFO: tar: ./dev/.udev/db/net\:eth1: Read error at byte 0, while reading 163 bytes: Input/output error
    INFO: tar: ./dev/.udev/db/net\:eth2: Read error at byte 0, while reading 162 bytes: Input/output error
    INFO: tar: ./dev/pts/: Cannot savedir: Input/output error
    INFO: tar: ./dev/pts: Cannot close: Bad file descriptor
    INFO: Total bytes written: 139820072960 (131GiB, 23MiB/s)
    INFO: tar: Exiting with failure status due to previous errors
    ERROR: Backup of VM 5132 failed - command '(cd /mnt/vzsnap0/private/5132;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|lzop) >/mnt/pve/temp/dump/vzdump-openvz-5132-2013_03_02-19_50_00.tar.dat' failed: exit code 2
    INFO: Backup job finished with errors
    TASK ERROR: job errors
    What can we do here? Please help. Thanks.

    regards
     
  2. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    158
    Hi,
    there is an old snapshot.
    Check it with:
    Code:
    vgs
    lvs
    
    I guess you don't have enough free space in the volume group?

    Remove all snapshots and try the backup again (if you have free space in the VG).
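
    For example, using the stale snapshot name from the log above (a sketch; check the actual LV name with lvs first):

    Code:
    # remove the stale vzdump snapshot volume
    lvremove -f /dev/pve/vzsnap-phoenix-0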

    Udo
     
  3. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi udo,

    13 TB of free space are available on the storage, and there are no old snapshots.
    At the moment it only happens on one CT with 300 GB and many small files.

    Code:
    root@srv:~# vgs
      VG   #PV #LV #SN Attr   VSize VFree
      pve    1   4   1 wz--n- 4,55t 15,00g
    root@srv:~# lvs
      LV               VG   Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
      data             pve  owi-aos-  4,41t
      root             pve  -wi-ao-- 96,00g
      swap             pve  -wi-ao-- 31,00g
      vzsnap-srv-0     pve  swi-aos-  1,00g      data    95,76
    
    
    regards
     
  4. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,469
    Likes Received:
    395
    You have 15 GB of free space, but the backup snapshot volume is just 1 GB by default.

    You can change the default in /etc/vzdump.conf (see 'man vzdump').
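
    A minimal sketch of such a vzdump.conf entry (the value is an example, not a recommendation; size is given in MB):

    Code:
    # /etc/vzdump.conf
    # size of the LVM snapshot volume in MB (default: 1024)
    size: 4096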
     
  5. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi tom,

    Thanks for the information, but I don't understand what you mean.

    The node and the storage have plenty of free space:

    Code:
     df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/pve-root   95G   26G   64G  29% /
    tmpfs                  31G     0   31G   0% /lib/init/rw
    udev                   31G  224K   31G   1% /dev
    tmpfs                  31G   19M   31G   1% /dev/shm
    /dev/mapper/pve-data  4,4T  356G  4,0T   8% /var/lib/vz
    /dev/cciss/c0d0p2     494M   78M  391M  17% /boot
    /dev/fuse              30M   16K   30M   1% /etc/pve
    10.11.12.50:/volume1/storage
                           14T  218G   14T   2% /mnt/pve/storage
    
    What do you mean by 15 GB of free space? Do you mean the free space of the CT?
    And what do you mean by 1 GB for the backup by default?

    I hope you can answer with more details so I can understand. Many thanks.


    regards
     
  6. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,469
    Likes Received:
    395
    I suggest you take a look at the LVM basics, see http://tldp.org/HOWTO/LVM-HOWTO/

    You posted:

    Code:
    VG   #PV #LV #SN Attr   VSize VFree
    pve    1   4   1 wz--n- 4,55t 15,00g

    That means you have 15 GB of free space in your volume group. By default, our backup creates a snapshot volume with a size of 1 GB. As you have 15 GB free, creating it is no problem: the backup starts, and all writes inside your CT during the backup go to the 1 GB snapshot volume. But if your CT writes more than 1 GB during the backup, the snapshot overflows and your backup fails.

    Solution: set a higher value, something between 1 GB and 15 GB.

    Also search the forum; there are a lot of postings addressing exactly the same question.
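
    You can also watch how full the snapshot gets while a backup runs, e.g. with something like this (a sketch; the snapshot name depends on your node):

    Code:
    # Data% shows how much of the snapshot volume is used;
    # the backup fails once it reaches 100%
    watch -n 30 'lvs /dev/pve/vzsnap-phoenix-0'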
     
  7. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Ah, OK, I understand: if the user's container writes more than the default 1 GB to its CT space during the snapshot backup, the snapshot fills up and produces these errors. And I can raise the default 1 GB to a bigger size.

    Many thanks.
     
  8. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi tom,

    I have a question about /etc/vzdump.conf.

    I have set the following entries in this file:

    Code:
    bwlimit: 0
    size: 4096
    The bwlimit option works, but do we have to restart a service after changing this file? When I start a backup in Proxmox the speed is slow. See the log:

    Code:
    INFO: starting new backup job: vzdump 2222 --remove 0 --mode snapshot --compress lzo --storage SLS-001 --node pegasus
    INFO: Starting Backup of VM 2222 (openvz)
    INFO: CTID 2222 exist mounted running
    INFO: status = running
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-pegasus-0')
    INFO: Logical volume "vzsnap-pegasus-0" created
    INFO: creating archive '/mnt/pve/SLS-001/dump/vzdump-openvz-2222-2013_03_07-09_22_26.tar.lzo'
    INFO: Total bytes written: 358469242880 (334GiB, 16MiB/s)
    INFO: archive file size: 242.14GB
    INFO: Finished Backup of VM 2222 (06:47:40)
    INFO: Backup job finished successfully
    TASK OK

    Do you have an idea or some information for us, please? Many thanks.

    regards
     
  9. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,469
    Likes Received:
    395
    With bwlimit you cannot speed up anything; it is a limit, meaning you can only make things slower.

    If you need faster backups, use faster storage hardware and a faster network.

    But a container with 334 GB? I always run much smaller containers and keep the data outside the container (using bind mounts), along the lines of the sketch below.
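
    A rough sketch of such an OpenVZ mount script (paths and CT ID are just examples; see the OpenVZ wiki for details):

    Code:
    #!/bin/bash
    # /etc/vz/conf/2222.mount -- runs on the host when CT 2222 starts
    . /etc/vz/vz.conf
    . ${VE_CONFFILE}
    # bind-mount a host directory into the container
    mount -n --bind /data/www ${VE_ROOT}/var/www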
     
  10. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi tom,

    The storage is a fast Synology box; I can transfer normal ISO files to it at more than 200 MB/s. Only CT backups, both local on the host system and to the storage, max out at 16 MB/s - every CT backup runs at 16 MB/s. If I create a file on the storage with dd I get 207 MB/s; if I create a file locally on the RAID 5 in the host system I get 791 MB/s.
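
    For reference, the dd tests were along these lines (a sketch; block size, count and paths are assumptions):

    Code:
    # write test against the NFS storage and the local RAID 5
    dd if=/dev/zero of=/mnt/pve/storage/ddtest bs=1M count=4096 conv=fdatasync
    dd if=/dev/zero of=/var/lib/vz/ddtest bs=1M count=4096 conv=fdatasync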

    Do you have any idea what we can do to get more speed on backups? I hope so. Many thanks.

    regards
     
  11. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,469
    Likes Received:
    395
    If you run the CTs on the local file system, describe your setup:

    - which RAID controller?
    - which disks?
    - RAID level?
    - do you use ext3?
    - results of 'pveperf'?
     
  12. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi tom,

    Locally we use an HP DL380 with 2x quad-core 3 GHz CPUs,
    8x 1 TB Seagate Enterprise 7200 rpm HDDs in RAID 5 (ST91000640SS),
    a P400 RAID controller with BBU + 512 MB cache (r/w 50/50).
    The filesystem is the stock one from the Proxmox DVD install.

    Code:
     pveperf
    CPU BOGOMIPS:      48002.36
    REGEX/SECOND:      1021492
    HD SIZE:           94.49 GB (/dev/mapper/pve-root)
    BUFFERED READS:    78.31 MB/sec
    AVERAGE SEEK TIME: 14.96 ms
    FSYNCS/SECOND:     1878.55
    DNS EXT:           75.36 ms
    DNS INT:           60.14 ms (myserver.de)
    
     
  13. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,469
    Likes Received:
    395
    The P400 is not known to be very fast, and your disks are quite slow (the access times are not great either), so the backup speed you are getting is expected.

    With a ST91000640SS you can get a max speed of 115 MB/s according to the Seagate specs. As far as I can see, these are among the slowest 7200 rpm drives you can get for your servers. If you use RAID 10, you can improve performance.

    I am not familiar with the new HP RAID controllers, but with a current model from Adaptec or LSI and RAID 10 with 6 fast disks (SAS or SATA), you can expect MUCH better results. Also think about using an SSD for caching; Adaptec and LSI both offer controllers where you can mix SSD and SAS/SATA - check their offerings.

    If you cannot change the hardware, reduce the size of the container or try suspend mode (without LVM); a command-line sketch follows below.
    (see http://pve.proxmox.com/wiki/Backup_and_Restore)
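
    For a one-off test, suspend mode can also be selected on the command line, e.g. (a sketch reusing the CT ID and storage from the log above):

    Code:
    # suspend mode backs up without an LVM snapshot
    # (rsync passes plus a short container suspend)
    vzdump 5132 --mode suspend --compress lzo --storage temp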
     
  14. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,450
    Likes Received:
    308
    Do you have many small files inside the container? This would also explain the behavior because your disk seek times are very bad (15ms!)
     
  15. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi dietmar,

    Yes, we have many small files on the HDDs; that explains the speed problem. The P400 itself works fine, but @tom, you are right that LSI is better. Many thanks.

    regards
     
  16. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi,

    On a VM backup we get this speed:
    Code:
    5531: Mär 08 01:09:09 INFO: transferred 2306397 MB in 3019 seconds (763 MB/s)
    Only the CTs back up slowly.

    regards
     
  17. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,469
    Likes Received:
    395
    VMs and CTs are completely different, so you can't compare these numbers. A VM uses virtual disks (or block devices); CTs run on a file system (ext3/simfs).

    E.g. if you have a 32 GB KVM virtual disk with just 4 GB used, you will see high backup speed numbers, because the new backup detects the empty space - and that is much faster than reading and writing real data such as the small files inside a CT.
     
  18. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    OK, many thanks for this information.

    regards
     
  19. informant

    informant Member

    Joined:
    Jan 31, 2012
    Messages:
    673
    Likes Received:
    6
    Hi tom,

    On one CT we have the same problem even after changing the limit; the error still occurs with the higher limit. The machine receives at most 100 MB of new data while the backup is running, but the backup still failed:

    Code:
    bwlimit: 0
    size: 4096
    Code:
    INFO: trying to get global lock - waiting...
    INFO: got global lock
    INFO: starting new backup job: vzdump 5132 --quiet 1 --mailto info@domain.de --mode snapshot --compress lzo --storage SLS-001 --node titan
    INFO: Starting Backup of VM 5132 (openvz)
    INFO: CTID 5132 exist mounted running
    INFO: status = running
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-titan-0')
    INFO: Logical volume "vzsnap-titan-0" created
    INFO: creating archive '/mnt/pve/SLS-001/dump/vzdump-openvz-5132-2013_03_11-00_31_27.tar.lzo'
    INFO: tar: ./var/www/cbm161/htdocs/gallery/_data/i/galleries/2012/Mai/2012-05-27_Tony_Jantschke_Camp_in_FFF/2012-05-27_WM_140-me.JPG: Read error at byte 0, while reading 8704 bytes: Input/output error
    INFO: tar: ./var/www/cbm161/htdocs/gallery/_data/i/galleries/2012/Mai/2012-05-27_Tony_Jantschke_Camp_in_FFF/2012-05-27_WM_266-th.JPG: Read error at byte 0, while reading 9216 bytes: Input/output error
    INFO: tar: ./var/www/cbm161/htdocs/gallery/_data/i/galleries/2012/Mai/2012-05-27_Tony_Jantschke_Camp_in_FFF/2012-05-27_WM_090-th.JPG: Read error at byte 0, while reading 7168 bytes: Input/output error
    INFO: tar: ./var/www/cbm161/htdocs/gallery/_data/i/galleries/2012/Mai/2012-05-27_Tony_Jantschke_Camp_in_FFF/2012-05-27_WM_439-sq.JPG: Read error at byte 0, while reading 7168 bytes: Input/output error
    ...
    INFO: tar: ./tmp/zend_cache---Zend_LocaleL_de_DE_symbols_: Read error at byte 0, while reading 269 bytes: Input/output error
    INFO: tar: ./tmp/wsdl-cbm158-a0f4c036303af5fa38b3d0f8dee5f9ab: Read error at byte 0, while reading 1024 bytes: Input/output error
    INFO: tar: ./dev/.udev/db/net\:eth1: Read error at byte 0, while reading 163 bytes: Input/output error
    INFO: tar: ./dev/.udev/db/net\:eth0: Read error at byte 0, while reading 163 bytes: Input/output error
    INFO: tar: ./dev/.udev/db/net\:eth2: Read error at byte 0, while reading 162 bytes: Input/output error
    INFO: Total bytes written: 175245127680 (164GiB, 62MiB/s)
    INFO: tar: Exiting with failure status due to previous errors
    ERROR: Backup of VM 5132 failed - command '(cd /mnt/vzsnap0/private/5132;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|lzop) >/mnt/pve/SLS-001/dump/vzdump-openvz-5132-2013_03_11-00_31_27.tar.dat' failed: exit code 2
    INFO: Backup job finished with errors
    postdrop: warning: uid=0: File too large
    sendmail: fatal: root(0): message file too big
    TASK ERROR: job errors
    What can we do here to back up this CT without errors? Do we have to set the size in vzdump.conf on all nodes? Can you help, please? Many thanks.

    regards
     
    #19 informant, Mar 11, 2013
    Last edited: Mar 11, 2013
  20. vvk

    vvk New Member

    Joined:
    Dec 12, 2014
    Messages:
    2
    Likes Received:
    0
    JFYI, we added a patch to display LVM snapshot usage after the backup is done:

    Code:
    --- /tmp/OpenVZ.pm      2015-03-03 14:59:04.274408131 +0500
    +++ /usr/share/perl5/PVE/VZDump/OpenVZ.pm       2015-03-03 15:20:13.940244646 +0500
    @@ -281,6 +281,9 @@
         }
     
         if ($task->{cleanup}->{lvm_snapshot}) {
    +       # log snapshot usage before destroying it
    +       $self->loginfo("snapshot usage is:");
    +       $self->cmd("lvs $di->{snapdev}");
            # loop, because we often get 'LV in use: not deactivating'
            # we use run_command() because we do not want to log errors here
            my $wait = 1;
    
    
    
     