ERROR: vma_queue_write: write error - Broken pipe

Discussion in 'Proxmox VE: Installation and configuration' started by Ivan Gonzalez, Sep 16, 2018.

  1. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0
    I get this on only one VM while backing up; it doesn't happen with the others, and I'm not sure why.


    INFO: starting new backup job: vzdump 110 --mode snapshot --compress lzo --mailnotification always --quiet 1 --mailto ivanjr@nocroom.com --storage nfs-105
    INFO: Starting Backup of VM 110 (qemu)
    INFO: status = running
    INFO: update VM 110: -lock backup
    INFO: VM Name: CID203-PBX
    INFO: include disk 'sata0' 'local-lvm:vm-110-disk-1' 2100G
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: creating archive '/mnt/pve/nfs-105/dump/vzdump-qemu-110-2018_09_16-02_00_01.vma.lzo'
    INFO: started backup task '759e97b9-1dcd-410f-a313-86951c65c01d'
    INFO: status: 0% (456130560/2254857830400), sparse 0% (149807104), duration 3, 152/102 MB/s
    INFO: status: 1% (22677291008/2254857830400), sparse 0% (889499648), duration 233, 96/93 MB/s
    INFO: status: 2% (45145063424/2254857830400), sparse 0% (1554042880), duration 465, 96/93 MB/s
    INFO: status: 3% (67730145280/2254857830400), sparse 0% (2498838528), duration 740, 82/78 MB/s
    INFO: status: 4% (90363265024/2254857830400), sparse 0% (3444244480), duration 1015, 82/78 MB/s
    INFO: status: 5% (112789684224/2254857830400), sparse 0% (4272078848), duration 1300, 78/75 MB/s
    INFO: status: 6% (135428964352/2254857830400), sparse 0% (5270265856), duration 1555, 88/84 MB/s
    INFO: status: 7% (157885792256/2254857830400), sparse 0% (6380662784), duration 1796, 93/88 MB/s
    INFO: status: 8% (180540276736/2254857830400), sparse 0% (7352872960), duration 2050, 89/85 MB/s
    INFO: status: 9% (203042717696/2254857830400), sparse 0% (8446140416), duration 2324, 82/78 MB/s
    INFO: status: 10% (225590771712/2254857830400), sparse 0% (9560981504), duration 2568, 92/87 MB/s
    INFO: status: 11% (248078008320/2254857830400), sparse 0% (10660569088), duration 2849, 80/76 MB/s
    INFO: status: 12% (270617411584/2254857830400), sparse 0% (11770339328), duration 3124, 81/77 MB/s
    INFO: status: 13% (293280546816/2254857830400), sparse 0% (13026332672), duration 3383, 87/82 MB/s
    INFO: status: 14% (315763982336/2254857830400), sparse 0% (14308524032), duration 3646, 85/80 MB/s
    INFO: status: 15% (338274025472/2254857830400), sparse 0% (15679827968), duration 3890, 92/86 MB/s
    INFO: status: 16% (360928444416/2254857830400), sparse 0% (16969453568), duration 4131, 94/88 MB/s
    INFO: status: 17% (383430950912/2254857830400), sparse 0% (18351190016), duration 4378, 91/85 MB/s
    INFO: status: 18% (405906784256/2254857830400), sparse 0% (19581796352), duration 4634, 87/82 MB/s
    INFO: status: 19% (428503531520/2254857830400), sparse 0% (20943134720), duration 4898, 85/80 MB/s
    INFO: status: 20% (451086516224/2254857830400), sparse 0% (22199062528), duration 5137, 94/89 MB/s
    INFO: status: 21% (473642172416/2254857830400), sparse 1% (23526105088), duration 5416, 80/76 MB/s
    INFO: status: 22% (496087269376/2254857830400), sparse 1% (24745619456), duration 5658, 92/87 MB/s
    INFO: status: 23% (518736052224/2254857830400), sparse 1% (26194481152), duration 5895, 95/89 MB/s
    INFO: status: 24% (541297999872/2254857830400), sparse 1% (27523837952), duration 6146, 89/84 MB/s
    INFO: status: 25% (563845791744/2254857830400), sparse 1% (28822945792), duration 6396, 90/84 MB/s
    INFO: status: 26% (586410688512/2254857830400), sparse 1% (30131793920), duration 6650, 88/83 MB/s
    INFO: status: 27% (608922894336/2254857830400), sparse 1% (31624286208), duration 6899, 90/84 MB/s
    INFO: status: 28% (631448141824/2254857830400), sparse 1% (33053663232), duration 7136, 95/89 MB/s
    INFO: status: 29% (653930987520/2254857830400), sparse 1% (34393911296), duration 7390, 88/83 MB/s
    INFO: status: 30% (676564762624/2254857830400), sparse 1% (35703549952), duration 7627, 95/89 MB/s
    INFO: status: 31% (699159740416/2254857830400), sparse 1% (37194743808), duration 7862, 96/89 MB/s
    INFO: status: 32% (721583341568/2254857830400), sparse 1% (38620278784), duration 8104, 92/86 MB/s
    INFO: status: 33% (744268234752/2254857830400), sparse 1% (40092438528), duration 8354, 90/84 MB/s
    INFO: status: 34% (766728273920/2254857830400), sparse 1% (41524240384), duration 8623, 83/78 MB/s
    lzop: No space left on device: <stdout>
    INFO: status: 34% (769408696320/2254857830400), sparse 1% (41778450432), duration 8661, 70/63 MB/s
    ERROR: vma_queue_write: write error - Broken pipe
    INFO: aborting backup job
    ERROR: Backup of VM 110 failed - vma_queue_write: write error - Broken pipe
    INFO: Backup job finished with errors
    TASK ERROR: job errors
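
    For context on why a space problem surfaces as a broken pipe: vzdump streams the archive through the compressor into the dump file, roughly like this (a simplified sketch; the actual invocation differs):

        vma create ... | lzop > /mnt/pve/nfs-105/dump/vzdump-qemu-110-2018_09_16-02_00_01.vma.lzo

    When the filesystem holding the output file fills up, lzop exits with "No space left on device" on its stdout, the pipe closes, and the producer side then reports "vma_queue_write: write error - Broken pipe". The broken pipe is the symptom; the lzop ENOSPC a few lines earlier is the cause.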
     
  2. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0
    And it has a lot of space left.
     
  3. dietmar

    dietmar Proxmox Staff Member
    Staff Member

    Joined:
    Apr 28, 2005
    Messages:
    16,168
    Likes Received:
    268
    But the error message says: "lzop: No space left on device".
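
    A rough estimate from the log above shows how that can happen even on a target with "a lot of space" (ballpark figures, assuming the second MB/s value in each status line is the compressed write rate):

        read at failure:     769408696320 B (~769 GB), of which ~42 GB sparse
        write/read ratio:    ~63/70, i.e. lzo is compressing to only roughly 0.9x
        written so far:      (769 - 42) * 0.9  ->  ~650 GB on the NFS export
        full archive guess:  2.25 TB total * ~0.9  ->  close to 2 TB

    So this single dump may need on the order of 2 TB free on the target before it completes.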
     
  4. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0
    I know, which is why I'm very confused.
     
  5. sahostking

    sahostking Member
    Proxmox VE Subscriber

    Joined:
    Oct 6, 2015
    Messages:
    296
    Likes Received:
    6
    Whenever I received that error, it was always disk-space related: either actual disk space or inodes, etc.
     
  6. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0
    I see, maybe it's the node. What do you do to fix it?
     
  7. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0
    I have all the backups going to NFS.
     
  8. sahostking

    sahostking Member
    Proxmox VE Subscriber

    Joined:
    Oct 6, 2015
    Messages:
    296
    Likes Received:
    6
    Usually you clear out space. Have you checked your NFS location for actual disk space used, as well as the local disks on your node? One of them has surely run out of space.

    Try running these on both servers:

    df -i

    and

    df -h

    to see.
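
    For example, on the PVE node (the NFS mount path comes from the log above; /var/lib/vz is the default local storage directory and is an assumption here):

        df -h / /var/lib/vz /mnt/pve/nfs-105
        df -i / /var/lib/vz /mnt/pve/nfs-105

    Running df against the NFS mount point reports the export's free space, so the backup target can be checked without logging into the NFS server. Inode counts over NFS depend on what the server reports and are not always meaningful.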
     
  9. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0
    Main Proxmox node, df -i:
     
  10. Ivan Gonzalez

    Ivan Gonzalez Member

    Joined:
    Jan 20, 2014
    Messages:
    66
    Likes Received:
    0