vma_queue_write: write error - Broken pipe

Discussion in 'Proxmox VE: Installation and configuration' started by jtvdw, Dec 6, 2018.

  1. jtvdw

    jtvdw New Member

    Joined:
    Dec 6, 2018
    Messages:
    5
    Likes Received:
    0
    Hi,

    I'm getting this error: "vma_queue_write: write error - Broken pipe" with output below:

    ===
    INFO: starting new backup job: vzdump 500 600 700 800 --mailnotification always --quiet 1 --mailto email@email --compress gzip --mode snapshot --storage local
    INFO: Starting Backup of VM 500 (qemu)
    INFO: status = running
    INFO: update VM 500: -lock backup
    INFO: VM Name: server
    INFO: include disk 'sata0' 'local-lvm:vm-500-disk-1' 160G
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-500-2018_12_06-00_00_01.vma.gz'
    INFO: started backup task '4659ddfa-521f-405a-ac85-32ead6c0b78f'
    INFO: status: 0% (64618496/171798691840), sparse 0% (6344704), duration 3, read/write 21/19 MB/s
    INFO: status: 1% (1733296128/171798691840), sparse 0% (128512000), duration 89, read/write 19/17 MB/s
    INFO: status: 2% (3436183552/171798691840), sparse 0% (137908224), duration 187, read/write 17/17 MB/s
    INFO: status: 3% (5154275328/171798691840), sparse 0% (152936448), duration 271, read/write 20/20 MB/s
    INFO: status: 4% (6891372544/171798691840), sparse 0% (171802624), duration 371, read/write 17/17 MB/s
    INFO: status: 5% (8609464320/171798691840), sparse 0% (230920192), duration 468, read/write 17/17 MB/s
    INFO: status: 6% (10316152832/171798691840), sparse 0% (262688768), duration 565, read/write 17/17 MB/s
    INFO: status: 7% (12038045696/171798691840), sparse 0% (359682048), duration 638, read/write 23/22 MB/s
    INFO: status: 8% (13748535296/171798691840), sparse 0% (371773440), duration 725, read/write 19/19 MB/s
    INFO: status: 9% (15462825984/171798691840), sparse 0% (375214080), duration 815, read/write 19/19 MB/s
    INFO: status: 10% (17196122112/171798691840), sparse 0% (407920640), duration 908, read/write 18/18 MB/s
    INFO: status: 11% (18910412800/171798691840), sparse 0% (414490624), duration 1000, read/write 18/18 MB/s
    INFO: status: 12% (20624703488/171798691840), sparse 0% (425209856), duration 1087, read/write 19/19 MB/s
    INFO: status: 13% (22338994176/171798691840), sparse 0% (472186880), duration 1178, read/write 18/18 MB/s
    INFO: status: 14% (24068489216/171798691840), sparse 0% (474386432), duration 1273, read/write 18/18 MB/s
    INFO: status: 15% (25835995136/171798691840), sparse 0% (547229696), duration 1365, read/write 19/18 MB/s
    INFO: status: 16% (27489468416/171798691840), sparse 0% (605716480), duration 1454, read/write 18/17 MB/s
    INFO: status: 17% (29218963456/171798691840), sparse 0% (607313920), duration 1545, read/write 19/18 MB/s
    INFO: status: 18% (30933778432/171798691840), sparse 0% (631803904), duration 1635, read/write 19/18 MB/s
    INFO: status: 19% (32658948096/171798691840), sparse 0% (632762368), duration 1725, read/write 19/19 MB/s
    INFO: status: 20% (34361835520/171798691840), sparse 0% (633516032), duration 1797, read/write 23/23 MB/s
    INFO: status: 21% (36102733824/171798691840), sparse 0% (635076608), duration 1884, read/write 20/19 MB/s
    INFO: status: 22% (37813092352/171798691840), sparse 0% (640946176), duration 1964, read/write 21/21 MB/s
    INFO: status: 23% (39531315200/171798691840), sparse 0% (687525888), duration 2042, read/write 22/21 MB/s
    INFO: status: 24% (41250586624/171798691840), sparse 0% (712798208), duration 2128, read/write 19/19 MB/s
    INFO: status: 25% (42967498752/171798691840), sparse 0% (715358208), duration 2213, read/write 20/20 MB/s
    INFO: status: 26% (44670386176/171798691840), sparse 0% (715468800), duration 2305, read/write 18/18 MB/s
    INFO: status: 27% (46392279040/171798691840), sparse 0% (715468800), duration 2407, read/write 16/16 MB/s
    INFO: status: 28% (48106569728/171798691840), sparse 0% (715608064), duration 2499, read/write 18/18 MB/s
    INFO: status: 29% (49841766400/171798691840), sparse 0% (716537856), duration 2593, read/write 18/18 MB/s
    INFO: status: 30% (51550355456/171798691840), sparse 0% (716865536), duration 2685, read/write 18/18 MB/s
    INFO: status: 31% (53268447232/171798691840), sparse 0% (716881920), duration 2782, read/write 17/17 MB/s
    INFO: status: 32% (54978936832/171798691840), sparse 0% (716881920), duration 2878, read/write 17/17 MB/s
    INFO: status: 33% (56700829696/171798691840), sparse 0% (717217792), duration 2977, read/write 17/17 MB/s
    INFO: status: 34% (58422722560/171798691840), sparse 0% (727195648), duration 3068, read/write 18/18 MB/s
    INFO: status: 35% (60134785024/171798691840), sparse 0% (727207936), duration 3166, read/write 17/17 MB/s

    gzip: stdout: No space left on device
    INFO: status: 35% (60939042816/171798691840), sparse 0% (727220224), duration 3219, read/write 15/15 MB/s
    ERROR: vma_queue_write: write error - Broken pipe
    INFO: aborting backup job
    ERROR: Backup of VM 500 failed - vma_queue_write: write error - Broken pipe
    ===

    I did notice the "No space left on device" message and have already increased the size, but it still fails. Is this referring to the main node's disk space or the VM's disk space?

    If anyone has come across this and/or can help, I'd be thankful.

    Much appreciated.

    Thanks.
     
  2. oguz

    oguz Proxmox Staff Member
    Staff Member

    Joined:
    Nov 19, 2018
    Messages:
    32
    Likes Received:
    2
    I'm guessing since `--storage local` was passed, it should be the main node's disk space. You can check how much space is being used with
    Code:
    df -h /
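    If it helps, you can also check what the existing backups themselves occupy. The dump path below is the Proxmox default for the "local" storage — check /etc/pve/storage.cfg if yours differs:
    Code:

```shell
# /var/lib/vz/dump is the default vzdump target for the "local" storage
# (an assumption; verify in /etc/pve/storage.cfg on your node)
du -sh /var/lib/vz/dump/ 2>/dev/null || echo "dump directory not found"
df -h /    # overall root filesystem usage
```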
     
  3. jtvdw

    jtvdw New Member

    Hi,

    Please see command output below:

    Code:
    user@server:~$ df -h /
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/pve-root   94G   56G   34G  63% /

    How do I go about increasing the disk size, using the lvextend and resize2fs commands?

    Thanks.
     
  4. oguz

    oguz Proxmox Staff Member
    It looks like you only have 34G of space left on pve-root, but your VM disk is about 160G if I'm not mistaken. You will need bigger storage for the backup.

    https://www.tldp.org/HOWTO/LVM-HOWTO/extendlv.html
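    A rough sketch of the steps from that HOWTO, assuming the stock Proxmox layout (volume group "pve", root LV "root", ext4) and that the VG actually has free extents — the size and device names are assumptions, so verify them on your node first:
    Code:

```shell
# Check free space (VFree column) in the volume groups first:
vgs 2>/dev/null || echo "LVM tools not available on this machine"
# If the VG has enough free extents, grow the LV and then the filesystem.
# Left commented out because resizing the wrong LV is destructive --
# double-check the device paths with lvdisplay and df -h before running:
#   lvextend -L +100G /dev/pve/root
#   resize2fs /dev/mapper/pve-root
```

Note that if the volume group has no free extents (VFree is 0), lvextend has nothing to grow into, and you'd need to add a physical volume to the VG first.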
     
  5. jtvdw

    jtvdw New Member

    Hi,

    Thanks, I followed the steps in the link but can't seem to get it going.

    And I don't really have sudo privileges.

    Please let me know.

    Thanks.
     
  6. jtvdw

    jtvdw New Member

    Hi,

    Any other options I can try, maybe through the PVE Web Gui?

    Please let me know.

    Thanks.
     
  7. oguz

    oguz Proxmox Staff Member
    You can try backing up to another (bigger) storage.
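    A hypothetical example of what that could look like, assuming a bigger disk is already mounted at /mnt/bigdisk — the storage ID "bigdisk" and the mount point are made-up names, and the commands are left commented since they only make sense on the PVE node itself:
    Code:

```shell
# Register the directory as a Proxmox storage that accepts backups,
# then point the backup job at it instead of "local":
#   pvesm add dir bigdisk --path /mnt/bigdisk --content backup
#   vzdump 500 --storage bigdisk --mode snapshot --compress gzip
echo "pvesm/vzdump invocations shown above as comments"
```

The same storage can also be added through the web GUI under Datacenter -> Storage -> Add -> Directory, and then selected in the backup job's Storage dropdown.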
     
    mbaldini likes this.
  8. jtvdw

    jtvdw New Member

    Hi,

    Is there no other way to increase the size of the disk without installing any software (apt install <software>), for example through the Proxmox web GUI?

    Why does Proxmox only create a ~100GB root volume when installing? I probably should have changed it myself.

    The machine does have a terabyte HDD installed; should I just add additional storage and save the backups there, i.e. /backups?

    Thanks.
     