Backing up external HDD: HTTP/2.0 connection failed

Florius

Well-Known Member
Jul 2, 2017
Hi,

I am unable to back up from one external HDD to another external HDD:

Code:
append chunks list len (64)
HTTP/2.0 connection failed
"Filepath"
catalog upload error - channel closed

I am, however, able to back up the root disk to the external HDD, and can also back up about 10 LXC containers.
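For reference, the backup is started with proxmox-backup-client roughly as below; the repository name and source path here are placeholders, not the exact values used:

Code:
# illustrative only - repository and path are placeholders
export PBS_REPOSITORY='root@pam@192.168.1.10:datastore1'
# back up the mounted external HDD as a .pxar archive
proxmox-backup-client backup external-hdd.pxar:/mnt/external-hdd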
Please advise, thank you!
 
Maybe just a network failure. Have you tried multiple times?
It's an LXC container, all happening on a local network (within the same server, even). I tried 10 times; it keeps giving the same error on different files.
 
Anything special about that container? Please try to find out what's different from the other containers.
 
My apologies; the external HDD is connected to the server, which runs an LXC container with another external HDD passed through.
So it happens from a physical machine to a container.
I can copy the internal SSD just fine though. I guess the problem is the external HDD, but why?
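For completeness, the bind mount can be inspected from the host and from inside the container like this (container ID 109 is assumed from the config further down):

Code:
# on the PVE host: show the mount point entry and the underlying filesystem
pct config 109 | grep mp0
findmnt /backup
# inside the container: confirm the bind mount is visible there as well
pct exec 109 -- findmnt /backup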
 
Maybe you can share your LXC configuration? It's quite hard to debug such things without any information...

Or even better: Provide us with the steps to create such container, so that we can reproduce the bug here.
 
Understandable; I just wasn't sure what information was required.
I create them with Terraform.

Code:
arch: amd64
cmode: tty
console: 1
cpulimit: 0
cpuunits: 1024
hostname: backup
memory: 512
mp0: /backup,mp=/backup
net0: name=eth0,bridge=vmbr0,hwaddr=0E:45:34:53:FB:82,ip=dhcp,type=veth
onboot: 0
ostype: debian
protection: 0
rootfs: local-lvm:vm-109-disk-0,size=10G
swap: 512
tty: 2
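Assuming the PBS datastore in that container sits on the /backup bind mount, it can be verified from the host like this (just the standard check, not output from this post):

Code:
# list the datastores configured in the PBS container and their paths
pct exec 109 -- proxmox-backup-manager datastore list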

No clue if it's useful, but here is the Terraform configuration as well:

Code:
terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.7.0"
    }
  }
}


provider "proxmox" {
    pm_api_url = "https://server-01/api2/json"
}


locals {
  containers = [
    { hostname = "unifi", template = "local:vztmpl/debian-9.0-standard_9.7-1_amd64.tar.gz", memory = "2048", grains = { roles = ["unifi", "haproxy", "mongodb"] } },
    { hostname = "icinga", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz", grains = { roles = ["icinga-master", "mysql", "redis", "apache"]} },
    { hostname = "pihole", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz", ip = "192.168.1.253/24", gw = "192.168.1.1", grains = { roles = ["pihole", "lighttpd"]} },
    { hostname = "prometheus", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz", grains = { roles = ["prometheus", "haproxy"]} },
    { hostname = "grafana", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz", size = "30G", grains = { roles = ["influxdb", "grafana", "haproxy"]} },
    { hostname = "librenms", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz" , grains = { roles = ["librenms", "mysql", "apache"]} },
    { hostname = "smokeping", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz" , grains = { roles = ["smokeping", "apache"]} },
    { hostname = "mqttbroker", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz", grains = { roles = ["mqttbroker", "haproxy"] } },
    { hostname = "homeassistant", template = "local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz", grains = { roles = ["homeassistant", "haproxy"] } },
    { hostname = "backup", template = "local:vztmpl/debian-10-standard_10.5-1_amd64.tar.gz", grains = { roles = ["backup"] } },
  ]
}


resource "proxmox_lxc" "create_container" {
  count = length(local.containers)


  vmid = 100+count.index
  hostname = lookup(local.containers[count.index], "hostname")
  ostemplate = lookup(local.containers[count.index], "template")
  target_node = "server-01"
  ostype = "debian"
  start = true
  memory = lookup(local.containers[count.index], "memory", "512")
  rootfs {
    storage = lookup(local.containers[count.index], "storage", "local-lvm")
    size = lookup(local.containers[count.index], "size", "10G")
  }


  network {
    name   = "eth0"
    bridge = "vmbr0"
    ip     = lookup(local.containers[count.index], "ip", "dhcp")
    gw     = lookup(local.containers[count.index], "gw", null)
  }


  provisioner "local-exec" {
    when    = destroy
    command = <<EOT
      sudo salt-key --force-color --yes -d ${self.hostname}
      EOT
  }


  provisioner "local-exec" {
    command = <<EOT
      sleep 5s
      sudo pct exec ${count.index+100} -- /bin/bash -c "apt update && apt install -y curl sudo"
      sudo pct exec ${count.index+100} -- /bin/bash -c "curl -L https://bootstrap.saltproject.io | sudo sh -s -- -X -x python3 stable 3003"
      sudo pct exec ${count.index+100} -- /bin/bash -c "systemctl stop salt-minion"
      sudo pct exec ${count.index+100} -- /bin/bash -c "echo 'master: server-01' > /etc/salt/minion.d/99-master-address.conf"
      sudo lxc-attach ${count.index+100} -- /bin/bash -c "echo '${yamlencode(lookup(local.containers[count.index], "grains"))}' > /etc/salt/grains"
      sudo pct exec ${count.index+100} -- /bin/bash -c "systemctl restart salt-minion"
      sleep 5s
      sudo salt-key --force-color --yes -a  ${lookup(local.containers[count.index], "hostname")}
    EOT
  }
}
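To reproduce, the containers are created with the usual Terraform workflow (assuming credentials for the Telmate provider are supplied via environment variables or the provider block):

Code:
# run from the directory holding the configuration above
terraform init
terraform plan
terraform apply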

Please let me know what else I can provide to help!

EDIT: Added a screenshot; it also just doesn't complete in the UI.
There are no more running tasks, and the status is "stopped: unknown".
The last few log lines from the UI:

Code:
2021-05-24T09:09:08+00:00: POST /dynamic_chunk
2021-05-24T09:09:08+00:00: upload_chunk done: 3245323 bytes, c9c21db43c0d61afb3d9fdaddeb9181cb858c1ed1cd5ad6e4f96cad4140d5724
2021-05-24T09:09:08+00:00: upload_chunk done: 5053531 bytes, cb759c64d8c5a60a363c256c04589f81120d916b2993947faf535ef7ca8779d3
2021-05-24T09:09:09+00:00: POST /dynamic_chunk
2021-05-24T09:09:09+00:00: upload_chunk done: 2433474 bytes, 60e30a96c5ebc4d3ed9ace2fbadd24ae8e1f6b0c9218535fdede539ce8c6441e
2021-05-24T09:09:09+00:00: upload_chunk done: 2135478 bytes, d299a3e97c1728127080a359afcbefdee731ded7f7518dfde003903e68844ccf
2021-05-24T09:09:09+00:00: POST /dynamic_chunk
2021-05-24T09:09:09+00:00: POST /dynamic_chunk

EDIT2: They are non-native Linux files though, like .exe, .rar, and .bin files. Could this have anything to do with it?

Thanks!
 

Attachment: Schermafbeelding 2021-05-24 om 11.11.09.png (89.3 KB)
What kind of file system is /backup? Does backup work if you (temporarily) remove that mount point?
 
Also, please make sure you run the latest version...
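For example (these are just the usual checks, not commands quoted from this thread), the filesystem and the installed versions can be confirmed with:

Code:
# filesystem type of the datastore mount, run inside the PBS container
findmnt -o TARGET,SOURCE,FSTYPE /backup
# client version on the machine that runs the backup
proxmox-backup-client version
# server-side package versions inside the PBS container
proxmox-backup-manager versions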
Thanks Dietmar for your advice.

I run version 1.1.7-1 as the client and 1.1-5 as the server.
The file system is ext4.

I'm afraid I don't understand your comment regarding the unmount.
I unmounted it on the server side and tried again, resulting in the same error.
 
