Okay, I can confirm:
Downgrading pve-container to 2.0-31 helps (2.0-33 has the same issue and 2.0-32 was somehow unavailable on my system).
This seems to work without any dependency problems.
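For reference, this is roughly how I did the downgrade and pinned the package so it doesn't get upgraded again right away (just a sketch; it assumes 2.0-31 is still available in the repo or apt cache):
root@host:~# apt-get install pve-container=2.0-31
root@host:~# apt-mark hold pve-container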
Should I create an issue on bugzilla.proxmox.com, or has someone else already done that?
Yeah, there are actually two reasons:
1. The space problem: All disks in the server (except the one with the Proxmox OS) are ZFS formatted, so they can't hold the dump files, and the backups go to an external NFS storage which is not accessible from the new host (run by a different company...
At least I found the source of the `Use of uninitialized value $et in string eq` issue: in `/etc/pve/user.cfg` there was one line with just a ":" and nothing else. I removed that line and those five error messages are gone, but the pct restore error still exists...
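In case someone else hits this, a quick way to spot such a stray line (just a sketch; the path is the one from my setup):
root@host:~# grep -n '^:$' /etc/pve/user.cfg
Any line it reports can then simply be removed with an editor.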
Hi mbosma,
thanks for your reply.
I tried it again with your command, but I get the same error again:
root@host:~# ssh old_host 'vzdump 7701 --mode stop --stdout' | pv | pct restore --rootfs 4 7701 --storage ssd -
unable to restore CT 7701 - ERROR: file '-' does not exist
Use of uninitialized value...
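For completeness, the only workaround I can think of would be to dump into a file first and restore from that, which unfortunately needs scratch space on the new host (just a sketch; paths are only examples):
root@host:~# ssh old_host 'vzdump 7701 --mode stop --stdout' > /var/tmp/vzdump-7701.tar
root@host:~# pct restore 7701 /var/tmp/vzdump-7701.tar --storage ssd --rootfs 4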
Heyho!
I've set up two new hosts in a new datacenter and tried to move LXCs from the old hosts via vzdump & an ssh pipe. This failed several times and I could narrow the problem down to `pct restore` from a pipe:
root@host:~# vzdump 99999 --stdout | pct restore --storage ssd --rootfs 1 99991 -...
I assume none of that. There are a few large ones, but the main problem is that Proxmox waits until each LXC is booted. A small one needs about 10-30s, a large one 1-2min. With more than 20 LXCs on one host, it takes quite a while until everything is up again. And I'm pretty sure my hardware has enough...
Hi Proxmox-Team!
I've "Start at boot" enabled for most of my LXCs. After a host-reboot (e.g. after a kernel-upgrade) it takes about 10-20min until all LXCs are online again, because Proxmox waits for each LXC before starting the next one.
Wouldn't it be possible to start all LXCs at the same...
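For context, the only per-container knobs I'm aware of are the startup order and delay, which only make the sequential start more predictable rather than parallel (just a sketch; the VMID and values are only examples):
root@host:~# pct set 101 --startup order=1,up=10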
On one of my four cluster hosts there is an NFS server, which is backed by a ZFS raid.
Yes, I'm on the latest updates:
proxmox-ve: 5.1-30 (running kernel: 4.13.8-3-pve)
pve-manager: 5.1-38 (running version: 5.1-38/1e9bc777)
Hello everyone!
I have had a strange problem with my LXCs for the last few days and I have already spent hours on it over x-mas, but couldn't solve it...
Here is the "history":
Last week (on 21st & 22nd Dec) two of my four cluster hosts crashed and nothing worked on them (only question marks in the web panel and no...
Hello dear Proxmox-Team!
IPv4 is (or should be) obsolete and IPv4 addresses are expensive, so I'm always looking for ways to save IPv4 addresses for our company and our customers.
One thought was to disable IPv4 on the Proxmox hosts. They are only accessed by our admins and they all...
Not sure, but since I have the bond configured, eth0 and eth1 have the same MAC:
bond0 Link encap:Ethernet HWaddr 98:4b:e1:7f:12:7a
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:144473409 errors:0 dropped:114002 overruns:0 frame:0
TX...
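For reference, the bond is defined in /etc/network/interfaces roughly like this (just a sketch; the exact mode and options on my host may differ):
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup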
Okay, I found something interesting on the container:
`ip -6 neigh` shows "FAILED" on eth0.
I looked at this:
root@network-test:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8...
Okay, that's quite interesting: the IPv6 neighbor packets are reaching the container:
from an external device:
# ping6 2a05:bec0:2:6:2::9999
PING 2a05:bec0:2:6:2::9999(2a05:bec0:2:6:2::9999) 56 data bytes
From 2001:7f8::3:1cf:0:1 icmp_seq=1 Destination unreachable: Address unreachable
From...
Sorry, here are the unedited versions:
on host:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or...