pct restore: unable to parse config line

postcd

# pct restore 101 /vzdump-3810.tgz
unable to parse config line: ▒▒▒▒=

I know this config line is unimportant. How can I proceed with the pct restore even though there is that bad config line, please?

Also, in which file inside that OpenVZ .tgz is this config line?
 
Just to be sure: does container 101 already exist? If so, please post the content of

# cat /etc/pve/lxc/101.conf

> Also, in which file inside that OpenVZ .tgz is this config line?

./etc/vzdump/vps.conf

I guess you need to modify that file inside the archive to be able to extract it.
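
If you just want to look at or pull out that one file without unpacking the whole backup, something along these lines should work (GNU tar assumed, archive name taken from the first post):

Code:
# tar -xzOf vzdump-3810.tgz ./etc/vzdump/vps.conf | cat -v
# tar -xzf vzdump-3810.tgz ./etc/vzdump/vps.conf

The first command prints the config to stdout with non-printable bytes made visible, so you can see the garbled line; the second extracts only that file into the current directory for editing.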
 
101 did not exist; this was a new server.
> I guess you need to modify that file inside the archive to be able to extract it.

I spent some time trying to extract it and then update it back into the archive, but repeatedly ended up with the error "archive contains no configuration file":
gzip -d archive.tgz
tar -xf archive.tar ./etc/vzdump/vps.conf
..modify..
tar -uf archive.tar /etc/vzdump/vps.conf
tar -czf vzdump.tgz archive.tar
pct list;pct restore 105 archive.tgz
-> "archive contains no configuration file"

I'm unsure what I did wrong; maybe the path syntax, or maybe I should delete the file and re-add it instead of updating it. If anyone knows how, please let me know.
 
The vps.conf needs to be the first file inside the archive.

Code:
# mkdir data
# cd data
# tar xpf ../archive.tgz
# ... modify...
# tar -czpf ../archive-new.tgz ./etc/vzdump/vps.conf .
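
To double-check before restoring, listing the new archive should show the config as the very first entry:

Code:
# tar -tzf ../archive-new.tgz | head -n 1
./etc/vzdump/vps.conf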
 
I have been trying for a few days to move our current containers, based on OpenVZ CentOS 7, to a new LXC Proxmox 5.2. I have backups of all containers in tarballs; each container is in a separate tarball backed up from /vz/private/VEID.

When I tried the command:
pct restore NEW-VMID vzdump-file

I end up with this message:
ERROR: archive contains no configuration file


Current container kernel:
uname -r
3.10.0-514.16.1.vz7.30.15

What might be wrong?
 
You need to either use 'pct create' or pass all the mandatory options to 'pct restore', or else re-pack your archives to include the configuration files.
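
For the "pass the mandatory options" route, a rough sketch of what the call can look like when the archive only contains a root filesystem; the storage name, sizes, hostname, and network values below are just placeholders (note the .tar.gz ending, see the end of the thread):

Code:
# pct restore 1457 /var/lib/vz/dump/archive-new.tar.gz \
    --rootfs containers:70 \
    --ostype centos \
    --hostname myct \
    --memory 1024 \
    --swap 512 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp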
 
I have re-created the tar archive as was suggested above:
tar -czpf ../archive-new.tgz ./etc/vzdump/vps.conf .
Now the restore passes this point and I end up with a different message:
pct restore 1457 archive-new.tgz -storage containers

file does not look like a template archive: /var/lib/vz/dump/archive-new.tgz

Any suggestions would be greatly appreciated. Thx
 
I have also attached part of the tarball tree. Maybe something here is wrong:

root@ns3111642:/var/lib/vz/dump# tar -xzvf archive-new.tgz
var/lib/vz/dump/vps.conf/
var/lib/vz/dump/vps.conf/media/
var/lib/vz/dump/vps.conf/aquota.group
var/lib/vz/dump/vps.conf/fastboot
var/lib/vz/dump/vps.conf/boot/
var/lib/vz/dump/vps.conf/.cpt_hardlink_dir_a920e4ddc233afddc9fb53d26c392319/
var/lib/vz/dump/vps.conf/etc/
var/lib/vz/dump/vps.conf/etc/lsb-base-logging.sh
var/lib/vz/dump/vps.conf/etc/mtools.conf
var/lib/vz/dump/vps.conf/etc/warnquota.conf
var/lib/vz/dump/vps.conf/etc/bash_completion.d/
var/lib/vz/dump/vps.conf/etc/bash_completion.d/initramfs-tools
var/lib/vz/dump/vps.conf/etc/bash_completion.d/axi-cache
var/lib/vz/dump/vps.conf/etc/bash_completion.d/debconf
var/lib/vz/dump/vps.conf/etc/bash_completion.d/apache2.2-common
var/lib/vz/dump/vps.conf/etc/bash_completion.d/upstart
var/lib/vz/dump/vps.conf/etc/bash_completion.d/insserv
var/lib/vz/dump/vps.conf/etc/bash_completion.d/apt-show-versions
var/lib/vz/dump/vps.conf/etc/sudoers.d/
var/lib/vz/dump/vps.conf/etc/sudoers.d/README
var/lib/vz/dump/vps.conf/etc/quotatab
var/lib/vz/dump/vps.conf/etc/apache2/
var/lib/vz/dump/vps.conf/etc/apache2/sites-available/
var/lib/vz/dump/vps.conf/etc/apache2/sites-available/default
var/lib/vz/dump/vps.conf/etc/apache2/sites-available/default-ssl
var/lib/vz/dump/vps.conf/etc/apache2/httpd.conf
var/lib/vz/dump/vps.conf/etc/apache2/sites-enabled/
var/lib/vz/dump/vps.conf/etc/apache2/sites-enabled/000-default
var/lib/vz/dump/vps.conf/etc/apache2/mods-available/
var/lib/vz/dump/vps.conf/etc/apache2/mods-available/disk_cache.conf
var/lib/vz/dump/vps.conf/etc/apache2/mods-available/alias.conf
var/lib/vz/dump/vps.conf/etc/apache2/mods-available/ssl.load
var/lib/vz/dump/vps.conf/etc/apache2/mods-available/dav_fs.load
var/lib/vz/dump/vps.conf/etc/apache2/mods-available/authnz_ldap.load
 
Those paths are not correct. Check an existing backup or template archive to see how it is supposed to look.
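
For reference, a correctly packed archive lists everything relative to the container root, with the config under ./etc/vzdump/ and the usual top-level directories directly below ./; roughly like this (directory names and their order will of course vary):

Code:
# tar -tzf archive-new.tgz | head
./etc/vzdump/vps.conf
./
./bin/
./boot/
./etc/
...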
 
Thank you for all the suggestions. I have now gotten past the restore process, and the container showed up in the GUI at the end.
When I start it in the GUI or CLI, I get the following messages:

pct restore 8888 8888.tar --rootfs containers:70 -storage containers
extracting archive '/var/lib/vz/dump/8888.tar'
Total bytes read: 56113152000 (53GiB, 173MiB/s)
Detected container architecture: amd64


root@ns3111642:~# pct start 8888
Job for pve-container@8888.service failed because the control process exited with error code.
See "systemctl status pve-container@8888.service" and "journalctl -xe" for details.
command 'systemctl start pve-container@8888' failed: exit code 1
root@ns3111642:~#


root@ns3111642:~# systemctl status pve-container@8888.service
pve-container@8888.service - PVE LXC Container: 8888
Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-12-01 01:21:08 UTC; 7s ago
Docs: man:lxc-start
man:lxc
man:pct
Process: 25635 ExecStart=/usr/bin/lxc-start -n 8888 (code=exited, status=1/FAILURE)

gru 01 01:21:07 ns3111642 systemd[1]: Starting PVE LXC Container: 8888...
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Control process exited, code=exited status=1
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Killing process 25638 (lxc-start) with signal SIGKILL.
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Killing process 25741 (apparmor_parser) with signal SIG
gru 01 01:21:08 ns3111642 systemd[1]: Failed to start PVE LXC Container: 8888.
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Unit entered failed state.
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Failed with result 'exit-code'.
-- Subject: pve-container@8888.service failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- pve-container@8888.service failed.
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Unit entered failed state.
gru 01 01:21:08 ns3111642 pvedaemon[14057]: unable to get PID for CT 8888 (not running?)
gru 01 01:21:08 ns3111642 systemd[1]: pve-container@8888.service: Failed with result 'exit-code'.
gru 01 01:21:08 ns3111642 pct[25633]: command 'systemctl start pve-container@8888' failed: exit code 1
gru 01 01:21:08 ns3111642 pct[25632]:
vzstart:8888:root@pam
gru 01 01:21:08 ns3111642 systemd-timesyncd[1147]: Synchronized to time server 213.251.128.249:123 (ntp.ovh.net).
gru 01 01:21:11 ns3111642 sshd[25842]: Invalid user deploy from 212.224.125.240 port 34395
gru 01 01:21:11 ns3111642 sshd[25842]: input_userauth_request: invalid user deploy [preauth]
gru 01 01:21:11 ns3111642 sshd[25842]: pam_unix(sshd:auth): check pass; user unknown
gru 01 01:21:11 ns3111642 sshd[25842]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=
gru 01 01:21:13 ns3111642 sshd[25842]: Failed password for invalid user deploy from 212.224.125.240 port 34395 ssh2
gru 01 01:21:13 ns3111642 sshd[25842]: Received disconnect from 212.224.125.240 port 34395:11: Bye Bye [preauth]
gru 01 01:21:13 ns3111642 sshd[25842]: Disconnected from 212.224.125.240 port 34395 [preauth]
gru 01 01:21:42 ns3111642 sshd[26217]: Invalid user o2 from 51.68.140.29 port 58150
gru 01 01:21:42 ns3111642 sshd[26217]: input_userauth_request: invalid user o2 [preauth]
gru 01 01:21:42 ns3111642 sshd[26217]: pam_unix(sshd:auth): check pass; user unknown
gru 01 01:21:42 ns3111642 sshd[26217]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser=
gru 01 01:21:44 ns3111642 sshd[26217]: Failed password for invalid user o2 from 51.68.140.29 port 58150 ssh2
gru 01 01:21:44 ns3111642 sshd[26217]: Received disconnect from 51.68.140.29 port 58150:11: Bye Bye [preauth]
gru 01 01:21:44 ns3111642 sshd[26217]: Disconnected from 51.68.140.29 port 58150 [preauth]


/var/lib/vz/dump# cd 8888
/var/lib/vz/dump/8888# cd etc
/var/lib/vz/dump/8888/etc# cd vzdump
/var/lib/vz/dump/8888/etc/vzdump# ls
pct.conf
/var/lib/vz/dump/8888/etc/vzdump# cat pct.conf
arch: amd64
cores: 1
hostname: serv100
memory: 512
net0: name=eth0,bridge=vmbr0,hwaddr=EA:05:03:EB:0A:4E,type=veth
ostype: ubuntu
rootfs: containers:subvol-105-disk-0,size=70G
swap: 512
 
Here is the CLI output; I attached the log file. Thanks.
Thx

lxc-start -n 8888 -F -l DEBUG -o /tmp/lxc-8888.log
lxc-start: 8888: cgroups/cgfsng.c: create_path_for_hierarchy: 1211 The cgroup "/sys/fs/cgroup/systemd//lxc/8888" already existed
lxc-start: 8888: cgroups/cgfsng.c: cgfsng_create: 1325 Failed to create cgroup "/sys/fs/cgroup/systemd//lxc/8888"
lxc-start: 8888: cgroups/cgfsng.c: create_path_for_hierarchy: 1211 The cgroup "/sys/fs/cgroup/systemd//lxc/8888-1" already existed
lxc-start: 8888: cgroups/cgfsng.c: cgfsng_create: 1325 Failed to create cgroup "/sys/fs/cgroup/systemd//lxc/8888-1"
lxc-start: 8888: cgroups/cgfsng.c: create_path_for_hierarchy: 1211 The cgroup "/sys/fs/cgroup/systemd//lxc/8888-2" already existed
lxc-start: 8888: cgroups/cgfsng.c: cgfsng_create: 1325 Failed to create cgroup "/sys/fs/cgroup/systemd//lxc/8888-2"
lxc-start: 8888: cgroups/cgfsng.c: create_path_for_hierarchy: 1211 The cgroup "/sys/fs/cgroup/systemd//lxc/8888-3" already existed
lxc-start: 8888: cgroups/cgfsng.c: cgfsng_create: 1325 Failed to create cgroup "/sys/fs/cgroup/systemd//lxc/8888-3"
lxc-start: 8888: sync.c: __sync_wait: 59 An error occurred in another process (expected sequence number 7)
lxc-start: 8888: start.c: __lxc_start: 1948 Failed to spawn container "8888"
lxc-start: 8888: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: 8888: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
 

Attachments: lxc-8888.log (23 KB)
The log says what is wrong: your rootfs does not contain an /sbin/init. Most likely your doctored template/backup archive still has wrong paths and init ended up somewhere else.
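
A quick way to see where init actually ended up is to grep the archive listing (archive path taken from the restore command above); if the only match sits under a longer prefix instead of ./sbin/init, the paths are still wrong:

Code:
# tar -tf /var/lib/vz/dump/8888.tar | grep 'sbin/init'
./sbin/init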
 
I'm sorry to revive an old-ish thread, but I also had the problem that was silently solved in "pct restore: unable to parse config line" and couldn't find the solution until I read the source code of src/PVE/LXC/Create.pm:

Code:
    if ($archive =~ /\.tar(\.[^.]+)?$/) {
        if (defined($1)) {
            @compression_opt = $compression_map{$1}
                or die "unrecognized compression format: $1\n";
        }
    } else {
        die "file does not look like a template archive: $archive\n";
    }

pct restore won't accept a .tgz file.

The solution for me was to rename the file so that it ends in .tar.gz instead of .tgz.
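
So for the "file does not look like a template archive" error earlier in the thread, a rename is enough:

Code:
# mv /var/lib/vz/dump/archive-new.tgz /var/lib/vz/dump/archive-new.tar.gz
# pct restore 1457 /var/lib/vz/dump/archive-new.tar.gz -storage containers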
 
> pct restore won't accept a .tgz file. The solution for me was to rename the file so that it ends in .tar.gz instead of .tgz.

Exactly the same thing happened to me. PVE OpenVZ created a .tgz, while pct cannot handle that, only .tar.gz.
 
