LXD to Proxmox

lmuradyan
Apr 15, 2020
Hi, I am trying to move LXD containers to Proxmox, but I keep running into errors.
My approach is the following:
On the LXD host I created an image using the "lxc publish" command, then exported it using the "lxc image export" command.
After that I uploaded the resulting tarball to the Proxmox host and tried to create a CT using the web interface, but got the following error:

Code:
extracting archive '/var/lib/vz/template/cache/devmoodle.tar.gz'
Total bytes read: 14181140480 (14GiB, 124MiB/s)
Architecture detection failed: open '/bin/sh' failed: No such file or directory

Falling back to amd64.
Use `pct set VMID --arch ARCH` to change.
TASK ERROR: unable to create CT 105 - unable to detect OS distribution
As far as I can see, '/bin/sh' is present, but it is a symlink to 'dash'.

Can somebody guide me on how to migrate an LXD container to Proxmox?
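For anyone hitting the same message: a plausible reading of the error is that the detection looks for /bin/sh relative to the root of the extracted archive, and an LXD export nests everything under rootfs/. A throwaway sketch (all file names hypothetical) that reproduces the mismatch:

```shell
# Build a minimal LXD-style image: metadata.yaml plus everything under rootfs/
mkdir -p image/rootfs/bin
touch image/metadata.yaml image/rootfs/bin/sh
tar -czf lxd-image.tar.gz -C image metadata.yaml rootfs

# Extract it and look for /bin/sh relative to the extraction root
mkdir extracted
tar -xzf lxd-image.tar.gz -C extracted
ls extracted                        # metadata.yaml  rootfs  -- no bin/ here
test -e extracted/bin/sh || echo "no /bin/sh at archive root"
```

So /bin/sh being present inside the container does not help; what matters is where it sits inside the tarball.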
 
Can somebody guide me on how to migrate an LXD container to Proxmox?
Set the ostype to unmanaged. That way, Proxmox VE doesn't manage the container settings.
 
I am still doing something wrong :(

I am running
Code:
pct create 106 /var/lib/vz/template/cache/devmoodle.tar.gz --storage local-ZFS --unprivileged=0 --ostype unmanaged

and getting a lot of errors like:

Code:
tar: rootfs/etc/apparmor/init/network-interface-security: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs/etc/apparmor/init: Cannot utime: Disk quota exceeded
tar: rootfs/etc/apparmor/init: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs/etc/apparmor/init: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs/etc/apparmor: Cannot utime: Disk quota exceeded
tar: rootfs/etc/apparmor: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs/etc/apparmor: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs/etc/alternatives: Cannot utime: Disk quota exceeded
tar: rootfs/etc/alternatives: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs/etc/alternatives: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs/etc: Cannot utime: Disk quota exceeded
tar: rootfs/etc: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs/etc: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs/dev: Cannot utime: Disk quota exceeded
tar: rootfs/dev: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs/dev: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs/bin: Cannot utime: Disk quota exceeded
tar: rootfs/bin: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs/bin: Cannot change mode to rwxr-xr-x: Disk quota exceeded
tar: rootfs: Cannot utime: Disk quota exceeded
tar: rootfs: Cannot change ownership to uid 0, gid 0: Disk quota exceeded
tar: rootfs: Cannot change mode to rwxr-xr-x: Disk quota exceeded
Total bytes read: 14181140480 (14GiB, 132MiB/s)
tar: Exiting with failure status due to previous errors
unable to create CT 106 - command 'tar xpf - -z --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/106/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

I guess there is not enough space for the CT. But when I add the size option

Code:
pct create 106 /var/lib/vz/template/cache/devmoodle.tar.gz --storage local-ZFS --size=50G --unprivileged=0 --ostype unmanaged

I am getting an error:

Code:
Unknown option: size
400 unable to parse option
pct create <vmid> <ostemplate> [OPTIONS]

Is it possible to point out how to specify the size of the container?
 
I managed to create a CT using this command:

Code:
pct create 106 /var/lib/vz/template/cache/devmoodle.tar.gz --rootfs local-ZFS:50 --ostype unmanaged

Got this output:

Code:
extracting archive '/var/lib/vz/template/cache/devmoodle.tar.gz'
Total bytes read: 14181140480 (14GiB, 126MiB/s)
Architecture detection failed: open '/bin/sh' failed: No such file or directory

Falling back to amd64.
Use `pct set VMID --arch ARCH` to change.

But it does not start. The error is the following:

Code:
● pve-container@106.service - PVE LXC Container: 106
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-04-15 23:21:39 CEST; 19s ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 3606 ExecStart=/usr/bin/lxc-start -n 106 (code=exited, status=1/FAILURE)

Apr 15 23:21:38 ayb-htz-fsn1-dc6-main01 systemd[1]: Starting PVE LXC Container: 106...
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 lxc-start[3606]: lxc-start: 106: lxccontainer.c: wait_on_daemonized_start: 87
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 lxc-start[3606]: lxc-start: 106: tools/lxc_start.c: main: 329 The container f
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 lxc-start[3606]: lxc-start: 106: tools/lxc_start.c: main: 332 To get more det
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 lxc-start[3606]: lxc-start: 106: tools/lxc_start.c: main: 335 Additional info
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 systemd[1]: pve-container@106.service: Control process exited, code=exited, s
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 systemd[1]: pve-container@106.service: Killing process 3617 (lxc-start) with
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 systemd[1]: pve-container@106.service: Failed with result 'exit-code'.
Apr 15 23:21:39 ayb-htz-fsn1-dc6-main01 systemd[1]: Failed to start PVE LXC Container: 106.

What is wrong?
 
It seems that part of the lines in your screendump are chopped off.
Try using
Code:
systemctl status -l pve-container@106.service
instead.
 
Yes, it was chopped a little bit. The result of systemctl status -l pve-container@106.service is this:

Code:
root@ayb-htz-fsn1-dc6-main01 ~ # systemctl status -l pve-container@106.service
● pve-container@106.service - PVE LXC Container: 106
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2020-04-16 00:03:33 CEST; 10h ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 8029 ExecStart=/usr/bin/lxc-start -n 106 (code=exited, status=1/FAILURE)

Apr 16 00:03:31 ayb-htz-fsn1-dc6-main01 systemd[1]: Starting PVE LXC Container: 106...
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 lxc-start[8029]: lxc-start: 106: lxccontainer.c: wait_on_daemonized_start: 874 Received container state "ABORTING" instead of "RUNNING"
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 lxc-start[8029]: lxc-start: 106: tools/lxc_start.c: main: 329 The container failed to start
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 lxc-start[8029]: lxc-start: 106: tools/lxc_start.c: main: 332 To get more details, run the container in foreground mode
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 lxc-start[8029]: lxc-start: 106: tools/lxc_start.c: main: 335 Additional information can be obtained by setting the --logfile and --logpriority options
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 systemd[1]: pve-container@106.service: Control process exited, code=exited, status=1/FAILURE
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 systemd[1]: pve-container@106.service: Killing process 8038 (lxc-start) with signal SIGKILL.
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 systemd[1]: pve-container@106.service: Failed with result 'exit-code'.
Apr 16 00:03:33 ayb-htz-fsn1-dc6-main01 systemd[1]: Failed to start PVE LXC Container: 106.
 
How is your tar archive structured? Check out one of the templates provided by us, and then compare yours (e.g., with tar tf <ARCHIVE> | head -n 100).
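The structural difference being hinted at can be reproduced with two toy archives (names are made up): a Proxmox template has the filesystem directly at the archive root, while an LXD export adds metadata.yaml and a rootfs/ prefix:

```shell
# Toy trees standing in for a real container filesystem
mkdir -p demo-lxd/rootfs/bin demo-pve/bin
touch demo-lxd/metadata.yaml demo-lxd/rootfs/bin/sh demo-pve/bin/sh

# LXD-style export: metadata.yaml plus a rootfs/ prefix on every path
tar -czf lxd-style.tar.gz -C demo-lxd metadata.yaml rootfs

# Proxmox-style template: filesystem contents sit directly at the root
tar -czf pve-style.tar.gz -C demo-pve .

tar -tzf lxd-style.tar.gz   # metadata.yaml, rootfs/, rootfs/bin/, rootfs/bin/sh
tar -tzf pve-style.tar.gz   # ./, ./bin/, ./bin/sh
```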
 
They are a little bit different.

This is the structure of your template:

Code:
./etc/appliance.info
./
./bin/
./bin/pidof
./bin/sed
./bin/kill
./bin/ps
./bin/dmesg
./bin/findmnt
./bin/lsblk
./bin/more
./bin/mountpoint
./bin/wdctl
./bin/mount
./bin/umount
./bin/login
./bin/su
./bin/hostname
./bin/dnsdomainname
./bin/domainname
./bin/nisdomainname
./bin/ypdomainname
./bin/gunzip
./bin/gzexe
./bin/gzip
./bin/uncompress
./bin/zcat
./bin/zcmp
./bin/zdiff
./bin/zegrep
./bin/zfgrep
./bin/zforce
./bin/zgrep
./bin/zless
./bin/zmore
./bin/znew
./bin/egrep
./bin/fgrep
./bin/grep
./bin/tar
./bin/dash
./bin/cat
./bin/chgrp
./bin/chmod
./bin/chown
./bin/cp
./bin/date
./bin/dd
./bin/df
./bin/dir
./bin/echo
./bin/false
./bin/ln
./bin/ls
./bin/mkdir
./bin/mknod
./bin/mktemp
./bin/mv
./bin/pwd
./bin/readlink
./bin/rm
./bin/rmdir
./bin/sleep
./bin/stty
./bin/sync
./bin/touch
./bin/true
./bin/uname
./bin/vdir
./bin/run-parts
./bin/tempfile
./bin/which
./bin/bash
./bin/rbash
./bin/sh
./bin/sh.distrib
./bin/bzcat
./bin/whiptail
./bin/bzdiff
./bin/nc.openbsd
./bin/bzfgrep
./bin/ntfs-3g
./bin/bzexe
./bin/ping
./bin/ping4
./bin/ping6
./bin/ntfssecaudit
./bin/static-sh
./bin/systemd-hwdb
./bin/udevadm
./bin/bzgrep
./bin/kmod
./bin/lsmod
./bin/systemd-tmpfiles
./bin/systemd-tty-ask-password-agent
./bin/systemd
./bin/unicode_start
./bin/setupcon
./bin/bzmore
./bin/bzcmp

And this is the one created using the "lxc image export" command:

Code:
metadata.yaml
rootfs
rootfs/bin
rootfs/bin/bash
rootfs/bin/btrfs
rootfs/bin/btrfs-debug-tree
rootfs/bin/btrfs-find-root
rootfs/bin/btrfs-image
rootfs/bin/btrfs-map-logical
rootfs/bin/btrfs-select-super
rootfs/bin/btrfs-zero-log
rootfs/bin/btrfsck
rootfs/bin/btrfstune
rootfs/bin/bunzip2
rootfs/bin/busybox
rootfs/bin/bzcat
rootfs/bin/bzcmp
rootfs/bin/bzdiff
rootfs/bin/bzegrep
rootfs/bin/bzexe
rootfs/bin/bzfgrep
rootfs/bin/bzgrep
rootfs/bin/bzip2
rootfs/bin/bzip2recover
rootfs/bin/bzless
rootfs/bin/bzmore
rootfs/bin/cat
rootfs/bin/chacl
rootfs/bin/chgrp
rootfs/bin/chmod
rootfs/bin/chown
rootfs/bin/chvt
rootfs/bin/cp
rootfs/bin/cpio
rootfs/bin/dash
rootfs/bin/date
rootfs/bin/dd
rootfs/bin/df
rootfs/bin/dir
rootfs/bin/dmesg
rootfs/bin/dnsdomainname
rootfs/bin/domainname
rootfs/bin/dumpkeys
rootfs/bin/echo
rootfs/bin/ed
rootfs/bin/egrep
rootfs/bin/false
rootfs/bin/fgconsole
rootfs/bin/fgrep
rootfs/bin/findmnt
rootfs/bin/fsck.btrfs
rootfs/bin/fuser
rootfs/bin/fusermount
rootfs/bin/getfacl
rootfs/bin/grep
rootfs/bin/gunzip
rootfs/bin/gzexe
rootfs/bin/gzip
rootfs/bin/hostname
rootfs/bin/ip
rootfs/bin/journalctl
rootfs/bin/kbd_mode
rootfs/bin/kill
rootfs/bin/kmod
rootfs/bin/less
rootfs/bin/lessecho
rootfs/bin/lessfile
rootfs/bin/lesskey
rootfs/bin/lesspipe
rootfs/bin/ln
rootfs/bin/loadkeys
rootfs/bin/login
rootfs/bin/loginctl
rootfs/bin/lowntfs-3g
rootfs/bin/ls
rootfs/bin/lsblk
rootfs/bin/lsmod
rootfs/bin/mkdir
rootfs/bin/mkfs.btrfs
rootfs/bin/mknod
rootfs/bin/mktemp
rootfs/bin/more
rootfs/bin/mount
rootfs/bin/mountpoint
rootfs/bin/mt
rootfs/bin/mt-gnu
rootfs/bin/mv
rootfs/bin/nano
rootfs/bin/nc
rootfs/bin/nc.openbsd
rootfs/bin/netcat
rootfs/bin/netstat
rootfs/bin/networkctl
rootfs/bin/nisdomainname
rootfs/bin/ntfs-3g
rootfs/bin/ntfs-3g.probe
rootfs/bin/ntfscat
rootfs/bin/ntfscluster
rootfs/bin/ntfscmp
rootfs/bin/ntfsfallocate

How can I convert mine to the correct format? Or how can I convert a live LXD container into a Proxmox template?
 
The main reason the "import" fails is that the whole file system is under an extra directory.
It might be enough to just extract it and re-archive only the rootfs contents. I'd also recommend removing any dev/ contents from the image, as those may fail to extract if you're using unprivileged containers.
Code:
~ # mkdir conversion-temp-dir
~ # cd conversion-temp-dir
~/conversion-temp-dir # tar xpf /path/to/lxd-image.tar.gz
~/conversion-temp-dir # ls -ld rootfs
<make sure this shows a directory owned by root, not some unknown user>
~/conversion-temp-dir # rm -rf ./rootfs/dev/
~/conversion-temp-dir # tar cpzf /path/to/desired-pmx-image.tar.gz -C rootfs/ .
~/conversion-temp-dir # cd ..
~ # rm -rf conversion-temp-dir
But I haven't tested it - in theory, unmanaged mode should not be necessary if it's a distribution type we support.
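The recipe above can be sanity-checked end to end with a dummy archive (every name below is made up), verifying that after the repack /bin/sh sits at the archive root and dev/ is gone:

```shell
# Fabricate a small LXD-style image to stand in for the real export
mkdir -p fake-image/rootfs/bin fake-image/rootfs/dev
touch fake-image/metadata.yaml fake-image/rootfs/bin/sh fake-image/rootfs/dev/null
tar -czf lxd-image.tar.gz -C fake-image metadata.yaml rootfs

# The conversion: unpack, drop dev/, re-archive only the rootfs contents
mkdir conversion-temp-dir
tar -xzpf lxd-image.tar.gz -C conversion-temp-dir
rm -rf conversion-temp-dir/rootfs/dev
tar -czpf pmx-image.tar.gz -C conversion-temp-dir/rootfs .
rm -rf conversion-temp-dir fake-image

tar -tzf pmx-image.tar.gz   # ./, ./bin/, ./bin/sh -- rootfs/ prefix and dev/ are gone
```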
 