SolusVM to Proxmox 5.3-12

TechSupportFI

New Member
Mar 29, 2019
Hi,

I read great things about Proxmox, so I decided to go for it. I paid for a subscription and installed it. My issues right now are the following:

1- I can only restore the vzdumps with the option --ostype unmanaged; otherwise the restore fails with "unable to restore CT xxx - unable to detect OS distribution". (The only command that works for me is sketched after this list.)

2- After restoring, the containers won't start (service status and journal output below).
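
For reference, the only restore that succeeds looks something like this (the CT ID, archive name, and storage are examples):

Code:
pct restore 116 vzdump-116.tar --storage local --ostype unmanaged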

Code:
systemctl status pve-container@116.service
● pve-container@116.service - PVE LXC Container: 116
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2019-03-29 21:41:31 UTC; 10min ago
     Docs: man:lxc-start
           man:lxc
           man:pct
  Process: 29579 ExecStart=/usr/bin/lxc-start -n 116 (code=exited, status=1/FAILURE)

Mar 29 21:41:31 ns505630 lxc-start[29579]: lxc-start: 116: lxccontainer.c: wait_on_daemonized_start: 865 Received container state "ABORTING" instead of "RUNNING"
Mar 29 21:41:31 ns505630 lxc-start[29579]: lxc-start: 116: tools/lxc_start.c: main: 330 The container failed to start
Mar 29 21:41:31 ns505630 lxc-start[29579]: lxc-start: 116: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
Mar 29 21:41:31 ns505630 lxc-start[29579]: lxc-start: 116: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
Mar 29 21:41:31 ns505630 systemd[1]: pve-container@116.service: Control process exited, code=exited status=1
Mar 29 21:41:31 ns505630 systemd[1]: pve-container@116.service: Killing process 29583 (3) with signal SIGKILL.
Mar 29 21:41:31 ns505630 systemd[1]: pve-container@116.service: Killing process 29618 (3) with signal SIGKILL.
Mar 29 21:41:31 ns505630 systemd[1]: Failed to start PVE LXC Container: 116.
Mar 29 21:41:31 ns505630 systemd[1]: pve-container@116.service: Unit entered failed state.
Mar 29 21:41:31 ns505630 systemd[1]: pve-container@116.service: Failed with result 'exit-code'.

and

Code:
 journalctl -xe
-- Unit pve-container@116.service has begun starting up.
Mar 29 22:01:36 ns505630 audit[20613]: AVC apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="/usr/bin/lxc-start" name="lxc-116_</var/lib/lxc>" pid=20613 comm="apparmor_parser
Mar 29 22:01:36 ns505630 kernel: audit: type=1400 audit(1553896896.072:20): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="/usr/bin/lxc-start" name="lxc-116_</var/lib/lxc>
Mar 29 22:01:36 ns505630 lxc-start[20602]: lxc-start: 116: lxccontainer.c: wait_on_daemonized_start: 865 Received container state "ABORTING" instead of "RUNNING"
Mar 29 22:01:36 ns505630 lxc-start[20602]: lxc-start: 116: tools/lxc_start.c: main: 330 The container failed to start
Mar 29 22:01:36 ns505630 lxc-start[20602]: lxc-start: 116: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
Mar 29 22:01:36 ns505630 lxc-start[20602]: lxc-start: 116: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
Mar 29 22:01:36 ns505630 systemd[1]: pve-container@116.service: Control process exited, code=exited status=1
Mar 29 22:01:36 ns505630 systemd[1]: pve-container@116.service: Killing process 20606 (3) with signal SIGKILL.
Mar 29 22:01:36 ns505630 systemd[1]: pve-container@116.service: Killing process 20654 (3) with signal SIGKILL.
Mar 29 22:01:36 ns505630 systemd[1]: Failed to start PVE LXC Container: 116.
-- Subject: Unit pve-container@116.service has failed
-- Defined-By: systemd
--
-- Unit pve-container@116.service has failed.
--
-- The result is failed.
Mar 29 22:01:36 ns505630 systemd[1]: pve-container@116.service: Unit entered failed state.
Mar 29 22:01:36 ns505630 systemd[1]: pve-container@116.service: Failed with result 'exit-code'.
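
The log suggests running the container in the foreground for more details; I assume that would be something like this (the log path is an example):

Code:
lxc-start -n 116 -F -l DEBUG -o /tmp/lxc-116.log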


The backups fail with CentOS and Debian (7 and 8).

Could you help?
 
Hi, I've migrated about 80 containers from SolusVM to Proxmox.

First I converted them from ploop to simfs (ploop is the reason Proxmox cannot detect the OS) and made a dump with vzdump, as sketched below.
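
On the SolusVM/OpenVZ side the dump step is plain vzdump; a minimal sketch (the CT ID and dump directory are examples):

Code:
vzdump --compress --dumpdir /root/ 116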

Then I imported them into Proxmox with a script like this.

Code:
# $1 is the CT ID; $hostname and $ip must be set before running
pct restore $1 vzdump-$1.tar --storage local-zfs --arch amd64 --hostname $hostname --ostype ubuntu --cores 1 --memory 8192 --swap 2048 --nameserver 8.8.8.8,8.8.4.4 --searchdomain domain.net --onboot yes
pct set $1 --net0 name=eth0,bridge=vmbr1,ip=$ip/25,gw=x.x.x.x
# mount the CT filesystem and drop the old OpenVZ network/fstab configuration
pct mount $1
rm /var/lib/lxc/$1/rootfs/etc/network/interfaces
echo "# UNCONFIGURED FSTAB FOR BASE SYSTEM" > /var/lib/lxc/$1/rootfs/etc/fstab
pct unmount $1
pct start $1
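
Saved as, say, import.sh (a hypothetical name) with the hostname and ip variables filled in, the wrapper is invoked once per container:

Code:
./import.sh 116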

The CTs worked perfectly, but replication was slow (in our case): https://forum.proxmox.com/threads/zfs-slow-replication.52325/

The solution was to convert them to unprivileged containers via a backup and restore. You can avoid that extra step by restoring directly as unprivileged (if you want unprivileged containers) using:

Code:
pct restore $1 $fichero --ignore-unpack-errors 1 --unprivileged
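
The --ignore-unpack-errors flag is needed because restoring a privileged dump into an unprivileged container fails on entries like device nodes. For an already-restored privileged CT, the backup-and-restore conversion would look roughly like this (a sketch; the real dump filename includes a timestamp):

Code:
vzdump 116 --compress lzo --dumpdir /root
pct destroy 116
pct restore 116 /root/vzdump-lxc-116.tar.lzo --ignore-unpack-errors 1 --unprivileged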

Hope this can help you.
 
Hi Jota, thank you for your answer.

What if I wasn't smart, only have the one server, and the only thing I have now is the original ploop backups? Is there anything I can do?
 
I can think of two things:

1) Try to restore your vzdump with my previous commands but with --ostype unmanaged, then do the other steps (mount the filesystem, delete the network config, and replace the fstab file) and try to boot; a sketch follows this list.

2) Virtualize a SolusVM master on Proxmox, restore the vzdump there, convert it to simfs again, and make a correct vzdump.
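
A minimal sketch of option 1 (the CT ID and storage are examples):

Code:
pct restore 116 vzdump-116.tar --storage local --arch amd64 --ostype unmanaged
pct mount 116
rm /var/lib/lxc/116/rootfs/etc/network/interfaces
echo "# UNCONFIGURED FSTAB FOR BASE SYSTEM" > /var/lib/lxc/116/rootfs/etc/fstab
pct unmount 116
pct start 116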
 
Thank you, I will try this. I had a look at conversion scripts from ploop to simfs and they seemed pretty old. Will they still work?

Like this one:

Code:
#!/bin/sh
# ./convert_ploop_to_simfs.sh VEID
# chmod +x convert_ploop_to_simfs.sh
rsync_options='-aHAX --progress --stats --numeric-ids --delete'
partition='vz'
if [ ! -e /etc/vz/conf/$1.conf ]; then
        echo "Virtual server configuration file: /etc/vz/conf/$1.conf does not exist."
        exit 1
fi
if [ ! -d /$partition/private/$1/root.hdd ]; then
        echo "Server does not have a ploop device"
        exit 1
fi
if [ ! -d /$partition/private/$1 ]; then
        echo "Server does not exist"
        exit 1
fi
# Get disk space in G of current VPS
#disk=`vzctl exec $1 df -BG | grep ploop | awk {'print $2'} | head -n1`
#if [ ! $disk ]; then
#        echo "Could not retrieve disk space figure.  Is VPS running?"
#        exit 1
#fi
# Create and mount file system
mkdir -p /$partition/private/1000$1/
#ploop init -s $disk /$partition/private/1000$1/root.hdd/root.hdd
cp /etc/vz/conf/$1.conf /etc/vz/conf/1000$1.conf
cd /vz
vzctl mount 1000$1
# Rsync over files (sync 1)
rsync $rsync_options /$partition/root/$1/. /$partition/private/1000$1/
# Stop primary, mount, sync final
vzctl stop $1
vzctl mount $1
rsync $rsync_options /$partition/root/$1/. /$partition/private/1000$1/
vzctl umount $1
vzctl umount 1000$1
mv /$partition/private/$1 /$partition/private/$1.backup
mv /$partition/private/1000$1 /$partition/private/$1
vzctl start $1
# Cleanup
rm -f /etc/vz/conf/1000$1.conf
rmdir /vz/root/1000$1
# Verification
verify=`vzlist -H -o status $1`
if [ "$verify" = "running" ]; then
        echo "Virtual server conversion successful.  Verify manually then run: rm -Rf /$partition/private/$1.backup to remove backup."
else
        echo "Server conversion was not successful..Reverting.."
        mv -f /$partition/private/$1 /$partition/private/$1.fail
        mv /$partition/private/$1.backup /$partition/private/$1
        vzctl start $1
fi
 
Have you first tried deleting the network config file, modifying the fstab file, and then trying to boot?

My script to convert to simfs is based on your script. Here it is, with small changes.

Code:
#!/bin/sh
# ./convert_ploop_to_simfs.sh CTID
# chmod +x ./convert_ploop_to_simfs.sh

# Check parameters
if [ $# -eq 0 ]; then
    echo "No arguments supplied. Usage $0 ctid"
    exit 1
fi

# Options
rsync_options='-aHAX --progress --stats --numeric-ids --delete'
partition='vz'

# Check needed files
if [ ! -e /etc/vz/conf/$1.conf ]; then
        echo "Virtual server configuration file: /etc/vz/conf/$1.conf does not exist."
        exit 1
fi
if [ ! -d /$partition/private/$1/root.hdd ]; then
        echo "Server does not have a ploop device"
        exit 1
fi
if [ ! -d /$partition/private/$1 ]; then
        echo "Server does not exist"
        exit 1
fi

# Get disk space (in 1K blocks) of the current VPS
disk=`/usr/sbin/vzctl exec $1 df | /bin/grep ploop | /bin/awk {'print $2'} | /usr/bin/head -n1`
if [ -z "$disk" ]; then
        echo "Could not retrieve disk space figure. Is VPS running?"
        exit 1
fi

# Create and mount file system
mkdir -p /$partition/private/1000$1/

# Stop primary, backup, mount, sync and umount
/usr/sbin/vzctl stop $1
# /usr/sbin/vzdump --compress --dumpdir /root/ $1
/usr/sbin/vzctl mount $1
/usr/bin/rsync $rsync_options /$partition/root/$1/. /$partition/private/1000$1/
/usr/sbin/vzctl umount $1

# Swap partitions
mv /$partition/private/$1 /$partition/private/$1.backup
mv /$partition/private/1000$1 /$partition/private/$1

# Make changes in config file
/bin/sed -i 's/ploop/simfs/g' /etc/vz/conf/$1.conf
/usr/sbin/vzctl set $1 --diskspace $disk:$disk --diskinodes $((disk*200000)):$((disk*200000)) --save

# Start VM
/usr/sbin/vzctl start $1

# Check if running
verify=`/usr/sbin/vzlist -H -o status $1`
if [ "$verify" = "running" ]; then
    echo "Virtual server conversion successful.  Verify manually then run: rm -Rf /$partition/private/$1.backup to remove backup."
    exit 0
else
    echo "Server conversion was not successful.. Reverting.."
    mv -f /$partition/private/$1 /$partition/private/$1.fail
    mv /$partition/private/$1.backup /$partition/private/$1
    /bin/sed -i 's/simfs/ploop/g' /etc/vz/conf/$1.conf
    /usr/sbin/vzctl start $1
    exit 1
fi
 
Thanks, I'll give this a try.

**Edit: The script works great. I'm going to need to restore the whole SolusVM setup, convert, and then dump everything again before reinstalling Proxmox. Thanks for the help, I'll be back on my next fail :D
 

Hope you do not have more problems ;-)
 
