Upgrading Ubuntu on LXC containers - problems, guidelines?

wipiton

New Member
Aug 22, 2018
Hello, everyone!

We have plans to upgrade the Ubuntu version on a bunch of LXC containers, first from 14.04 to 16.04, and after that maybe to 18.04.

Is do-release-upgrade enough? I've seen several threads here describing bad consequences with it (the container doesn't start afterwards).

We'd like to avoid such troubles. Are there any guidelines on how to do this properly? If there are, please share a link.

Thanks!
 
All Debian-based systems can be (dist-)upgraded to the next version, although the smoothest transitions I have experienced were from one Debian release to the next. We always had problems with Ubuntu - virtualised or not. Some were so bad that we decided to abandon Ubuntu, go with Debian (despite its older software), and we never went back. Ubuntu LTS is a joke, because most of the software you want is not in the LTS repository but in multiverse, which is not LTS-backed. If you want real LTS, use RHEL or Oracle Linux ... or even SLES.

Do a lot of test migrations from one version to the next in order to get a feeling for what might break and how to work around it. I'd just use clones of the real machines and test with those (see the sketch below). We cannot give any advice on what could go wrong or not, since that depends on your software choices.
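If it helps, a minimal sketch of that clone-and-test approach on Proxmox VE - the VMIDs, storage names, and the dump filename below are placeholders, adjust them to your setup - is to back up the container and restore it under a new, unused ID:

# Back up the source container (VMID 131 used here as an example) in snapshot mode
vzdump 131 --mode snapshot --compress lzo --storage local
# Restore the dump under a new, unused VMID for the test upgrade
pct restore 9131 /var/lib/vz/dump/vzdump-lxc-131-<timestamp>.tar.lzo --storage local-zfs
# Run do-release-upgrade inside the test container only, and see what breaks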
 
Thank you for your reply!

We've been using Ubuntu on our servers everywhere for a long time and have never had any critical problems with it. Migrating everything from Ubuntu to some other Linux would be a real pain.

As far as I can judge from [https://forum.proxmox.com/threads/container-doesnt-start-after-upgrade.46255], for example, there are some critical issues with Proxmox support for upgraded Ubuntu containers. And the absence of solution proposals concerns me most.
 

If you run an up-to-date Proxmox VE, there are no issues. I assume you forgot to update your host. Please post your pveversion -v output.
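For reference, a minimal sketch of updating the host (assuming an appropriate Proxmox package repository is already configured):

apt update
apt dist-upgrade
# Verify the installed versions afterwards
pveversion -v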
 
Hello,
I have issues with exactly this upgrade. After do-release-upgrade from Ubuntu 14.04 to 16.04, the containers refuse to start.
root@p12:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5
pve-kernel-4.15.18-2-pve: 4.15.18-20
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-24
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.1+pve2-1
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-29
pve-container: 2.0-25
pve-docs: 5.2-8
pve-firewall: 3.0-13
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-32
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
 

Please provide detailed error logs and the container config.
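As a hint for collecting those logs: a Proxmox container can be started in the foreground with LXC debug logging, which usually shows why it refuses to start (container 131 used as an example, the log path is arbitrary):

# Start the container in the foreground with verbose LXC logging
lxc-start -n 131 -F -l DEBUG -o /tmp/lxc-131.log
# Inspect the resulting log
less /tmp/lxc-131.log
# Show the Proxmox-side container configuration
cat /etc/pve/lxc/131.conf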
 
Error log from journalctl -xe:

aug 23 22:23:00 p12 systemd-udevd[23409]: Could not generate persistent MAC address for vethI6H4WH: No such file or directory
aug 23 22:23:00 p12 kernel: IPv6: ADDRCONF(NETDEV_UP): veth131i0: link is not ready
aug 23 22:23:01 p12 CRON[21473]: pam_unix(cron:session): session closed for user root
aug 23 22:23:01 p12 CRON[23480]: pam_unix(cron:session): session opened for user root by (uid=0)
aug 23 22:23:01 p12 CRON[23481]: (root) CMD ( sh /etc/zabbix/plugins/iostat_collect.sh /var/tmp/zabbix_iostat 60 >/dev/null 2>&1)
aug 23 22:23:01 p12 kernel: vmbr429: port 5(veth131i0) entered blocking state
aug 23 22:23:01 p12 kernel: vmbr429: port 5(veth131i0) entered disabled state
aug 23 22:23:01 p12 kernel: device veth131i0 entered promiscuous mode
aug 23 22:23:01 p12 systemd[1]: Started Proxmox VE replication runner.

/etc/pve/lxc/131.conf:
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: crm.mydomain.com
memory: 1024
net0: name=eth0,bridge=vmbr429,gw=10.1.1.1,hwaddr=AE:B0:1F:AC:1B:91,ip=10.1.1.10/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: zfs:subvol-131-disk-2,size=48G
swap: 512
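For what it's worth, since the rootfs above is ZFS-backed, a snapshot taken before the release upgrade gives a cheap way back if the container stops starting afterwards - a minimal sketch, assuming the storage supports snapshots (the snapshot name is arbitrary):

# Snapshot container 131 before attempting the upgrade
pct snapshot 131 pre-xenial
# ... run do-release-upgrade inside the container ...
# If the upgraded container refuses to start, roll back:
pct rollback 131 pre-xenial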
 
Hello, everybody!

That's exactly what I was talking about.

Speaking generally, there are supposedly no critical issues with Ubuntu.
But when it comes to the details, there is no solution.

That makes me extremely hesitant about my own case...
 