Upgrading containers from 3.4 to 7.1, possible?

PicklesTheOtt
Jun 10, 2022
I have an old, OLD v3.4 cluster with three separate email servers set up as containers. They've worked great for ages. Now I've built a new cluster running v7.1. I was able to restore all the VMs from backups with no problem, but the containers are not on the network.

I followed the instructions at https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC to make sure I was converting the containers correctly. The end of the tutorial says "voila," but I get "nada." I add a network adapter with an address of 192.168.1.65/24 and try to ping it, but get no reply. When I connect to the container using "pct enter 1110" (the console doesn't work, but I'm sure that's a different problem) and run "ip a" I get:
1: lo: <LOOPBACK> mtu 65536 qdisc noop qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if14: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop qlen 1000
    link/ether 7e:c5:fa:a6:cc:8e brd ff:ff:ff:ff:ff:ff
No listing for the adapter I added, whether from the command line or the GUI.
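For reference, adding the adapter from the command line looks roughly like this; the bridge name and gateway below are assumptions, so adjust them to your setup:

    # on the Proxmox host: attach a veth NIC to container 1110
    # (bridge "vmbr0" and gateway 192.168.1.1 are placeholders)
    pct set 1110 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.65/24,gw=192.168.1.1

    # confirm the device shows up in the container config
    pct config 1110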

Is there a step I'm missing? Or is this some other compatibility issue?
 
hi,

* which distro is running in the containers?

* can you post the container's config file ("pct config 1110")?

* does it work after running "ip link set eth0 up" and/or running a DHCP client?
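For a quick manual test from inside the container, something along these lines (static addressing shown as an example, with a placeholder gateway, so adjust to your subnet):

    # inside the container (pct enter 1110)
    ip link set eth0 up                    # bring the interface up
    ip addr add 192.168.1.65/24 dev eth0   # or run a DHCP client instead, e.g. dhclient eth0
    ip route add default via 192.168.1.1   # placeholder gateway
    ping -c 3 192.168.1.1                  # basic connectivity check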
 
The containers are running CentOS 5.8 (vintage stuff).
The config for 1110 is:
arch: i386
cpulimit: 2
cpuunits: 1024
hostname: mail.domain.com
memory: 1024
net0: name=eth0,bridge=vmbr0,gw=192.168.1.181,hwaddr=92:68:AE:CA:4B:1A,ip=192.168.1.82/24,type=veth
ostype: centos
rootfs: Ceph-Storage:vm-1110-disk-0,size=40G
swap: 1536

In the container, I ran "ifup eth0" and the network adapter appeared to come up. Here are the contents of /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
UUID=92170944-ebd9-11ec-ac7c-6805ca059c0e
BOOTPROTO=none
IPADDR=192.168.1.82
NETMASK=255.255.255.0
GATEWAY=192.168.1.181
DNS1=192.168.1.181
DOMAIN=domain.com

But, for some reason, there's no default route. The only routes are:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
169.254.0.0     *               255.255.0.0     U     0      0        0 eth0
192.168.1.0     *               255.255.255.0   U     0      0        0 eth0

I can manually add a default gateway, but it isn't retained across reboots. The network adapter does come up properly after a reboot, but again, there's no default route.
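One thing worth trying on CentOS 5 is declaring the gateway system-wide in /etc/sysconfig/network, which the legacy network init script reads at startup. A sketch using the addresses from the config above (no guarantee that Proxmox's container setup won't rewrite the file when the container starts):

    # /etc/sysconfig/network inside the container
    NETWORKING=yes
    HOSTNAME=mail.domain.com
    GATEWAY=192.168.1.181

    # then re-run the legacy init script
    service network restart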
 
Just wanted to thank you guys for your help. Moving to proper VMs is the ultimate goal, but in the meantime I was able to get the containers working in a mostly normal state.

I booted the containers, then logged in using "pct enter <container>" and was able to start the network interface and services manually. After that, if I did a graceful shutdown, the container would come back up just fine. If it was stopped, however (either manually or because the node failed), I needed to repeat the manual steps to get it working again. Not a perfect solution, but it let me get everything running until I can do a full migration.
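For anyone stuck in the same spot, that manual fix-up can also be scripted from the host with "pct exec". A rough sketch only; the sleep, the route command, and the final status check are guesses, so adapt them to your containers:

    #!/bin/sh
    # hypothetical host-side helper: bring container 1110's network up after a cold start
    CTID=1110
    pct status "$CTID" | grep -q running || pct start "$CTID"
    sleep 5                                     # give the container's init a moment
    pct exec "$CTID" -- ifup eth0               # bring up the legacy CentOS network config
    pct exec "$CTID" -- sh -c 'ip route add default via 192.168.1.181 || true'
    pct exec "$CTID" -- service network status  # sanity check; start the mail services here if needed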
 
