PTY issue with VE 4 containers (converted from VE 3)

gimpbully

Renowned Member
Aug 7, 2015
Upgraded a machine to VE 4. Prior to the upgrade, I took backups of all OpenVZ containers, and I'm now restoring them on VE 4 with pct restore. Most went fine, but a few give me this when I attempt to SSH into the container:
root@usenet's password:
PTY allocation request failed on channel 0
Linux usenet 4.1.3-1-pve #1 SMP Thu Jul 30 08:54:37 CEST 2015 i686


The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.


Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
You have mail.
stdin: is not a tty

----snip-----
It just hangs there indefinitely. Furthermore, I get no output when opening a console via the GUI (pct enter from the host works just fine). Any ideas?
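For reference, a quick way to check whether the container has a working devpts mount at all; container ID 101 is just an example here:

pct enter 101
ls /dev/pts            # stays empty when no ptys can be allocated
mount | grep devpts    # a devpts mount should show up here
exit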
 
lxc.pts=1

in my config fixed it.

Edit: not entirely; the console from the GUI still shows nothing.
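For anyone wanting to try the same workaround, a rough sketch; the container ID and the raw config path are assumptions here, and the config location varies between PVE 4 builds:

echo 'lxc.pts = 1' >> /etc/pve/lxc/101/config
pct stop 101 && pct start 101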
 
lxc.pts=1 in my config fixed it ...

Are you sure? I ask because the default is 'lxc.pts = 1024', and the docs claim the actual value is not used at all.
 
Yep, did several tests back and forth. Without the value in the config, I get the SSH issue. I still have yet to get a console via the GUI. It's weird: I did a migration the other day without any issues (just tar'd up the VZ containers and restored them on the new install), but this migration simply wouldn't accept backups made that way; it complained about a missing config in the archive. I could only use Proxmox-generated backups (taken via the GUI and restored with pct).
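For the record, the path that did work, sketched with placeholder names (the archive filename is hypothetical; real vzdump archives carry a timestamp):

# on the old VE 3 node
vzdump 101 --compress gzip --dumpdir /backups
# copy the archive over, then on the VE 4 node:
pct restore 101 /backups/vzdump-openvz-101.tar.gz --storage datastore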
 
Setting it to anything above '0' allows me to log in via SSH; even 1 supports multiple SSH sessions.
Omitting the setting produces the same behavior as setting it to 0.
If you want me to test any specific config, please let me know; happy to give it a whirl.
 
lxc.arch = i386
lxc.cgroup.cpu.cfs_period_us = 100000
lxc.cgroup.cpu.cfs_quota_us = 100000
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.memory.limit_in_bytes = 4294967296
lxc.cgroup.memory.memsw.limit_in_bytes = 5368709120
lxc.rootfs = loop:/storage/datastore/images/101/vm-101-rootfs.raw
lxc.utsname = usenet
pve.comment = usenet
pve.disksize = 20
pve.nameserver = 10.0.0.1
pve.onboot = 1
pve.searchdomain = local
pve.volid = datastore:101/vm-101-rootfs.raw
lxc.network.type = veth
pve.network.bridge = vmbr0
pve.network.gw = 10.0.0.1
lxc.network.hwaddr = 1E:EE:10:9A:ED:7A
pve.network.ip = 10.0.0.51/24
lxc.network.name = eth0
pve.network.tag = 1
lxc.network.veth.pair = veth101.0
# more than 1 mount.entry not supported yet
#lxc.mount.entry = /storage storage none bind 0 0
lxc.mount.entry = /storage/Uploads storage/Uploads none bind,create=dir 0 0
lxc.pts=1024
 
That does seem to have done the trick. Not sure how that entry went missing. I only added the mount and pts entries; the rest was generated by the pct restore process.
 
In addition to this topic: if you get the error "Server refused to allocate pty" when connecting to the container over SSH, try the following. From the host node, enter the container with "pct enter [container-ID]". Then edit /etc/rc.sysinit and look for this line: /sbin/start_udev
Remove that line, restart the container, and you should be good to go!
This problem occurred for me when migrating OpenVZ templates based on CentOS 6 to Proxmox VE 4.

Regards,
Ruben
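A non-interactive variant of the same fix, as a sketch (container ID 101 assumed, and this presumes your pct build has the exec subcommand; it simply deletes the start_udev line):

pct exec 101 -- sed -i '\|/sbin/start_udev|d' /etc/rc.sysinit
pct stop 101 && pct start 101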
 
... Then edit /etc/rc.sysinit and look for this line: /sbin/start_udev ...

You saved my day!

THANK YOU VERY MUCH FOR THIS HINT!!!:D
 
... Then edit /etc/rc.sysinit and look for this line: /sbin/start_udev ...

You sir, are a savior.
 
... Then edit /etc/rc.sysinit and look for this line: /sbin/start_udev ...

This worked for me. Thanks for posting the fix!
 
... Then edit /etc/rc.sysinit and look for this line: /sbin/start_udev ...
Added to https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC#PTY_Allocation
Thanks for sharing, Ruben!
 
This does work, but be aware: as soon as you run yum and it updates any networking packages, it puts the line back in:

/sbin/start_udev
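If yum keeps restoring it, a quick check to run after updates; just a sketch that deletes the line again if it reappeared:

grep -q /sbin/start_udev /etc/rc.sysinit && sed -i '\|/sbin/start_udev|d' /etc/rc.sysinit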
 
