A note to folks who might find this ticket - this ended up being an MTU issue that seems to have been triggered by stricter behavior in the upgraded software. Prior to the 7 upgrade, we were setting the underlying bond's physical interfaces to a 9k MTU via this sort of syntax under the bond...
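For anyone wanting a concrete picture, one explicit way to get the bond members up to a 9k MTU in /etc/network/interfaces looks roughly like the example below. The interface and bridge names are invented and this is just an illustration of the idea, not our exact config:

auto eno1
iface eno1 inet manual
        mtu 9000

auto eno2
iface eno2 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        ovs_bridge vmbr1
        ovs_type OVSBond
        ovs_bonds eno1 eno2
        # how the bond/bridge stanzas themselves pick up the 9k MTU varies
        # between ifupdown versions, so check that piece against your own setup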
We've hit a pretty strange issue after doing a 6-to-7 upgrade on a system using openvswitch and a ceph rbd pool defined for an external ceph cluster.
Upon boot, for a brief second or two, the ceph storage is available but as soon as VMs start bringing up their vswitch ports, the ceph storage is...
It really does seem to be a timeout thing. Is there a way to manually change the timeout for the GUI or qm startup?
real 1m50.741s
user 0m0.044s
sys 0m0.004s
I have a proxmox node with a Mellanox ConnectX-3 card set up for SR-IOV and VF interfaces. Works fine for most cases, I can pass a VF each to several VMs, but once I configure a VM with more than 16GB of RAM, the VM times out when trying to start via the GUI or the qm command. Curiously, it does...
Took a backup of a container running on a dir storage type and tried to restore it to an lvm type (all via the GUI):
Use of uninitialized value $disksize in division (/) at /usr/share/perl5/PVE/API2/LXC.pm line 265.
TASK ERROR: unable to detect disk size - please specify rootfs (size)
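In case it helps anyone searching for this later, the workaround that error message points at is presumably to hand the rootfs size to pct restore yourself, along these lines (the storage name, archive path, and 8G size are all placeholders, so treat it as a sketch rather than the exact command):

pct restore 100 /var/lib/vz/dump/vzdump-lxc-100.tar.gz --storage my-lvm --rootfs my-lvm:8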
Updated from the last beta to the stable ve 4 and got the following on trying to start any of my containers:
root@eduardo:~# pct start 100
vm 100 - unable to parse value of 'net0' - format error at /usr/share/perl5/PVE/JSONSchema.pm line 529
tag: value must have a minimum value of 2
format...
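For context, the net0 value it's choking on is the usual comma-separated key=value string in the container config, something like the line below (all values invented for illustration); given the schema message above, a tag below 2 is what trips it:

net0: name=eth0,bridge=vmbr0,hwaddr=DE:AD:BE:EF:11:22,ip=dhcp,tag=1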
Just upgraded from ve4b1 to ve4b2. I had scheduled backups already set up, but ran a manual backup test. The container has a mountpoint defined, as you'll see, and the backup gave me this:
INFO: starting new backup job: vzdump 100 --node eduardo --remove 0 --storage datastore --mode snapshot --compress gzip
INFO...
That does seem to have done the trick. Not sure how that was absent. I only added the mount and pts entries, the rest were generated by the pct restore process.
Setting it to anything above '0' allows me to log in via ssh. Even 1 supports multiple ssh sessions.
Omitting the setting produces the same behavior as setting it to 0.
If you want me to test any specific config, please let me know, happy to give it a whirl.
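For anyone else hitting this, the entries being talked about are roughly of this shape (illustrative values rather than a verbatim copy of the config; going by the 'pts' wording above, the setting in question is presumably lxc.pts, which just needs to be non-zero for PTY allocation, and therefore ssh, to work):

# illustrative values only - lxc.pts just needs to be non-zero
lxc.pts = 1024
lxc.mount.entry = devpts dev/pts devpts gid=5,mode=620,ptmxmode=666 0 0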
Yep, did several tests back and forth. Without the value in the config, I get the ssh issue. I still have yet to get a console via GUI. It's weird, I did a migration the other day without any issues (just tar'd up the vz containers and restored them on the new install). But this migration...
Upgraded a machine to ve4. Prior to the upgrade, I took backups of all my openvz containers and am now trying to restore them in ve4 with pct restore. Most went fine, but a few give me this when attempting to ssh to the container:
root@usenet's password:
PTY allocation request failed on channel 0
Linux...
I've got an nvidia card on my host. I recently reinstalled with ve4 and am getting the following on boot:
[the screen changes from white-on-black to white-on-green]:
[ 58.599324] nvidiafb: unable to setup MTRR
It just locks up here. Removing the card and booting works fine.
What is the 'proxmox way' of adding a tun device to an lxc container on boot?
I tried putting "lxc.cgroup.devices.allow = c 10:200 rwm" in the config for the container, but the gui was pretty upset about that, said it was an invalid key. I currently have a mknod stuffed in the openvpn init...
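For reference, the generic (non-Proxmox) LXC recipe for exposing /dev/net/tun pairs that allow rule with a bind mount, along these lines - whether the PVE tooling will accept these raw keys is exactly what I'm unsure about:

lxc.cgroup.devices.allow = c 10:200 rwm
# create=dir makes sure dev/net exists inside the container before the bind mount
lxc.mount.entry = /dev/net dev/net none bind,create=dir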
More->Migrate All VMs appears to be a root-only operation (while individual VM migrations can be given to individual users). Is this a bug or a non-implemented feature? If unimplemented, is there a timeline for implementation?
I actually got just a bit further. Using lxc-start -F, I was able to wedge an ifcfg-eth0 onto the container and enable sshd, so I can confirm that starting via the GUI does start the container and all networking works, but I'm still not getting any console.
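For anyone curious, 'wedging an ifcfg-eth0 onto the container' just means dropping a minimal /etc/sysconfig/network-scripts/ifcfg-eth0 into place - something like the three lines below (illustrative, not necessarily the exact file):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp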
Scientific Linux release 7.1 (Nitrogen)
I got a little further and messed with /usr/share/perl5/PVE/LXCSetup/Redhat.pm to allow for 7.1. I've got it starting and letting me log into the container with 'lxc-start -F --name 100', but I can't get anything going via the GUI (I can start it, the...