Search results

  1. PVE 7 upgrade issue with ceph client (external) and openvswitch

    A note to folks who might find this ticket: this ended up being an MTU issue that seems to have been tickled by stricter behavior in the upgraded software. Prior to the 7 upgrade, we were setting the underlying bond phy interfaces to 9k via this sort of syntax under the bond...
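
    A minimal sketch of that bond-member MTU pattern, assuming Debian ifupdown with openvswitch (the interface names eth0/eth1 and the vmbr0 bridge are illustrative, not from the thread):

        allow-vmbr0 bond0
        iface bond0 inet manual
            ovs_bridge vmbr0
            ovs_type OVSBond
            # placeholder member NICs; substitute the real phy interfaces
            ovs_bonds eth0 eth1
            # the "sort of syntax under the bond" the post refers to
            pre-up ( ip link set eth0 mtu 9000 && ip link set eth1 mtu 9000 )

    One plausible reading of the stricter PVE 7 behavior is that the MTU now has to be consistent across the members, the bond, and the bridge, rather than only on the member NICs.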

  2. PVE 7 upgrade issue with ceph client (external) and openvswitch

    We've hit a pretty strange issue after doing a 6-to-7 upgrade on a system using openvswitch and a ceph rbd pool defined for an external ceph cluster. Upon boot, the ceph storage is available for a brief second or two, but as soon as VMs start bringing up their vswitch ports, the ceph storage is...

  3. Greater than 16GB RAM on VM with SR-IOV enabled

    It really does seem to be a timeout thing. Is there a way to manually change the timeout for the GUI or qm startup?

        real    1m50.741s
        user    0m0.044s
        sys     0m0.004s
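
    For reference, later PVE releases expose an explicit start timeout on the CLI, which would be one way to experiment here; a hedged example (VMID 100 and 300 seconds are placeholders):

        # time the start while overriding the default startup timeout
        time qm start 100 --timeout 300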

  4. Greater than 16GB RAM on VM with SR-IOV enabled

    I have a proxmox node with a Mellanox ConnectX-3 card set up for SR-IOV and VF interfaces. It works fine for most cases, I can pass a VF each to several VMs, but once I configure a VM for greater than 16GB, the VM times out when trying to start via the GUI or qm command. Curiously, it does...

  5. can't restore lxc backup to lvm

    Took a backup of a container running on a dir storage type and tried to restore to an lvm type (all via GUI):

        Use of uninitialized value $disksize in division (/) at /usr/share/perl5/PVE/API2/LXC.pm line 265.
        TASK ERROR: unable to detect disk size - please specify rootfs (size)
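
    A hedged sketch of working around this by supplying the rootfs size the error asks for at restore time (VMID, archive path, storage name, and the 8 GB size are all placeholders):

        # restore onto LVM, allocating the rootfs explicitly
        pct restore 100 /var/lib/vz/dump/vzdump-lxc-100.tar.gz --rootfs local-lvm:8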

  6. tag must be greater than 1 (ve 4)

    Updated from the last beta to the stable ve 4 and got the following on trying to start any of my containers:

        root@eduardo:~# pct start 100
        vm 100 - unable to parse value of 'net0' - format error at /usr/share/perl5/PVE/JSONSchema.pm line 529
        tag: value must have a minimum value of 2 format...
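
    Since a tag of 1 is rejected by the schema, one hedged fix is rewriting net0 with an allowed VLAN tag (VMID and values are placeholders):

        # set the container NIC with a tag of 2 or higher
        pct set 100 --net0 name=eth0,bridge=vmbr0,tag=2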

  7. ve 4b2 backup issue

    Just upgraded from ve4b1 to ve4b2. Had scheduled backups already set up, but ran a manual backup test. It's got a mountpoint defined, as you'll see, and gave me this:

        INFO: starting new backup job: vzdump 100 --node eduardo --remove 0 --storage datastore --mode snapshot --compress gzip
        INFO...

  8. pty issue with ve 4 containers (converted from 3)

    That does seem to have done the trick. Not sure how that was absent. I only added the mount and pts entries, the rest were generated by the pct restore process.

  9. pty issue with ve 4 containers (converted from 3)

    lxc.arch = i386
    lxc.cgroup.cpu.cfs_period_us = 100000
    lxc.cgroup.cpu.cfs_quota_us = 100000
    lxc.cgroup.cpu.shares = 1024
    lxc.cgroup.memory.limit_in_bytes = 4294967296
    lxc.cgroup.memory.memsw.limit_in_bytes = 5368709120
    lxc.rootfs = loop:/storage/datastore/images/101/vm-101-rootfs.raw
    lxc.utsname...

  10. pty issue with ve 4 containers (converted from 3)

    Setting it to anything above '0' allows me to log in via ssh. Even 1 supports multiple ssh sessions. Omitting the setting produces the same behavior as setting it to 0. If you want me to test any specific config, please let me know, happy to give it a whirl.

  11. Proxmox 4.0 Beta (Unable to mount multiple nfs host paths in LXC container)

    Been having the same issue, I *think* I saw it was fixed in more recent git versions, but I can't find that again for the life of me.

  12. pty issue with ve 4 containers (converted from 3)

    Yep, did several tests back and forth. Without the value in the config, I get the ssh issue. I still have yet to get a console via the GUI. It's weird; I did a migration the other day without any issues (just tar'd up the vz containers and restored them on the new install). But this migration...

  13. pty issue with ve 4 containers (converted from 3)

    Setting lxc.pts=1 in my config fixed it. Edit: not entirely; the console from the GUI still doesn't get anything.

  14. pty issue with ve 4 containers (converted from 3)

    Upgraded a machine to ve4. Prior to the upgrade, I took backups of all openvz containers and am trying to restore in ve 4 w/ pct restore. Most went fine, but a few give me this on attempting to ssh to the container:

        root@usenet's password:
        PTY allocation request failed on channel 0
        Linux...

  15. [pve-kernel#701] nvidiafb boot issue w/ ve4

    I've got an nvidia card on my host. I recently reinstalled with ve4 and am getting the following on boot (the screen goes white-on-green from white-on-black):

        [   58.599324] nvidiafb: unable to setup MTRR

    It just locks up here. Removing the card and booting works fine.
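
    A common workaround for framebuffer-driver boot hangs, offered here as a hedged sketch rather than the confirmed fix for this report, is blacklisting the module so the console stays on the generic driver:

        # stop nvidiafb from loading at boot (standard Debian paths)
        echo "blacklist nvidiafb" > /etc/modprobe.d/blacklist-nvidiafb.conf
        update-initramfs -u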

  16. tun devices in ve 4 (lxc)

    What is the 'proxmox way' of adding a tun device to an lxc container on boot? I tried putting "lxc.cgroup.devices.allow = c 10:200 rwm" in the config for the container, but the GUI was pretty upset about that and said it was an invalid key. I currently have a mknod stuffed in the openvpn init...
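
    The approach that later became common is adding raw lxc keys directly to the container's config file instead of through the GUI; a hedged sketch (container ID 100 is a placeholder, syntax per later PVE releases) in /etc/pve/lxc/100.conf:

        # permit the tun character device and bind it into the container
        lxc.cgroup.devices.allow: c 10:200 rwm
        lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file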

  17. enable numa post install

    If I said no to numa in the installer, is there a way to turn it on post-install?
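
    Assuming this refers to the NUMA checkbox in the VM creation wizard rather than the host installer, a hedged example of enabling it afterwards (VMID 100 is a placeholder):

        # turn NUMA on for an existing VM
        qm set 100 --numa 1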

  18. Migrate all VMs as a non-root GUI user

    More->Migrate All VMs appears to be a root-only operation (while individual VM migrations can be given to individual users). Is this a bug or a non-implemented feature? If unimplemented, is there a timeline for implementation?
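
    Until a bulk action exists, one hedged root-CLI approximation of Migrate All is looping over the node's guests (the target node name is a placeholder):

        # migrate every VM on this node; NR>1 skips the qm list header row
        for vmid in $(qm list | awk 'NR>1 {print $1}'); do
            qm migrate "$vmid" pve-node2 --online
        done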

  19. creating a lxc container from scratch

    I actually got just a bit further. Using lxc-start -F, I was able to wedge an ifcfg-eth0 onto the container and enable sshd, so I can confirm that starting via the GUI does start the container and all networking works, but I'm still not getting any console.

  20. creating a lxc container from scratch

    Scientific Linux release 7.1 (Nitrogen). I got a little further and messed with /usr/share/perl5/PVE/LXCSetup/Redhat.pm to allow for 7.1. I've got it starting and letting me log into the container with 'lxc-start -F --name 100', but I can't get anything going via the GUI (I can start it, the...
