Search results

  1. Container Density

    These are nodes that sit idle 99% of the time. But when one goes active, usually dozens will go at the same time, on the same nodes. A typical VM/CT is 3 GB RAM, 2 cores, and 60 GB of disk space, currently running Ubuntu Server 18. All of the VMs in question are identical in size and purpose...
  2. Container Density

    I agree that AMD is currently the best, but I was asked to provide a 2- and 5-year plan for future expansion paths. This includes consolidating our current virtualization platforms of Nutanix, VMware, and legacy XenServer from Citrix. Would it be possible to contact anyone that is running a...
  3. Container Density

    What metrics do I look at to increase density? We are after the goal of 5-7 watts per container; with 120 CTs running right now we are at 18.33 watts per CT (see the worked numbers after this list). Our largest expense is electricity, so I am open to ideas. Currently the whole system slows down and clients complain if we go over 70...
  4. Bulk delete containers?

    Is there a simple way to delete multiple containers? I have hundreds to remove. (A scripted approach is sketched after this list.)
  5. Can't shell into a node via GUI

    Just a quick thanks for posting the solution! Juan C thumbs up! :)
  6. [SOLVED] Can't Migrate Between Nodes

    Can't believe I missed adding Hulk in the nodes. That worked like a charm. Thank you!
  7. Upgrade error

    Sorry, it looks like I missed removing /etc/apt/sources.list.d/pve-no-subscription.list from this server. It functions now. (The cleanup steps are sketched after this list.)
  8. Upgrade error

    I removed the free channel prior to posting here, but Proxmox is NOT seeing the change. I have 2 licensed servers with exactly the same sources.list file; one server updates fine, the other does not.
  9. Upgrade error

    starting apt-get update
    Get:1 http://security.debian.org buster/updates InRelease [65.4 kB]
    Hit:2 http://ftp.us.debian.org/debian buster InRelease
    Get:3 http://ftp.us.debian.org/debian buster-updates InRelease [49.3 kB]
    Hit:4 https://enterprise.proxmox.com/debian/pve buster InRelease
    Ign:5...
  10. Upgrade error

    I get the following when I try to run the upgrade. I do have a subscription and have removed the "no-subscription" entry from the config, but it is still looking for it.

    Get:97 https://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 zfsutils-linux amd64 0.8.2-pve2 [355 kB]
    Fetched...
  11. Purchased wrong license

    Thank you and Birgit Ressl for the quick response getting the mess I made sorted out!
  12. [SOLVED] Can't Migrate Between Nodes

    Sorry for the delay in responding.

    root@Marvel:~# pveversion -v
    proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
    pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
    pve-kernel-5.0: 6.0-6
    pve-kernel-helper: 6.0-6
    pve-kernel-5.0.18-1-pve: 5.0.18-1
    pve-kernel-5.0.15-1-pve: 5.0.15-1
    ceph-fuse...
  13. Purchased wrong license

    Yeah, I did it: I purchased what looks like 4 CPUs on a single server. I needed two servers, each with 2 CPUs. Will they fix this for me?
  14. Too many open files error.

    OK, this will be in the next down period, in about 3 weeks.
  15. Too many open files error.

    I don't have a subscription yet. I will update it and see if this corrects it.
  16. Too many open files error.

    root@Marvel:/# pveversion -v
    proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
    pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
    pve-kernel-5.0: 6.0-6
    pve-kernel-helper: 6.0-6
    pve-kernel-5.0.18-1-pve: 5.0.18-1
    pve-kernel-5.0.15-1-pve: 5.0.15-1
    ceph-fuse: 12.2.11+dfsg1-2.1
    corosync: 3.0.2-pve2...
  17. Too many open files error.

    Ran into this on a new install of Proxmox 6. Is there a plan to incorporate this fix? The error is "Failed to allocate directory watch: Too many open files" in containers. A patch is called out in bug report 1042: https://bugzilla.proxmox.com/show_bug.cgi?id=1042. The server has 36 CTs and 1 VM running... (A common workaround is sketched after this list.)
  18. [SOLVED] Can't Migrate Between Nodes

    There is something I am not understanding about Proxmox and storage; I am sure I have this set up wrong, just not sure how to redo it. I have 2 nodes, each with separate ZFS storage configured for VMs and CTs. Local storage is two 300 GB SAS drives in RAID 1 on a dedicated controller, which houses... (See the storage sketch after this list.)
  19. ZFS slow write performance

    Not sure of a solution, but I am curious whether that is enough RAM. My servers with ZFS will take 40 GB of RAM with no CTs or VMs running. (See the ARC note after this list.)
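
A quick back-of-the-envelope check of the "Container Density" figures quoted above, assuming the total power budget stays fixed:

    120 CTs x 18.33 W/CT ≈ 2,200 W total draw
    2,200 W / 7 W/CT ≈ 314 CTs  (upper end of the 5-7 W goal)
    2,200 W / 5 W/CT ≈ 440 CTs  (lower end)

In other words, the stated goal amounts to roughly tripling container density on the same electrical footprint.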
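
On "Bulk delete containers?": a minimal shell sketch of one way to script mass removal with pct. The ID range is a hypothetical placeholder; adjust it to the containers you actually intend to destroy.

    # stop and destroy CTs 100 through 199 (hypothetical ID range; double-check before running)
    for id in $(seq 100 199); do
        pct stop "$id" 2>/dev/null   # ignore the error if the CT is already stopped
        pct destroy "$id"
    done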
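
On the "Upgrade error" thread: the resolution quoted above was deleting the leftover no-subscription repository file. A minimal sketch of that cleanup, using the file path given in the thread:

    # find lingering no-subscription entries, remove the leftover file, refresh the indexes
    grep -r "pve-no-subscription" /etc/apt/sources.list /etc/apt/sources.list.d/
    rm /etc/apt/sources.list.d/pve-no-subscription.list
    apt-get update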
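
On "Too many open files error.": the error in bug 1042 is typically tied to inotify limits on the host. A commonly used workaround, with values that are illustrative assumptions rather than something quoted in the thread:

    # raise the per-user inotify limits on the host (persist the values in /etc/sysctl.conf)
    sysctl -w fs.inotify.max_user_instances=1024
    sysctl -w fs.inotify.max_user_watches=1048576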
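
On "[SOLVED] Can't Migrate Between Nodes": the fix quoted above was adding the missing node (Hulk) to the storage definition, since a storage entry is only visible to the nodes listed on it. A sketch of the same change from the CLI; Marvel and Hulk are the node names from the thread, while the storage ID "tank" is a hypothetical placeholder:

    # make the ZFS storage visible on both nodes ("tank" is a placeholder storage ID)
    pvesm set tank --nodes Marvel,Hulk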
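
On "ZFS slow write performance": the ~40 GB of RAM mentioned above is consistent with ZFS's ARC cache, which by default can grow to roughly half of the host's memory. A sketch of capping it, with the 8 GiB limit as an illustrative assumption:

    # cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes), then rebuild the initramfs and reboot
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u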