Search results

  1. ELevating LXC running Centos 7.9 to Alma 8

    I have concluded this just cannot be done on LXC containers. You should just plan to build new containers with whatever Linux you want to migrate to (AlmaLinux and Rocky are solid options if you want to stay near RHEL), and then migrate/move your services by hand. You can make CentOS 7...
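The "build new and migrate by hand" route above can be sketched with Proxmox's own CLI. Everything specific here is a placeholder for illustration: the VMID 201, the storage names, and the exact template filename (check `pveam available` for the current one on your version).

```shell
# Sketch: stand up a fresh AlmaLinux 8 container to migrate services into.
# VMID, storage names and the template filename are assumptions for your setup.
pveam update
pveam available --section system | grep -i almalinux   # list current template names
pveam download local almalinux-8-default_20210928_amd64.tar.xz
pct create 201 local:vztmpl/almalinux-8-default_20210928_amd64.tar.xz \
    --hostname alma8-new \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --storage local-lvm
pct start 201
```

From there the services get moved over manually, as the post suggests, rather than attempting an in-place ELevate inside the container.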
  2. ELevating LXC running Centos 7.9 to Alma 8

    Resolved this? Also running into this.
  3. SMART error mails from node(s), can't find issue outside of syslog.

    For anybody that stumbles onto this topic: they still do this, and according to iDRAC lights-out management my arrays are still healthy. I've elected to just ignore it. Haven't found the culprit.
  4. [SOLVED] Proxmox Won't Boot with Latest Kernel 5.13.19-2-pve

    I can confirm I can boot with 5.15.5-1pve (from the metapackage, as per the recommendation). It also solves the shutdown/reboot issue where the system hangs after the shutdown procedure. @DavidKahl This is the way.
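Switching to the newer kernel series via its metapackage, as described above, would look roughly like this; the package name `pve-kernel-5.15` matches the series mentioned, but verify it against your repository before installing:

```shell
# Sketch: pull in the 5.15 kernel series through its metapackage,
# so future point releases keep arriving via normal updates.
apt update
apt install pve-kernel-5.15
reboot
```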
  5. Some VMs suddenly freeze

    Just want to chime in on the system not booting after updates; the boot issue could possibly be related to an issue being discussed in this topic. I've noticed OP also has AMD hardware, and a good couple of people (myself included) seem to experience boot issues with the latest production kernel...
  6. [SOLVED] Proxmox Won't Boot with Latest Kernel 5.13.19-2-pve

    Just adding info, I've been able to extract the following from kern.log on my box:
    Dec 6 20:24:45 arcturus kernel: [ 20.359574] amdgpu 0000:00:01.0: amdgpu: amdgpu_device_ip_init failed
    Dec 6 20:24:45 arcturus kernel: [ 20.359580] amdgpu 0000:00:01.0: amdgpu: Fatal error during GPU init...
  7. [SOLVED] Proxmox Won't Boot with Latest Kernel 5.13.19-2-pve

    My tip is entirely separate from the paths outlined earlier in this topic. If you run into this issue and you have not done anything yet, my suggested GRUB config change alone is enough to get back to a bootable scenario. Deleting the offending kernel and/or excluding it from updates is not...
  8. [SOLVED] Proxmox Won't Boot with Latest Kernel 5.13.19-2-pve

    If you've already removed the problematic kernel and you now auto-boot correctly, there's no real value in doing this, I think. But if this is the latest kernel you have installed, then 1>0 would be correct, assuming 1 (being the second menu option in the first menu) actually opens the advanced...
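For context, `1>0` is GRUB's notation for "second top-level entry, first entry of its submenu". A minimal sketch of the pin, assuming the second top-level entry on your box really is the "Advanced options" submenu (verify in your own grub.cfg first):

```shell
# Sketch: boot the first entry of the advanced submenu by default.
# Check where the submenu actually sits before pinning anything:
grep -E '^(menuentry|submenu)' /boot/grub/grub.cfg
# Then in /etc/default/grub set:
#   GRUB_DEFAULT="1>0"
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT="1>0"/' /etc/default/grub
update-grub
```

Note the entry positions shift as kernels are installed and removed, which is why this is only worthwhile in the situation the post describes.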
  9. [SOLVED] Proxmox Won't Boot with Latest Kernel 5.13.19-2-pve

    Running into this issue as well, specifically with the -2 kernel; -1 boots just fine. Also put a Gen10 HP ProLiant MicroServer with AMD Opteron dual core on the stack of affected machines in the same vein. This is my home box with the community repo. Is this also a thing on enterprise? Can't...
  10. PVE-Firewall enable in cluster.

    I somehow missed this, thanks! Might be worth referencing this in the manual I linked, because that's the first thing Google feeds you if you look for this.
  11. PVE-Firewall enable in cluster.

    I'm looking at working on my cluster security somewhat, and to that end I want to utilize the PVE firewall. Looking through the instructions here, I read that if I want to administer it remotely I need to add exceptions for it in order not to lose access, as it claims only 22 and 8006 from its...
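A minimal sketch of such an exception, written as a datacenter-level rule set. The management subnet 192.168.1.0/24 is a placeholder; the point is to add the accept rules before flipping `enable: 1`, so the active GUI/SSH session doesn't get cut off:

```
# /etc/pve/firewall/cluster.fw -- sketch, adapt the source subnet
[OPTIONS]
enable: 1

[RULES]
# allow management access first, then enable the firewall
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006   # web GUI
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22     # SSH
```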
  12. SMART error mails from node(s), can't find issue outside of syslog.

    Dove into SMART codes a little more: https://en.wikipedia.org/wiki/S.M.A.R.T. where 0x04 refers to Start/Stop Count. Using smartctl -a -d megaraid,<disk#> /dev/sda I can get some individual disk data. On both servers with disk 0 they log: (other drives report Health Status OK) However...
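The per-disk query above can be looped over the whole backplane. The disk ID range 0-7 is an assumption for an 8-bay R620-style chassis; adjust it to however many drives sit behind your controller:

```shell
# Sketch: dump SMART health for each physical disk behind the MegaRAID controller.
# All disks are addressed through the same block device (/dev/sda) with
# -d megaraid,<id> selecting the physical slot.
for id in 0 1 2 3 4 5 6 7; do
    echo "== megaraid disk $id =="
    smartctl -a -d megaraid,"$id" /dev/sda | grep -Ei 'health|start.?stop|serial'
done
```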
  13. SMART error mails from node(s), can't find issue outside of syslog.

    We have a 9-node cluster. Now I have two nodes that have started sending me SMART mails. One of them started doing this this morning, after I installed the latest updates and rebooted it yesterday. The other one started logging this about a month ago. Both of these nodes are PowerEdge R620 machines...
  14. [SOLVED] [PX6] Adding Node to cluster failed

    We've chosen to re-install this node, as I'm running out of time and patience to further troubleshoot this. Gave it a new name and new IP. Removed the old node from the cluster. Added this new-new node, also fully updated, to the same existing "not quite yet updated" node in our cluster, however now...
  15. [SOLVED] [PX6] Adding Node to cluster failed

    I've found out latency may cause a problem. Even though I haven't separated corosync traffic from the rest, the cluster has its own switch for interconnectivity and pings between them are reliably below 0.260 ms. Furthermore, pveversion -v output of working node vs new node: proxmox-ve: 6.2-1...
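A quick way to spot-check that latency figure from any node; the hostnames are placeholders for the other cluster members:

```shell
# Sketch: round-trip times to the other nodes over the cluster switch.
# corosync is sensitive to latency, so the avg should stay well under a few ms.
for node in pve2 pve3 pve4 pve5; do
    echo "== $node =="
    ping -c 10 -q "$node" | tail -n 1
done
```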
  16. [SOLVED] [PX6] Adding Node to cluster failed

    I'd like to add this is a standard cluster. I haven't configured any HA, and the software is used as it installs from the ISO without any real customisations.
  17. [SOLVED] [PX6] Adding Node to cluster failed

    I tried adding a node to our existing cluster of currently four machines. We tried to do this through the GUI. The GUI on the new node stopped responding after it was restarting pve-cluster...something (didn't grab a screenshot). The GUI didn't come back. The server is still reachable over SSH. The node has been...
  18. [SOLVED] Reboot server and get on lxc Server refused to allocate pty

    Commenting to point out that fstab indeed contains a devpts rule if you migrate a CentOS 6 OpenVZ container to Proxmox 5.3. Having this rule was no problem until recently, but commenting this line out seems to aid in a solution.
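Commenting the rule out from the host side could look like this. The VMID 101 is a placeholder, and the exact devpts line in a migrated OpenVZ fstab may differ, so inspect it before editing:

```shell
# Sketch: comment out the stale devpts mount inside the container's /etc/fstab.
pct exec 101 -- grep devpts /etc/fstab              # inspect the offending line first
pct exec 101 -- sed -i '/devpts/s/^/#/' /etc/fstab  # prefix matching lines with '#'
pct stop 101 && pct start 101                       # restart so the change takes effect
```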
  19. Centos 6 SSH "Server refused to allocate pty"

    Sometimes you gotta say stuff aloud to think of other things. Tried directly searching this forum. Found this in another recent topic here: https://forum.proxmox.com/threads/reboot-server-and-get-on-lxc-server-refused-to-allocate-pty.50407/#post-234530 Commenting out this devpts rule in...
  20. Centos 6 SSH "Server refused to allocate pty"

    Some recent-ish Proxmox 5 updates seem to have caused an issue on my end with CentOS 6 containers. I still have a couple of CentOS 6 containers that have been migrated from Proxmox 3.5 to 5 (so OpenVZ to LXC). These used to work fine, but in a recent update round SSH "became broken", getting an...
