I was doing some housekeeping while moving from V6 to V7 (I'll later go from V7 to V8). In a 3-node cluster, I found that one of my nodes includes "bullseye-updates". The other two nodes don't. I imagine I made a mistake at some point. Should these nodes all have bullseye-updates, or should none of...
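For reference, the standard Debian entry in question (in /etc/apt/sources.list or a file under sources.list.d) looks roughly like this; the mirror and components may differ on your systems:

deb http://deb.debian.org/debian bullseye-updates main contrib

As I understand it, bullseye-updates only carries selected package updates staged between point releases, separate from bullseye-security.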
The logs would be completely full of those messages. After a reboot, the logs would look basically fine. After a few tries, I was able to capture the transition.
It looks like I have a drive going bad:
Mar 25 18:03:39 dmo-pve2 kernel: sd 0:0:0:0: [sda] tag#26 FAILED Result: hostbyte=DID_OK...
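In case anyone else lands here with similar kernel errors: assuming smartmontools is installed, the drive's own health data is a quick sanity check, for example:

smartctl -H /dev/sda    (overall health verdict)
smartctl -a /dev/sda    (full attribute dump; reallocated and pending sector counts are the ones to watch)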
Thank you for the suggestion. When I run journalctl after the problem occurs, I basically see this on repeat:
Mar 25 13:10:48 dmo-pve2 rsyslogd[652]: action 'action-1-builtin:omfile' (module 'builtin:omfile') message lost, could not be processed. Check
Mar 25 13:10:48 dmo-pve2 rsyslogd[652]...
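For anyone else trying to capture the same thing: with persistent journald storage enabled, something along these lines pulls the previous boot's warnings and errors back out after a reboot:

journalctl -b -1 -p warning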
I have a 3-node cluster running the older Proxmox 6.4-13 (I'm still running several old containers that don't work with 7 out of the box).
One of the nodes (Node 2) has partially stopped responding twice in the past week. When it stops responding, I can connect to the cluster gui from either Node...
Knowing that an NTP server also needs to sync as an NTP client... and recognizing that containers are actually synced to the host, I got to wondering:
Would it be better to run a CT as the NTP server, since it automatically syncs to the host, which is already running chrony,
or
Would it be...
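Whichever way it ends up being hosted, my understanding is that the serving side itself is only a couple of chrony.conf directives; a minimal sketch with an example subnet:

allow 192.168.1.0/24
local stratum 10    (optional: keep answering LAN clients even if upstream sources are unreachable)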
After a bunch of checking, a reboot seemed to resolve everything on our first node. I recreated the issue on a second node, and your suggestion worked successfully. Thank you; that was a rough time trying to get back up.
I planned to run an NTP server for my network as a container in Proxmox. However, I didn't consider how the CT uses the host clock.
After setting up a Container with Ubuntu and Chrony and checking the status, I receive this response:
@ntp1:~# systemctl status chronyd
* chrony.service - chrony...
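For comparison, the host side (which is doing the actual syncing) can be inspected with the standard chrony tools:

chronyc tracking
chronyc sources -v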
I appreciate your position: "why punish those who can solve this problem". If we're using that rhetoric, I suppose I would respond with: "why punish the majority of users and systems that have not been specifically customized to work with older containers?" I feel like Proxmox does a really good...
I just went through fixing this issue with existing 16.04 containers. I tried downloading the template to see if it had the problem resolved internally, since it showed as available.
I think it could be very useful for the template list to differ between PVE versions. Having these templates available for...
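For context, and not necessarily the only route: the commonly referenced workaround for keeping such old containers running on PVE 7 is to boot the host with the legacy cgroup hierarchy. On a grub-booted host that is roughly:

GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"    (in /etc/default/grub)
update-grub
(then reboot the host)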
I'm curious, why is the Ubuntu 16.04 Container Template still downloadable from within PVE 7.1?
I tested it out just to see if it had built-in adjustments to run cgroupv2 and be compatible with PVE 7, but it does not.
Can the available template list be adjusted to include only the ones that are...
@oguz thank you very much. That successfully killed the process, and I was able to change the autoboot parameter as suggested. I rebooted the hypervisor and removed the offending containers.
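For anyone finding this later, the steps were roughly as follows (reconstructing from my output above; your PID and VMID will differ):

kill 40241              (the stuck /usr/bin/lxc-start -F -n 102 process)
pct set 102 --onboot 0  (the onboot flag, which is what I was calling the autoboot parameter)

followed by the reboot and container removal mentioned above.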
Could you advise which process you are suggesting I kill before modifying the autoboot parameter?
root 40241 0.0 0.0 4548 2168 ? Ss Jan16 0:29 /usr/bin/lxc-start -F -n 102
root 40433 0.0 1.0 363732 123364 ? Ss Jan16 0:27 task...
root@sfml-pve1:~# pct stop 102 --skiplock
trying to acquire lock...
can't lock file '/run/lock/lxc/pve-config-102.lock' - got timeout
It didn't like that; I received the same response.
I made a mistake on one of my individual nodes. I upgraded to PVE 7.1 while I still had an auto-starting old container on Ubuntu 16.04, which does not work with cgroupv2. That container now shows as running, but I cannot stop it.
I rebuilt the container as a VM manually so I don't actually need...
Can you point out a method for monitoring per-core utilization within Proxmox? If that is not possible, am I just looking for some SNMP OIDs to connect to our network monitoring, or something else entirely?
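To illustrate the kind of view I'm after, the shell equivalent would be something like mpstat from the sysstat package:

mpstat -P ALL 2 5    (per-core utilization, five samples at two-second intervals)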
Thank you, this is a very helpful resource. I have a better understanding of my previous misconceptions.
I thought Server Load was related only to CPU usage, with a load of 1 meaning maxed out. I now understand that the CPU usage component of Server Load is effectively per core (i.e...
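A made-up example of the corrected reading:

cat /proc/loadavg    (e.g. 6.20 5.80 5.50 ...)
nproc                (e.g. 16)

A load of 6.2 on a 16-core node works out to roughly 40% of total capacity (setting aside I/O wait and other contributions), rather than being six times over the maximum.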
I am looking at the graphs that show "CPU usage" and "Server load". It was my understanding that Server Load represents how many things are waiting to be processed. Conceptually, I pictured that as long as CPU usage stayed below 100%, Server Load would stay under 1. This is clearly not the case, though...
I have attempted this manually and through the GUI. I think I need an OVSPort definition for the VLAN, but I cannot find documentation explaining the configuration file.
The best implementation method I can find (aside from typical dot1q-tunneling which is not yet natively supported) is to create a...
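For reference, the general shape of an OVS port definition with a VLAN tag in /etc/network/interfaces appears to be roughly the following (interface names and the tag are placeholders, and this is unverified on my side):

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno2

auto eno2
iface eno2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
    ovs_options tag=100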
This is exactly the behavior I'm currently looking to achieve. I am implementing the suggested change in our test system, but I realize I'm hesitant to use the configuration before it is natively implemented, as the transition back may be a challenge.
I haven't seen any updates on actually...