Is the amd_pstate driver now working? Or when will it be integrated? In kernel 5.17?
I tried to test it with current proxmox and kernel 5.15 and enabling it via
modprobe amd_pstate
but it didn't work, and I couldn't remove the acpi-cpufreq module via modprobe -r as it is built in...
I just want...
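For anyone trying the same: since acpi-cpufreq is compiled into the Proxmox kernel, modprobe -r can't unload it. A possible workaround (just a sketch, assuming GRUB and a kernel that actually ships amd_pstate, i.e. 5.17+ or a backport) is to blacklist the driver's initcall on the kernel command line in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet initcall_blacklist=acpi_cpufreq_init"

then run update-grub and reboot, after which amd_pstate should be free to bind instead.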
I have the same issue with the new kernel... 5.13.19-2-pve, where I cannot boot anymore :(
I also use grub and have an AMD system.
How can I remove the new kernel without removing the pve packages as David mentioned?
OK I got it:
check the apt logs:
Start-Date: 2021-12-04 21:23:24
Commandline...
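For anyone else in the same spot: once the apt log shows the exact package name, the versioned kernel image can be removed on its own, without dragging the pve meta-packages along. A sketch, assuming the offending kernel was installed as pve-kernel-5.13.19-2-pve:

apt remove pve-kernel-5.13.19-2-pve
update-grub

(update-grub regenerates the boot menu so the removed kernel no longer shows up.)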
Thanks for your reply. I had contact with my VPS provider netcup and they migrated my VPS to another host machine, but again an Epyc one, I think. They said that they cannot do more and don't support Proxmox. The reboots got less frequent, but I still have them occasionally...
:( now I get sudden reboots again... can anyone of the admin staff help here, please?
- Where does the sudden -- Reboot -- in the logs come from? I don't see it myself in the syslog, only in the Proxmox UI.
- Is this reboot coming from Proxmox?
- Where can I find further information on what triggered...
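For reference, the standard places to look for the previous shutdown (a sketch; plain systemd/util-linux tools, nothing Proxmox-specific):

journalctl --list-boots
journalctl -b -1 -e
last -x reboot shutdown

journalctl -b -1 -e jumps to the end of the previous boot's log, which is where a clean shutdown or a panic would leave a trace; a hard reset usually leaves nothing at all, which is itself a hint.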
well, now my system has been running for 4 days without a reboot. Strange though. I didn't change anything, but I did a complete shutdown of the VPS and a restart.
Interesting, thanks! Well, my server is a VPS and lspci says:
proxmox:~$ lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE...
Hi,
the last two days I had unexplained reboots of my Proxmox server. I didn't find anything in the logs...
the reboot started suddenly around 05:36.
It is a netcup VPS, but in the logs there and in the cloud cockpit there is no reboot... And the server is listed there as up for > 2 days, from when I started...
it is strange though:
the host node on IP1 has ssh running on port 22 --> works
If I reboot the container, the ssh service is disabled in the container, but I can still connect to it on port 22, just with the container's IP2...
is this because of nesting?
I want each container to have its own ssh...
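One way to check who actually owns that port (a sketch using ss from iproute2, run both inside the container and on the host, and compare which process is bound):

ss -tlnp | grep ':22'

If nothing inside the container is listening but the connection still succeeds, whatever answers on IP2:22 lives outside the container's own sshd.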
nope, still the same: with port 22 the service is not running...
what I get in journalctl after restarting the container is strange though:
...
Sep 13 18:48:12 temp systemd[1]: Starting System Logging Service...
Sep 13 18:48:12 temp systemd[1]: systemd-logind.service: Attaching egress BPF program to cgroup...
Yes, that is what I did. It doesn't start after reboot. I have changed the port from 22 to another. But it only works when manually starting ssh after a container reboot via systemctl.
Hi guys,
today I installed the latest Proxmox 7.0 image and then created a container from the Debian 11 bullseye template image. When starting the container, the ssh server is enabled but not started. I can manually start the service, but upon reboot it is not started. I don't get it! Container...
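For debugging, the standard systemctl checks inside the container would be (a sketch):

systemctl is-enabled ssh
systemctl status ssh
systemctl enable --now ssh

is-enabled shows whether the boot-time symlink exists at all, and status shows whether the unit failed, was skipped by a start condition, or simply never ran.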
I know that one ;)
PS: The hosting provider is netcup... I don't know if they only allow network traffic through the defined physical MAC address
Thanks for updating the doc with exact steps!!:
https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0#Check_Linux_Network_Bridge_MAC
omg I think I got it...
added hwaddress under the vmbr0 bridge, set to my physical MAC address...
strangely it works with plain ifupdown (v1); ifupdown2 was not installed
is that all, or do I have to do some other additional config?
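For reference, the whole change is one extra line in /etc/network/interfaces (a sketch; the IP addresses, bridge port name and MAC are placeholders, the real physical MAC is shown by ip link):

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0
        hwaddress aa:bb:cc:dd:ee:ff

This pins the bridge's MAC to the physical NIC's MAC, which matters with hosters like netcup that only pass traffic from the known source MAC, exactly as the wiki page linked above describes.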
After upgrading from 6.4 (with the opt-in 5.11 kernel) to 7 and rebooting, I cannot reach/ping/access the web GUI or ssh. I also cannot ping from inside the machine (via the VNC console).
ifupdown2 wasn't installed, and the network interface names were not renamed.
my vmbr0 has the external IP set.
I also tried disabling the firewall...
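In case someone else is stuck at this point: from the VNC console you can at least see whether the bridge came up and has its port attached (a sketch; plain iproute2/ifupdown commands):

ip -br link
ip -br addr
ifup vmbr0

If vmbr0 is missing its address or its port, /etc/network/interfaces is the first thing to compare against the actual NIC names.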