Proxmox Connection error - Timeout

the_ma4rio

New Member
Oct 29, 2023
Hello, I'm really new here.
Yesterday I bought an Intel NUC NUC8BEH.
I installed Proxmox 8.0.2, but already during the installation I ran into the first problem: the monitor was only flickering. I restarted the NUC and installed 8.0.2 again, and everything seemed to run fine. I logged into the web interface, but after about 10 minutes I got a timeout and could no longer reach the IP of the NUC. I thought it must be the 8.0 version, so I installed Proxmox 7.4.3. No luck there either; it also goes offline after about 10 minutes. I couldn't find anything similar in other forums, so I don't know what to do.
I would be really happy about any suggestions, but please be patient, because it's my first NUC and my first Proxmox server. Thanks
 
The two most common issues are (see the example checks below):
a) A known problem with the Realtek driver. For example: https://www.reddit.com/r/Proxmox/comments/150stgh/proxmox_8_rtl8169_nic_dell_micro_formfactors_in/ . You can find more information by searching for "proxmox realtek".
b) A duplicate IP on the network.
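
For example (a minimal sketch; the interface name eno1, the bridge vmbr0 and the address 10.0.0.87 are only placeholders, substitute your own), you can check which NIC and driver are actually in use and probe for a duplicate IP from the PVE shell:

lspci -nnk | grep -iA3 ethernet     # NIC model and the kernel driver bound to it
ethtool -i eno1                     # driver and firmware details for the interface
arping -D -I vmbr0 10.0.0.87        # duplicate address detection for the host's own IP (needs the arping package)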


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Thanks for your answer. I have no Realtek driver on my board; it's an Intel E1000E.

Also, I checked my UniFi network for duplicate IPs and didn't find one there...
 
You'll need to invest more time in data collection, reporting and troubleshooting. The top questions to answer (a sketch of example commands follows this list):
- what is the IP of the PVE host (ip a)
- what is the content of "cat /etc/network/interfaces"
- what is the IP config of your workstation (Windows: ipconfig)
- describe what exactly happens during the outage:
-- can you ping the gateway from the PVE console?
-- is the interface up (ip a)?
-- can you ping the PVE IP from a workstation?
-- can you ping the PVE IP from the gateway, if possible? (ip route)
-- what is the content of your ARP table on the workstation, i.e. on Windows: arp -a | findstr <pve-ip>?
-- are there any firewall/routing devices in the middle?
- connect the PVE host to the workstation with a direct cable; does the problem still happen?
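
A rough sketch of what those checks could look like during an outage (the gateway 10.0.0.1 and the addresses below are only examples, substitute your own):

On the PVE console:
ip a                        # is vmbr0 still UP and does it still have its address?
ip route                    # what is the default gateway?
ping -c 4 10.0.0.1          # can the PVE host still reach the gateway?

On the Windows workstation:
ping 10.0.0.87              # can the workstation reach the PVE host?
arp -a | findstr 10.0.0.87  # is there an ARP entry, and does the MAC match the PVE NIC?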


 
ip a :
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 1c:69:7a:0b:8a:80 brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
3: wlp0s20f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 04:ea:56:86:40:2a brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1c:69:7a:0b:8a:80 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.87/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::1e69:7aff:fe0b:8a80/64 scope link
valid_lft forever preferred_lft forever
cat /etc/network/interface:
I can't show it because my machine says it's empty...
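
(Side note: the path above is most likely missing the trailing "s"; the file Proxmox actually uses is /etc/network/interfaces. On a default PVE install it usually looks roughly like this, with the addresses here only filled in from the ip a output above:)

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.0.87/24
        gateway 10.0.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0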

Yes, I can ping it:
Pinging 10.0.0.87 with 32 bytes of data:
Reply from 10.0.0.87: bytes=32 time<1ms TTL=64
Reply from 10.0.0.87: bytes=32 time<1ms TTL=64
Reply from 10.0.0.87: bytes=32 time<1ms TTL=64
Reply from 10.0.0.87: bytes=32 time<1ms TTL=64

Ping statistics for 10.0.0.87:
Packets: Sent = 4, Received = 4, Lost = 0
(0% loss),
Approximate round trip times in milliseconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
PS C:\Users\mario\Downloads>


ipconfig:
Windows IP Configuration


Ethernet adapter Ethernet:

Connection-specific DNS Suffix . . : localdomain
IPv6 Address . . . . . . . . . . . : fd7d:7c45:7b46:6c66:f76e:a882:3e12:48d6
Temporary IPv6 Address . . . . . . : fd7d:7c45:7b46:6c66:6d9c:7304:1a8c:762f
Link-local IPv6 Address . . . . . : fe80::c4d3:56cd:7f7b:d3f3%18
IPv4 Address . . . . . . . . . . . : 10.0.0.30
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.0.0.1

Wireless LAN adapter WLAN:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . . :

Wireless LAN adapter Local Area Connection* 1:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . . :

Wireless LAN adapter Local Area Connection* 10:

Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . . :





My Proxmox randomly shuts down its web interface; then I have no access to it, and if I plug a monitor in, there is no screen or anything. I have to restart it, and then it works for 10 minutes.
 
But when it has that error, I can't ping it from my workstation. I have to restart my NUC, and then I can ping it again.
 
Sounds like a hardware/BIOS/firmware problem unrelated to PVE. I recommend that you make sure everything is up to date, install vanilla Debian on it, and monitor the behavior. If it continues to shut down, your hardware is not compatible.
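
For example (a rough sketch; journalctl -b -1 only works if the journal is persisted across reboots), checking the firmware level and what was logged right before the box went down could look like this:

dmidecode -s bios-version    # current BIOS version, to compare against the latest NUC8 firmware
journalctl -b -1 -e          # end of the journal from the previous boot, if persistent
dmesg -w                     # leave running on the console and watch for errors when it drops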


 
That doesn't help me... Is there any possibility to run an older version of Proxmox on this machine? And how do I manage to get the old kernel?
 
Oct 30 01:48:42 pve systemd[1]: Started Session 1 of user root.
Oct 30 01:49:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 30 01:49:00 pve systemd[1]: pvesr.service: Succeeded.
Oct 30 01:49:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 30 01:50:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 30 01:50:00 pve systemd[1]: pvesr.service: Succeeded.
Oct 30 01:50:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 30 01:51:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 30 01:51:00 pve systemd[1]: pvesr.service: Succeeded.
Oct 30 01:51:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 30 01:52:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 30 01:52:00 pve systemd[1]: pvesr.service: Succeeded.
Oct 30 01:52:00 pve systemd[1]: Started Proxmox VE replication runner.
Oct 30 01:52:45 pve pvedaemon[978]: <root@pam> starting task UPID:pve:0000090F:00008A89:653EFE5D:qmcreate:142:root@pam:
Oct 30 01:52:45 pve dmeventd[388]: No longer monitoring thin pool pve-data.
Oct 30 01:52:46 pve lvm[388]: Monitoring thin pool pve-data-tpool.
Oct 30 01:52:46 pve pvedaemon[978]: <root@pam> end task UPID:pve:0000090F:00008A89:653EFE5D:qmcreate:142:root@pam: OK
Oct 30 01:52:49 pve systemd[1]: session-1.scope: Succeeded.
Oct 30 01:52:49 pve pvedaemon[977]: <root@pam> end task UPID:pve:00000504:00002B80:653EFD69:vncshell::root@pam: OK
Oct 30 01:52:52 pve pvedaemon[977]: <root@pam> starting task UPID:pve:00000960:00008D6A:653EFE64:vncshell::root@pam:
Oct 30 01:52:52 pve pvedaemon[2400]: starting termproxy UPID:pve:00000960:00008D6A:653EFE64:vncshell::root@pam:
Oct 30 01:52:52 pve pvedaemon[978]: <root@pam> successful auth for user 'root@pam'
Oct 30 01:52:52 pve systemd[1]: Started Session 3 of user root.
Oct 30 01:53:00 pve systemd[1]: Starting Proxmox VE replication runner...
Oct 30 01:53:00 pve systemd[1]: pvesr.service: Succeeded.
Oct 30 01:53:00 pve systemd[1]: Started Proxmox VE replication runner.
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@$
Oct 30 01:56:14 pve dmeventd[399]: dmeventd ready for processing.
Oct 30 01:56:14 pve systemd-modules-load[369]: Inserted module 'ib_iser'
Oct 30 01:56:14 pve systemd-modules-load[369]: Inserted module 'vhost_net'
Oct 30 01:56:14 pve systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 30 01:56:14 pve lvm[399]: Monitoring thin pool pve-data-tpool.
Oct 30 01:56:14 pve systemd[1]: Started Flush Journal to Persistent Storage.
Oct 30 01:56:14 pve lvm[388]: 7 logical volume(s) in volume group "pve" monitored
Oct 30 01:56:14 pve systemd[1]: Started udev Kernel Device Manager.
Oct 30 01:56:14 pve systemd[1]: Started udev Coldplug all Devices.
Oct 30 01:56:14 pve systemd[1]: Starting udev Wait for Complete Device Initialization...
Oct 30 01:56:14 pve systemd[1]: Starting Helper to synchronize boot up for ifupdown...
Oct 30 01:56:14 pve systemd-udevd[445]: Using default interface naming scheme 'v240'.
Oct 30 01:56:14 pve systemd-udevd[445]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Oct 30 01:56:14 pve systemd-modules-load[369]: Inserted module 'zfs'
Oct 30 01:56:14 pve systemd[1]: Started Load Kernel Modules.
Oct 30 01:56:14 pve systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 30 01:56:14 pve systemd-udevd[438]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Oct 30 01:56:14 pve systemd[1]: Found device /dev/pve/swap.
Oct 30 01:56:14 pve systemd[1]: Found device CT1000MX500SSD1 2.
Oct 30 01:56:14 pve systemd[1]: Created slice system-systemd\x2dbacklight.slice.
Oct 30 01:56:14 pve systemd[1]: Starting Load/Save Screen Backlight Brightness of backlight:acpi_video0...
Oct 30 01:56:14 pve systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
Oct 30 01:56:14 pve systemd[1]: Starting LVM event activation on device 8:3...
Oct 30 01:56:14 pve systemd[1]: Activating swap /dev/pve/swap...
Oct 30 01:56:14 pve systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 30 01:56:14 pve systemd[1]: Reached target Local File Systems (Pre).
Oct 30 01:56:14 pve systemd[1]: Starting File System Check on /dev/disk/by-uuid/C760-2CF0...
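
(For what it's worth, a long run of NUL bytes like the one above usually means the machine stopped abruptly and the log was truncated mid-write; the entries that follow at 01:56:14 are the next boot starting up. Something like this can help line the boots up, assuming a persistent journal:)

journalctl --list-boots      # one line per boot with its time range and boot ID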
 
