Homelab: assorted error messages, and the VM cannot reach the internet

mamio

New Member
Apr 16, 2021
Problem:
LXC container (Debian 9): vztmpl/debian-10-standard_10.7-1_amd64.tar.gz, 220.36 MB (also tried vztmpl/debian-9.0-standard_9.7-1_amd64.tar.gz, 188.00 MB)
cannot run apt update
cannot download curl
I'm new to Proxmox. I googled and reinstalled several times. "But nothing helps!"
Thanks for your help. Regards.

Background:

Home lab:
HPE MicroServer Gen10 Plus (64 GB RAM) dedicated to Proxmox, another server dedicated to Nextcloud, and a NAS. TP-Link switch. EAP Wi-Fi access point.
HPE, NAS, PC: 192.168.3.Y. No VLANs for now.
Everything sits behind an OPNsense firewall (I don't see any "deny" entries in the live view log).
No firewall in Proxmox.
HPE Proxmox host: 1 HDD 1 TB (Proxmox); 2 HDDs 4 TB (ZFS mirror).
Everything is under a domain name, let's say Xdomain (later I want to connect to it from outside).
The Proxmox hypervisor is up to date and time-synced as well.
/etc/apt/sources.list is OK.

Some error messages:
Message 1:
TASK ERROR: command '/usr/bin/termproxy 5900 --path /vms/100 --perm VM.Console -- /usr/bin/dtach -A /var/run/dtach/vzctlconsole100 -r winch -z lxc-console -n 100 -e -1' failed: exit code 1

Message 2:
-- Unit postfix.service has finished reloading its configuration -- The result is done.
Apr 15 21:41:06 ct100 systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Apr 15 21:41:06 ct100 systemd[1]: Failed to start Raise network interfaces.
-- Subject: Unit networking.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit networking.service has failed.


Message 3:

All packages are up to date.
W: Failed to fetch http://ftp.debian.org/debian/dists/stretch/InRelease  Temporary failure resolving 'ftp.debian.org'
W: Failed to fetch http://ftp.debian.org/debian/dists/stretch-updates/InRelease  Temporary failure resolving 'ftp.debian.org'
W: Failed to fetch http://security.debian.org/dists/stretch/updates/InRelease  Temporary failure resolving 'security.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.

Server /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
    address 192.168.3.51/24
    gateway 192.168.3.254

iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
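For context on the NAT setup above: the MASQUERADE rule only rewrites packets whose source address falls inside 10.10.10.0/24 on their way out through eno1. A minimal illustration with Python's stdlib ipaddress module (the sample source addresses are hypothetical, chosen just to show the match):

```python
import ipaddress

# Subnet from the POSTROUTING MASQUERADE rule above.
nat_subnet = ipaddress.ip_network("10.10.10.0/24")

# Only packets sourced from this range get rewritten to eno1's address;
# anything else leaves the host untranslated.
for src in ("10.10.10.2", "10.10.10.254", "192.168.3.60"):
    print(src, ipaddress.ip_address(src) in nat_subnet)
# → 10.10.10.2 True
# → 10.10.10.254 True
# → 192.168.3.60 False
```

So for NAT to help a container, its eth0 address must sit inside that range.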

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b4:7a:f1:3d:8c:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.51/24 brd 192.168.3.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::b67a:f1ff:fe3d:8c84/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b4:7a:f1:3d:8c:85 brd ff:ff:ff:ff:ff:ff
4: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b4:7a:f1:3d:8c:86 brd ff:ff:ff:ff:ff:ff
5: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b4:7a:f1:3d:8c:87 brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:fd:a2:bc:fc:80 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 brd 10.10.10.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::78e3:e0ff:fe6d:93a5/64 scope link
       valid_lft forever preferred_lft forever
7: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether fe:d4:29:84:d5:07 brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 36:dc:00:01:2c:f8 brd ff:ff:ff:ff:ff:ff
9: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 0a:fd:a2:bc:fc:80 brd ff:ff:ff:ff:ff:ff
10: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
    link/ether 36:dc:00:01:2c:f8 brd ff:ff:ff:ff:ff:ff
11: veth100i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:67:7f:63:3d:fc brd ff:ff:ff:ff:ff:ff link-netnsid 1

ip r
default via 192.168.3.254 dev eno1 onlink
10.10.10.0/24 dev vmbr0 proto kernel scope link src 10.10.10.1
192.168.3.0/24 dev eno1 proto kernel scope link src 192.168.3.51

Server /etc/hosts:
127.0.0.1 localhost.localdomain localhost
192.168.3.51 pve.Xdomain pve

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Hostname:
pve

Server resolv.conf:
search Xdomain
nameserver 8.8.8.8

VM /etc/network/interfaces:
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    gateway 10.10.10.0
    dns-nameservers 8.8.8.8
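A quick sanity check on the static values above with Python's stdlib ipaddress module (the addresses are copied verbatim from this config; note that 10.10.10.1 is also vmbr0's own address in the host config earlier):

```python
import ipaddress

subnet = ipaddress.ip_network("10.10.10.0/24")

# Values copied verbatim from the container config above.
ct_addr = ipaddress.ip_address("10.10.10.1")  # same as vmbr0 on the host
gw = ipaddress.ip_address("10.10.10.0")       # configured gateway

# The configured gateway is the subnet's network address, not a host on it.
print(gw == subnet.network_address)  # True
print(gw in set(subnet.hosts()))     # False: hosts() excludes .0 and .255
```

In other words, 10.10.10.0 cannot answer ARP as a gateway, which would be consistent with the container failing to reach any resolver or mirror.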

VM /etc/hosts:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# --- BEGIN PVE ---
10.10.10.1 ct100.Xdomain ct100
# --- END PVE ---

VM hostname
ct100

VM resolv.conf:
# --- BEGIN PVE ---
search Xdomain
nameserver 8.8.8.8
# --- END PVE ---
 

pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.1-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-1
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
 
systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-04-16 08:25:56 CEST; 1h 49min ago
Process: 1438 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCESS)
Main PID: 1443 (pvestatd)
Tasks: 1 (limit: 4915)
Memory: 112.6M
CGroup: /system.slice/pvestatd.service
└─1443 pvestatd

Apr 16 08:25:55 pve systemd[1]: Starting PVE Status Daemon...
Apr 16 08:25:56 pve pvestatd[1443]: starting server
Apr 16 08:25:56 pve systemd[1]: Started PVE Status Daemon.
Apr 16 08:53:16 pve pvestatd[1443]: modified cpu set for lxc/101: 0
 
