LXC Debian: no login

PsyGanja

New Member
Jun 20, 2022
Hello forum,
I created a Debian LXC container. When I start it and switch to the console, no login prompt appears. Access via a Linux terminal works without problems, it just takes forever.
If I create an Ubuntu LXC container, I don't have this problem.
But if I create a Debian container using a script from here, I get a login prompt within seconds.
What information can I provide so we can get to the bottom of the problem?
The problem was solved here, I just wouldn't know where to make that setting.
I hope you can help me.
 
Hello,
the configuration of the different containers would probably help. It should be under /etc/pve/lxc/$lxc-id.conf, where $lxc-id is to be replaced with the ID of each container.
 
Code:
cat /etc/pve/lxc/101.conf
arch: amd64
cores: 2
features: nesting=1
hostname: debian
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.188.1,hwaddr=4A:B2:1C:5F:17:D9,ip=192.168.188.86/24,ip6=dhcp,type=veth
ostype: debian
rootfs: local-lvm:vm-101-disk-1,size=8G
swap: 512
root@proxmox:~# cat /etc/pve/lxc/201.conf
arch: amd64
cores: 2
features: nesting=1
hostname: debian11helper
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=72:1E:73:D6:1F:50,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-201-disk-0,size=8G
swap: 512

Container 101 does not work. Apart from the DHCP part, everything is identical. I have of course already tried several different settings.
 
Bildschirmfoto vom 2022-06-27 18-24-46.png

This is what it looks like on the desktop. After about 3 minutes I can then reach the container via the terminal.
Bildschirmfoto vom 2022-06-27 18-27-27.png
 
I've had exactly the same problem with a Debian 11 LXC for some time now.
I have marked the hang in both journals.
It seems to be related to the network, more precisely to systemd-networkd-wait-online.service?!

Bash:
proxmox-ve: 7.2-1 (running kernel: 5.15.35-2-pve)
pve-manager: 7.2-5 (running version: 7.2-5/12f1e639)
pve-kernel-5.15: 7.2-5
pve-kernel-helper: 7.2-5
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.35-2-pve: 5.15.35-5
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 10.1-3~bpo11+1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-5
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.3-1
proxmox-backup-file-restore: 2.2.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-10
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1
Bash:
arch: amd64
cores: 8
features: mount=cifs,nesting=1
hostname: Jellyfin
memory: 32768
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=EE:E7:04:D7:D3:D6,ip=192.168.1.15/24,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-102-disk-0,mountoptions=noatime,size=32G
swap: 2048
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 506:* rwm
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
Bash:
auto lo
iface lo inet loopback

iface enp8s0 inet manual

iface enp9s0 inet manual

iface enp6s0f0 inet manual
        post-up /sbin/ethtool -C enp6s0f0 rx-usecs 0 tx-usecs 0

iface enp6s0f1 inet manual
        post-up /sbin/ethtool -C enp6s0f1 rx-usecs 0 tx-usecs 0

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.5/24
        gateway 192.168.1.1
        bridge-ports enp6s0f0
        bridge-stp off
        bridge-fd 0
#10Gb - LAN

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp8s0
        bridge-stp off
        bridge-fd 0
#1Gb - WAN

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp9s0
        bridge-stp off
        bridge-fd 0
#2.5Gb - WAN2

auto vmbr3
iface vmbr3 inet manual
        bridge-ports enp6s0f1
        bridge-stp off
        bridge-fd 0
#10Gb
Bash:
Jun 28 08:08:55 pve systemd[1]: Started PVE LXC Container: 102.
Jun 28 08:08:56 pve audit[3751885]: AVC apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=3751885 comm="apparmor_parser"
Jun 28 08:08:56 pve kernel: audit: type=1400 audit(1656396536.236:32): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-102_</var/lib/lxc>" pid=3751885 comm="apparmor_parser"
Jun 28 08:08:56 pve systemd-udevd[3751889]: Using default interface naming scheme 'v250'.
Jun 28 08:08:56 pve kernel: vmbr0: port 5(veth102i0) entered blocking state
Jun 28 08:08:56 pve kernel: vmbr0: port 5(veth102i0) entered disabled state
Jun 28 08:08:56 pve kernel: device veth102i0 entered promiscuous mode
Jun 28 08:08:56 pve kernel: eth0: renamed from vethTG8CW9
Jun 28 08:08:56 pve pvedaemon[302155]: <root@pam> end task UPID:pve:00393FBE:0334534F:62BA9AF7:vzstart:102:root@pam: OK
Jun 28 08:08:56 pve pvedaemon[302154]: <root@pam> starting task UPID:pve:00394051:033453AF:62BA9AF8:vncproxy:102:root@pam:
Jun 28 08:08:56 pve pvedaemon[3752017]: starting lxc termproxy UPID:pve:00394051:033453AF:62BA9AF8:vncproxy:102:root@pam:
Jun 28 08:08:56 pve kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun 28 08:08:56 pve kernel: vmbr0: port 5(veth102i0) entered blocking state
Jun 28 08:08:56 pve kernel: vmbr0: port 5(veth102i0) entered forwarding state
Jun 28 08:08:56 pve pvedaemon[302155]: <root@pam> successful auth for user 'root@pam'

----------========== HANG ==========----------

Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\A
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\B
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\C
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\D
Jun 28 08:10:56 pve kernel: FS-Cache: Duplicate cookie detected
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie c=00000033 [p=00000002 fl=222 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie d=0000000077b68ee6{CIFS.server} n=00000000f5ecdd48
Jun 28 08:10:56 pve kernel: FS-Cache: O-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie c=00000034 [p=00000002 fl=2 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie d=0000000077b68ee6{CIFS.server} n=0000000024b2f4d1
Jun 28 08:10:56 pve kernel: FS-Cache: N-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\E
Jun 28 08:10:56 pve kernel: FS-Cache: Duplicate cookie detected
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie c=00000033 [p=00000002 fl=222 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie d=0000000077b68ee6{CIFS.server} n=00000000f5ecdd48
Jun 28 08:10:56 pve kernel: FS-Cache: O-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie c=00000035 [p=00000002 fl=2 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie d=0000000077b68ee6{CIFS.server} n=00000000166a8b63
Jun 28 08:10:56 pve kernel: FS-Cache: N-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\F
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\G
Jun 28 08:10:56 pve kernel: FS-Cache: Duplicate cookie detected
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie c=00000033 [p=00000002 fl=222 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie d=0000000077b68ee6{CIFS.server} n=00000000f5ecdd48
Jun 28 08:10:56 pve kernel: FS-Cache: O-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie c=00000036 [p=00000002 fl=2 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie d=0000000077b68ee6{CIFS.server} n=00000000ed7ee1b9
Jun 28 08:10:56 pve kernel: FS-Cache: N-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: Duplicate cookie detected
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie c=00000033 [p=00000002 fl=222 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie d=0000000077b68ee6{CIFS.server} n=00000000f5ecdd48
Jun 28 08:10:56 pve kernel: FS-Cache: O-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie c=00000037 [p=00000002 fl=2 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie d=0000000077b68ee6{CIFS.server} n=0000000071cde492
Jun 28 08:10:56 pve kernel: FS-Cache: N-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: Duplicate cookie detected
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\H
Jun 28 08:10:56 pve kernel: CIFS: Attempting to mount \\192.168.1.10\I
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie c=00000033 [p=00000002 fl=222 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie d=0000000077b68ee6{CIFS.server} n=00000000f5ecdd48
Jun 28 08:10:56 pve kernel: FS-Cache: O-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie d=0000000077b68ee6{CIFS.server} n=000000005c244877
Jun 28 08:10:56 pve kernel: FS-Cache: N-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: Duplicate cookie detected
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie c=00000033 [p=00000002 fl=222 nc=3 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: O-cookie d=0000000077b68ee6{CIFS.server} n=00000000f5ecdd48
Jun 28 08:10:56 pve kernel: FS-Cache: O-key=[8] '020001bdc0a8010a'
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie c=00000039 [p=00000002 fl=2 nc=0 na=1]
Jun 28 08:10:56 pve kernel: FS-Cache: N-cookie d=0000000077b68ee6{CIFS.server} n=0000000030810c7a
Jun 28 08:10:56 pve kernel: FS-Cache: N-key=[8] '020001bdc0a8010a'
 
Bash:
Jun 28 08:08:56 Jellyfin systemd-journald[51]: Journal started
Jun 28 08:08:56 Jellyfin systemd-journald[51]: Runtime Journal (/run/log/journal/5e914b583e5745e681d682abbff23a71) is 8.0M, max 2.5G, 2.5G free.
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Flush Journal to Persistent Storage...
Jun 28 08:08:56 Jellyfin systemd[1]: Finished Create Static Device Nodes in /dev.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Preparation for Local File Systems.
Jun 28 08:08:56 Jellyfin systemd[1]: Mounting /mnt/transcodes...
Jun 28 08:08:56 Jellyfin systemd[1]: Rule-based Manager for Device Events and Files was skipped because of a failed condition check (ConditionPathIsReadWrite=/sys).
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Network Configuration...
Jun 28 08:08:56 Jellyfin systemd-journald[51]: Time spent on flushing to /var/log/journal/5e914b583e5745e681d682abbff23a71 is 6.395ms for 8 entries.
Jun 28 08:08:56 Jellyfin systemd-journald[51]: System Journal (/var/log/journal/5e914b583e5745e681d682abbff23a71) is 112.4M, max 3.1G, 3.0G free.
Jun 28 08:08:56 Jellyfin systemd[1]: Mounted /mnt/transcodes.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Local File Systems.
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Raise network interfaces...
Jun 28 08:08:56 Jellyfin systemd[1]: Set Up Additional Binary Formats was skipped because of a failed condition check (ConditionPathIsReadWrite=/proc/sys).
Jun 28 08:08:56 Jellyfin systemd[1]: Store a System Token in an EFI Variable was skipped because of a failed condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jun 28 08:08:56 Jellyfin systemd[1]: Commit a transient machine-id on disk was skipped because of a failed condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jun 28 08:08:56 Jellyfin systemd[1]: Finished Flush Journal to Persistent Storage.
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Create Volatile Files and Directories...
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: eth0: Link UP
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: eth0: Gained carrier
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: lo: Link UP
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: lo: Gained carrier
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: Enumeration completed
Jun 28 08:08:56 Jellyfin systemd[1]: Started Network Configuration.
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: eth0: Lost carrier
Jun 28 08:08:56 Jellyfin systemd-networkd[68]: eth0: Gained carrier
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Wait for Network to be Configured...
Jun 28 08:08:56 Jellyfin systemd[1]: Finished Create Volatile Files and Directories.
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Network Name Resolution...
Jun 28 08:08:56 Jellyfin systemd[1]: Network Time Synchronization was skipped because of a failed condition check (ConditionVirtualization=!container).
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target System Time Set.
Jun 28 08:08:56 Jellyfin systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jun 28 08:08:56 Jellyfin systemd[1]: Finished Raise network interfaces.
Jun 28 08:08:56 Jellyfin systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target System Initialization.
Jun 28 08:08:56 Jellyfin systemd[1]: Started Daily apt download activities.
Jun 28 08:08:56 Jellyfin systemd[1]: Started Daily apt upgrade and clean activities.
Jun 28 08:08:56 Jellyfin systemd[1]: Started Periodic ext4 Online Metadata Check for All Filesystems.
Jun 28 08:08:56 Jellyfin systemd[1]: Discard unused blocks once a week was skipped because of a failed condition check (ConditionVirtualization=!container).
Jun 28 08:08:56 Jellyfin systemd[1]: Started Daily rotation of log files.
Jun 28 08:08:56 Jellyfin systemd[1]: Started Daily man-db regeneration.
Jun 28 08:08:56 Jellyfin systemd[1]: Started Daily Cleanup of Temporary Directories.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Timer Units.
Jun 28 08:08:56 Jellyfin systemd[1]: Listening on D-Bus System Message Bus Socket.
Jun 28 08:08:56 Jellyfin systemd[1]: Listening on OpenBSD Secure Shell server socket.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Socket Units.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Basic System.
Jun 28 08:08:56 Jellyfin systemd[1]: Started D-Bus System Message Bus.
Jun 28 08:08:56 Jellyfin systemd[1]: Remove Stale Online ext4 Metadata Check Snapshots was skipped because of a failed condition check (ConditionCapability=CAP_SYS_RAWIO).
Jun 28 08:08:56 Jellyfin systemd[1]: getty on tty2-tty6 if dbus and logind are not available was skipped because of a failed condition check (ConditionPathExists=!/usr/bin/dbus-daemon).
Jun 28 08:08:56 Jellyfin systemd[1]: Starting System Logging Service...
Jun 28 08:08:56 Jellyfin systemd[1]: Starting User Login Management...
Jun 28 08:08:56 Jellyfin systemd[1]: Started System Logging Service.
Jun 28 08:08:56 Jellyfin rsyslogd[114]: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd.  [v8.2206.0]
Jun 28 08:08:56 Jellyfin rsyslogd[114]: [origin software="rsyslogd" swVersion="8.2206.0" x-pid="114" x-info="https://www.rsyslog.com"] start
Jun 28 08:08:56 Jellyfin systemd-logind[115]: New seat seat0.
Jun 28 08:08:56 Jellyfin systemd-resolved[110]: Positive Trust Anchors:
Jun 28 08:08:56 Jellyfin systemd-resolved[110]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 28 08:08:56 Jellyfin systemd-resolved[110]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jun 28 08:08:56 Jellyfin systemd[1]: Started User Login Management.
Jun 28 08:08:56 Jellyfin systemd-resolved[110]: Using system hostname 'Jellyfin'.
Jun 28 08:08:56 Jellyfin systemd[1]: Started Network Name Resolution.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Network.
Jun 28 08:08:56 Jellyfin systemd[1]: Reached target Host and Network Name Lookups.
Jun 28 08:08:57 Jellyfin systemd[1]: Finished Wait for network to be configured by ifupdown.
Jun 28 08:08:58 Jellyfin systemd-networkd[68]: eth0: Gained IPv6LL

----------========== HANG ==========----------

Jun 28 08:10:56 Jellyfin systemd-networkd-wait-online[105]: Timeout occurred while waiting for network connectivity.
Jun 28 08:10:56 Jellyfin systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Jun 28 08:10:56 Jellyfin systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
Jun 28 08:10:56 Jellyfin systemd[1]: Failed to start Wait for Network to be Configured.
Jun 28 08:10:56 Jellyfin systemd[1]: Reached target Network is Online.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/A...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/B...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/C...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/D...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/E...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/F...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/G...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/H...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounting /mnt/shares/I...
Jun 28 08:10:56 Jellyfin systemd[1]: Started Jellyfin Media Server.
Jun 28 08:10:56 Jellyfin systemd[1]: Starting Postfix Mail Transport Agent (instance -)...
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/A.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/B.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/C.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/D.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/E.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/F.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/G.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/H.
Jun 28 08:10:56 Jellyfin systemd[1]: Mounted /mnt/shares/I.
Jun 28 08:10:56 Jellyfin systemd[1]: Reached target Remote File Systems.
Jun 28 08:10:56 Jellyfin systemd[1]: Starting Deferred execution scheduler...
Jun 28 08:10:56 Jellyfin systemd[1]: Started Regular background program processing daemon.
Jun 28 08:10:57 Jellyfin systemd[1]: Starting Permit User Sessions...
Jun 28 08:10:57 Jellyfin systemd[1]: Started Deferred execution scheduler.
Jun 28 08:10:57 Jellyfin cron[242]: (CRON) INFO (pidfile fd = 3)
Jun 28 08:10:57 Jellyfin cron[242]: (CRON) INFO (Running @reboot jobs)
Jun 28 08:10:57 Jellyfin systemd[1]: Finished Permit User Sessions.
Jun 28 08:10:57 Jellyfin systemd[1]: Started Console Getty.
Jun 28 08:10:57 Jellyfin systemd[1]: Started Container Getty on /dev/tty1.
Jun 28 08:10:57 Jellyfin systemd[1]: Started Container Getty on /dev/tty2.
Jun 28 08:10:57 Jellyfin systemd[1]: Reached target Login Prompts.
Jun 28 08:10:57 Jellyfin postfix/postfix-script[275]: warning: symlink leaves directory: /etc/postfix/./makedefs.out
Jun 28 08:10:57 Jellyfin postfix/postfix-script[308]: starting the Postfix mail system
Jun 28 08:10:57 Jellyfin postfix/master[310]: daemon started -- version 3.5.6, configuration /etc/postfix
Jun 28 08:10:57 Jellyfin systemd[1]: Started Postfix Mail Transport Agent (instance -).
Jun 28 08:10:57 Jellyfin systemd[1]: Starting Postfix Mail Transport Agent...
Jun 28 08:10:57 Jellyfin systemd[1]: Finished Postfix Mail Transport Agent.
Jun 28 08:10:57 Jellyfin systemd[1]: Reached target Multi-User System.
Jun 28 08:10:57 Jellyfin systemd[1]: Reached target Graphical Interface.
Jun 28 08:10:57 Jellyfin systemd[1]: Starting Record Runlevel Change in UTMP...
Jun 28 08:10:57 Jellyfin systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jun 28 08:10:57 Jellyfin systemd[1]: Finished Record Runlevel Change in UTMP.
Jun 28 08:10:57 Jellyfin systemd[1]: Startup finished in 2min 366ms.

----------========== SNIP ==========----------
Bash:
x systemd-networkd-wait-online.service - Wait for Network to be Configured
     Loaded: loaded (/lib/systemd/system/systemd-networkd-wait-online.service; enabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Tue 2022-06-28 08:10:56 CEST; 58min ago
       Docs: man:systemd-networkd-wait-online.service(8)
    Process: 105 ExecStart=/lib/systemd/systemd-networkd-wait-online (code=exited, status=1/FAILURE)
   Main PID: 105 (code=exited, status=1/FAILURE)
        CPU: 2ms

Jun 28 08:08:56 Jellyfin systemd[1]: Starting Wait for Network to be Configured...
Jun 28 08:10:56 Jellyfin systemd-networkd-wait-online[105]: Timeout occurred while waiting for network connectivity.
Jun 28 08:10:56 Jellyfin systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Jun 28 08:10:56 Jellyfin systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
Jun 28 08:10:56 Jellyfin systemd[1]: Failed to start Wait for Network to be Configured.
Bash:
[Unit]
Description=Wait for Network to be Configured
Documentation=man:systemd-networkd-wait-online.service(8)
DefaultDependencies=no
Conflicts=shutdown.target
Requires=systemd-networkd.service
After=systemd-networkd.service
Before=network-online.target shutdown.target

[Service]
Type=oneshot
ExecStart=/lib/systemd/systemd-networkd-wait-online
RemainAfterExit=yes

[Install]
WantedBy=network-online.target
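
The two-minute gap in the journal matches the default 120 s timeout of systemd-networkd-wait-online. As a workaround sketch (not an official fix), the timeout can be shortened inside the container with a drop-in override, e.g. created via systemctl edit systemd-networkd-wait-online.service:

Code:
[Service]
# Clear the original ExecStart, then re-run it with a short timeout.
# --timeout is a documented option of systemd-networkd-wait-online.
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online --timeout=10

Note this only hides the symptom; the underlying cause remains that networkd never considers the link "online".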
 
Thanks. But you don't have a solution? I just wonder why it works when I have the LXC created by a script.
Then I'll stick with the working Ubuntu LXC container for now.
 
Hello everyone,

I had this problem too.
The solution (at least for me): no modification of CPU and memory at creation time.
My guess is that with any settings other than the defaults (1 core and 512 MB), the container is not created correctly.
Now that I leave everything as is during creation and only change the settings afterwards, I haven't had a single error!
 
Hello, I will test the IPv6 setting tonight, as well as creating a container with default settings.

For now I've done everything with Ubuntu LXC; it was impossible to work like that.
Strange that this problem isn't more widely known.
 
Switching to SLAAC didn't help in my case either. I have now created a Debian 11 LXC container without changing anything, i.e. left everything at the defaults. There are no problems with this container: it starts in seconds and I also get a login prompt.
 
Hi, I had the problem too.
What worked for me was specifying the DNS servers myself when creating the LXC instead of leaving the fields empty.
 
Hello folks, it's me again.
No, that wasn't it. But I have now been able to reproduce it a few times.
For me it hangs when I set IPv6 to DHCP. If I set it back to static, with IPv6/CIDR set to "none", it works again. It may be because my network has no DHCP server that hands out IPv6 addresses.
Good luck!
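
In the container config this corresponds to the ip6 key of net0. A sketch based on the 101.conf shown above (values taken from that config; the fix is simply dropping the ip6 key):

Code:
# hangs: ip6=dhcp, but no DHCPv6 server answers on the network
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.188.1,hwaddr=4A:B2:1C:5F:17:D9,ip=192.168.188.86/24,ip6=dhcp,type=veth
# works: ip6 key removed (GUI: IPv6 set to "Static" with no address)
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.188.1,hwaddr=4A:B2:1C:5F:17:D9,ip=192.168.188.86/24,type=veth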
 
Thanks, that was probably the cause in my case too.
 
