Once proxmox is booted and active, I can start the container manually in the GUI, or in the foreground via CLI. I'm not sure how to "start the container manually in the foreground with debugging on boot," but here's the full output file for LXC104 in the foreground when I run lxc-start -n 102 -F...
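For what it's worth, my reading of "foreground with debugging" is just the foreground start plus an explicit log level and log file, along the lines of the sketch below (the container ID and log path are assumptions, adjust to match):

```shell
# Foreground start with debug-level logging written to a file
# (104 assumed as the CT ID; any writable path works for -o):
lxc-start -n 104 -F -l DEBUG -o /tmp/lxc-104.log
```

The `-l DEBUG -o <file>` pair usually surfaces the exact mount or hook step that fails, which the plain foreground output can omit.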
So, interesting development... originally, I had all of my containers set to wait for LXC100 (Plex) to start before starting themselves. LXC100 was priority 1 at boot, and all the other containers were priority 2. For troubleshooting purposes, I removed priorities entirely, and left all containers at...
Yeah, I saw that line too and was disappointed that it didn't mention which file, but I wasn't sure if that would truly cause the start failure or not. Furthermore, what would be different about starting it from the GUI versus starting at boot? (since starting it from the GUI works).
Output...
With that possibility in mind, I tried adding a 30 second delay for the containers to start and it didn't help.
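For reference, here's roughly how I expressed that delay in the container startup options (a sketch; CT IDs 100 and 104 are from my setup, adjust as needed):

```shell
# Plex (CT 100) starts first; up=30 tells PVE to wait 30 seconds
# after it starts before moving on to the next startup order:
pct set 100 --startup order=1,up=30

# Everything else at a later order (CT 104 shown as one example):
pct set 104 --startup order=2
```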
Here's the syslog portion during startup, the LXC relevant items start at line 1200: https://pastebin.com/YrCEDXTF
I'm on a fresh install of Proxmox, with all packages updated on the host:
proxmox-ve: 5.4-2 (running kernel: 4.15.18-18-pve)
pve-manager: 5.4-11 (running version: 5.4-11/6df3d8d0)
pve-kernel-4.15: 5.4-6
When my node boots up, all of my LXCs fail to boot (classic systemd `exit-code 1`), but my...
Here are my package versions for my current install of proxmox. I added a CIFS share through the GUI used for backups and container data, but when I try to restart my node, the containers that have the share mounted prevent the node from shutting down, giving the classic systemctl error exit...
Thanks for the reply.
I added the two export lines in /etc/profile. They show up when I run locale, but systemd still fails to start, citing the same missing locales. Is it possible I have to add them in some other manner?
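One likely explanation, as a sketch: /etc/profile only applies to login shells, so systemd units never see those exports. The usual Debian/Ubuntu way to make the locale system-wide is /etc/default/locale, for example (run inside the container, as root; the chosen locale values are just my assumption):

```shell
# Option 1: pin the recommended C.UTF-8 locale system-wide:
update-locale LANG=C.UTF-8 LC_ALL=C.UTF-8

# Option 2: generate and use a full locale instead:
locale-gen en_US.UTF-8
update-locale LANG=en_US.UTF-8
```

After either, a container restart (or at least restarting the failing unit) should pick up the new values.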
Jun 18 10:35:06 traktarr python3[581]: You might be able to resolve...
I'm trying to run a python script in an ubuntu 16.04 LXC, but it fails with the following message:
This system supports the C.UTF-8 locale which is recommended.
You might be able to resolve your issue by exporting the
following environment variables:
export LC_ALL=C.UTF-8
export...
Out of desperation, I disabled the bind mount paths in the container config file and then it started right up. So it was missing the host-side path on a bind mount, it seems.
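Since the failure turned out to be a missing host-side bind-mount path, a small pre-start sanity check is easy to sketch. This assumes the PVE config format `mpX: /host/path,mp=/container/path` and a hypothetical CT ID of 104:

```shell
# Flag any bind-mount host paths in a PVE container config that
# don't exist on the host (would explain a silent start failure):
check_bind_mounts() {
    conf="$1"
    grep -E '^mp[0-9]+:' "$conf" | while IFS= read -r line; do
        p=${line#*: }     # drop the "mpX: " prefix
        p=${p%%,*}        # keep only the host-side path
        [ -e "$p" ] || echo "missing host path: $p"
    done
}

# Example invocation on the host (guarded so it's a no-op elsewhere):
if [ -f /etc/pve/lxc/104.conf ]; then
    check_bind_mounts /etc/pve/lxc/104.conf
fi
```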
I did.
deb http://ftp.debian.org/debian stretch main contrib
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
# security updates
deb http://security.debian.org...
As I said in my original post, I:
updated/upgraded the proxmox host
backed-up each container
Re-installed the proxmox host (and updated/upgraded)
Tried to restore the container
All in the span of 12 hours.
So I don't know how an out of date system is even a possibility here. I'm on v5.1-46 ...
I recently had to replace some drives, so I took the opportunity to do a fresh install of proxmox. I:
updated/upgraded all the packages on the proxmox host
backed up each container to a separate NFS storage share
Installed the new OS drives
Re-installed proxmox
Tried to restore the containers...
Is there a command to accomplish this from a live rescue disc? I booted into gparted, didn't see any ext4 filesystem resizing options, so I tried the `resize2fs /sda3/dev/mapper/pve-root` command provided in your link, but my arguments or paths must not be valid.
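For anyone landing here later: the path in that failed attempt mixed the partition name into the mapper path. Since pve-root is an LVM logical volume, the sequence from a rescue shell is usually something like the sketch below (hedged; exact LV names can differ per install):

```shell
lvscan                           # confirm the pve/root LV is visible
lvchange -ay pve/root            # activate it if it isn't ACTIVE
e2fsck -f /dev/mapper/pve-root   # resize2fs requires a clean filesystem
resize2fs /dev/mapper/pve-root   # with no size argument, grows to fill the LV
```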
For what it's worth, yeah, me too.
Single node setup on pve 5.1 with the 4.13 kernel. All of my LXCs remain responsive and reachable, it's just the node itself for whatever reason.
Hrmm, looks like someone already beat me to it: https://bugzilla.proxmox.com/show_bug.cgi?id=1373
Good to know about the datacenter view though, thanks.
In a container summary view, the Proxmox GUI displays a lot of useful resource reporting, like CPU, RAM, and Swap consumption. It also shows the allocated total space for the root disk of the container, but I don't see a data point for the amount of root disk that is consumed/used. Why is that...
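In the meantime, one way to see used space on a container's root disk is to run df inside it from the host (CT ID 104 assumed here):

```shell
# Query the container's root filesystem usage from the PVE host:
pct exec 104 -- df -h /
```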
As I said in my post, smartctl tests returned no bad blocks. I encountered `zpool status` showing a degraded rpool while setting up the zfs-zed email notifications.