I am trying to set features, e.g. keyctl=1, in an LXC container's config, either while creating it or after creation.
I can see this working from my web browser when I'm logged in as root@pam. However, when I call this from a new user created in the pve realm with all permissions assigned...
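For reference, this is the kind of change I'm trying to make (101 is an example VMID, not my real one):

root@pve:~# pct set 101 -features keyctl=1

which ends up as this line in /etc/pve/lxc/101.conf:

features: keyctl=1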
There is a new problem on this same server; it may or may not be related.
We were running fine on 4.15.18-8-pve, but after upgrading to 4.15.18-9-pve the network does not work at all: no network-specific errors in the console or logs, but there is absolutely no network...
This is the script: https://github.com/DevelopersPL/pkgbuild/blob/master/ipset-nuclear/ipset-nuclear
You only need ipset installed to run it. It's running in an unprivileged Arch Linux container with 4 GB RAM and 2 GB swap; RAM usage starts at about 1.5 GB (the specs shouldn't matter). Host is at 21/32...
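I haven't pasted the whole script here, but it boils down to the standard atomic reload pattern, roughly like this (the set names below are illustrative, not the exact ones from the script):

ipset create blacklist-tmp hash:net
(a long series of "ipset add blacklist-tmp <cidr>" calls)
ipset swap blacklist-tmp blacklist
ipset destroy blacklist-tmp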
I don't really see much; this is all I see:
# cat /proc/27695/stack
[<0>] 0xffffffffffffffff
Arch Linux.
I also noticed that on the host, memory does in fact approach 100%. I also have swap (half of RAM), so it doesn't crash right away.
The interesting thing is that I can't tell why the...
I've tracked this down to a specific ipset command running in an unprivileged LXC container.
On the host, when it's not crashing, it "looks" like this:
Message from syslogd@pve at Jan 1 19:25:29 ...
kernel:[86763.415164] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [ipset:11633]
Message...
And a new one from last night:
root@pve:~# journalctl -b -1 --no-pager --since '2018-12-29 07:00:00'
-- Logs begin at Thu 2018-12-06 17:24:36 UTC, end at Sat 2018-12-29 15:13:01 UTC. --
Dec 29 07:00:00 pve systemd[1]: Starting Proxmox VE replication runner...
Dec 29 07:00:01 pve CRON[4976]...
Let me try that again; it looks like the journalctl pager was causing the "skipped" lines. But still, the messages sometimes appear to be mangled in the journal itself.
Here's the oldest one (2018-12-10):
root@pve:~# journalctl -b -4 --no-pager --since '2018-12-10 17:35:00'
-- Logs begin at...
Thanks for the suggestion, but that does not seem to be working:
root@pve:~# apt install r8168-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
r8168-dkms
0 upgraded, 1 newly installed, 0 to remove...
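For completeness, this is how I checked whether the module actually built and got loaded afterwards (just the commands; output omitted):

root@pve:~# dkms status
root@pve:~# lsmod | grep r816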
I'd love to upgrade the BIOS, but that may be impossible, and the latest update is from 2015 anyway. It's an OVH dedicated server.
CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz
Motherboard: ASUSTeK COMPUTER INC. Product Name: P8H77-M PRO Version: Rev X.0x
We have a server that typically runs for six months between reboots (rebooting only to upgrade the kernel). In other words, it's very stable and has been running like that for four years. Recently, we had an unexpected hang/crash-reboot. After that, a new kernel was loaded, but the situation recurred. Then a...
Howdy,
I have given Proxmox on ZFS a test run recently (having used Proxmox without ZFS for many years).
I was hoping that Proxmox on ZFS would allow LXC containers' disk allocations to be both extended and shrunk. However, the Proxmox GUI does not seem to allow that.
Running the...
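For what it's worth, this is a sketch of the manual route I was considering on the host, assuming a subvol-backed rootfs (dataset name, VMID, and size are examples, not my real ones):

root@pve:~# zfs set refquota=8G rpool/data/subvol-101-disk-0

followed by updating size= on the rootfs line in /etc/pve/lxc/101.conf to match. This only seems safe when actual usage is below the new quota.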
I will be happy to sell you an "LTS" version so you can continue using the same Proxmox version for 20 more years (as if I could prevent you from doing so...). Just remember that I don't actually plan on doing anything for you (besides sending an invoice). But everything is working smoothly for you...
Today we had one of the Proxmox hosts stop with the following message:
The host was still running kernel 4.2.6-1-pve (package version 4.2.6-26) because we had not rebooted into the newer kernel that was already installed (4.2.6-28).
I'm just posting this in hopes of verifying whether this is a known...
pa657,
I did a quick test for you on my Proxmox machine in OVH.
To my Debian-based LXC container, I added another interface (net1, eth1) with a different MAC and another IPv4 address (I added it while the container was stopped).
After I started the container, it successfully replies to pings to...
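For reference, this is roughly what I ran (VMID, MAC, and addresses are placeholders, not the real ones):

root@pve:~# pct set 101 -net1 name=eth1,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=192.0.2.10/24,gw=192.0.2.1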
My use case involves having LXC containers connect to 169.254.169.254 from inside the guest.
This IP (169.254.169.254) is added to the "lo" interface on the host (ip a add 169.254.169.254 dev lo).
On Proxmox 3, this worked well. I had a server running on the host, bound to 169.254.169.254:80, and guests were...
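For clarity, the host side of this is just the one command above; the guest-side route below is my assumption about how the guests reached it (eth0 is an example interface name):

root@pve:~# ip addr add 169.254.169.254/32 dev lo

and inside a guest, something like:

ip route add 169.254.169.254/32 dev eth0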