Hello,
I seem to have an unusual problem that appeared shortly after upgrading to PVE 7.
I have an application that creates LXC containers on a PVE cluster over the API. Recently, it started getting a 500 status code in response to POST /api2/json/nodes/node1/lxc. The response body does not have...
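For reference, a minimal reproduction of the failing call with curl might look like this — the host, node, token, vmid, template, and storage values are all placeholders, not what our application actually sends:

```shell
# Hypothetical reproduction of the failing API call (token auth assumed):
curl -k -X POST \
  -H "Authorization: PVEAPIToken=root@pam!mytoken=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
  --data-urlencode "vmid=101" \
  --data-urlencode "hostname=ct-test" \
  --data-urlencode "ostemplate=local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz" \
  --data-urlencode "storage=local" \
  --data-urlencode "net0=name=eth0,bridge=vmbr0,ip=dhcp" \
  "https://pve.example.com:8006/api2/json/nodes/node1/lxc"
```

On a 500, the JSON "data" field is typically null, so the interesting part is the raw response body and the node's task log.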
In most cases where IPs are routed through the PVE host, the bridge-nf-call-* settings do not need to be enabled for the PVE firewall to work.
However, we have recently switched to using a VLAN-aware bridge on the host and now configure the VLAN ID directly in Proxmox for each container/VM interface...
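For context, these are the kernel knobs in question; a sketch of how to inspect and toggle them at runtime (whether they should be on depends on the firewall setup, and they only exist once the br_netfilter module is loaded):

```shell
# Check the current values (requires the br_netfilter module):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# Enable at runtime if needed, so bridged traffic is passed to iptables:
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
```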
Ever since Proxmox 6 came out, we have been moving towards Proxmox with ZFS across all physical servers, because with the new corosync we no longer need multicast traffic between those servers. We have successfully virtualized servers that were previously on bare metal and even started...
We just experienced a nasty crash whenever the kernel touched our ZFS pool. This occurred after we replaced one faulty drive and resilvered, but it in fact had nothing to do with that.
The crash occurs when ZFS tries to replay the ZIL after a previous power loss. The issues linked below document...
The problem is that when starting an LXC container for the first time, when creating an LXC container, or after the PVE host is rebooted (and therefore the OVS configuration is reset, since PVE does not use a persistent OVS DB), the virtual interface plugged into the OVS port (vmbr0 here, with VLAN tag 4001) does not work.
Here...
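A workaround sketch we could test, assuming the tag is simply missing from the port after OVS comes back up — the interface name veth100i0 is an example, not the actual name:

```shell
# List the ports currently attached to the bridge:
ovs-vsctl list-ports vmbr0
# Inspect the container's port record and, if the tag is missing, re-apply it:
ovs-vsctl list Port veth100i0 | grep tag
ovs-vsctl set Port veth100i0 tag=4001
```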
I am trying to set features (e.g. keyctl=1) in an LXC container's config, either while creating the container or after creation.
I can see this working from my web browser when I'm logged in as root@pam. However, when I call this as a new user created in the pve realm with all permissions assigned...
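For comparison, the same change from the CLI as root (vmid 101 is an example). Note that on some PVE releases, changing container feature flags is restricted to root@pam regardless of assigned permissions, which may be exactly what the pve-realm user is hitting:

```shell
# Set the feature flag on an existing container:
pct set 101 --features keyctl=1
# Verify the resulting config entry:
grep '^features' /etc/pve/lxc/101.conf
```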
We have a server that typically runs for 6 months between reboots (to upgrade the kernel). In other words, it's very stable and has been running like that for 4 years. Recently, we had an unexpected hangup/crash-reboot. After that, a new kernel was loaded, but the situation recurred. Then a...
Howdy,
I have given Proxmox on ZFS a test run recently (having used Proxmox for many years without ZFS).
I was hoping that Proxmox on ZFS would allow LXC containers to have their disk allocation both extended and shrunk. However, the Proxmox GUI does not seem to allow that.
Running the...
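A sketch of the asymmetry, with example vmid and dataset names: growing is supported from the CLI and GUI, while shrinking apparently has to be done behind Proxmox's back on the backing ZFS dataset, at one's own risk.

```shell
# Growing the rootfs is supported:
pct resize 101 rootfs +4G
# Shrinking is not; lowering the refquota on the backing dataset is the
# usual unsupported route (data loss risk if the filesystem holds more
# data than the new quota allows):
zfs set refquota=8G rpool/data/subvol-101-disk-0
# ...and then adjust the size= value in /etc/pve/lxc/101.conf to match.
```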
Today we had one of the Proxmox hosts stop with the following message:
The host was still running kernel 4.2.6-1-pve (package version: 4.2.6-26) because we had not yet rebooted into the newer kernel that was already installed (4.2.6-28).
I'm just posting this in hopes of verifying whether that is a known...
My use case involves having LXC containers connect to 169.254.169.254 (inside the guest).
This IP (169.254.169.254) is added to the "lo" interface on the host (ip a add 169.254.169.254 dev lo).
On Proxmox 3, this worked well: I had a server running on the host, binding to 169.254.169.254:80, and guests were...
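For reference, the host-side setup plus the guest-side route that a bridged LXC container might need for such a host-local address — a sketch, assuming the container reaches the host over a bridge rather than sharing its network stack as on Proxmox 3:

```shell
# On the host (as in the original setup, with an explicit /32 mask):
ip addr add 169.254.169.254/32 dev lo
# Inside the guest: link-local addresses are not routed by default, so a
# device route towards the container's bridge interface may be needed:
ip route add 169.254.169.254/32 dev eth0
```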
Hello,
I have come across another problem, one of very few we have had with Proxmox so far ;)
We want to set up a cluster and we are doing it over a VPN. It's not exactly as in ned's tutorial that you link on the wiki, because the VPN is exclusively for communication between nodes and everything...
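The idea, sketched as a corosync.conf nodelist fragment with the ring addresses pointing at the VPN tunnel IPs rather than the public interfaces — all names and addresses here are examples:

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.8.0.1   # VPN tunnel address, not the public IP
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.8.0.2
  }
}
```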
Hello,
I have just installed Proxmox 2 on a new dedicated server, on top of a clean Debian Squeeze.
The proper kernel is running (Linux hn2.dondaniello.com 2.6.32-6-pve #1 SMP Mon Dec 19 10:15:23 CET 2011 x86_64 GNU/Linux) and all packages are installed.
The problem is that VMs don't work, while...