To apply this update myself before the patch lands, I should add this file, yes?
10-pve-ct-inotify-limits.conf
with contents
fs.inotify.max_queued_events = 8388608
fs.inotify.max_user_instances = 65536
fs.inotify.max_user_watches = 4194304
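To see whether raised limits are actually active on a host, the current values can be read back (read-only, so safe to run); this is a sketch assuming the file above has been placed in /etc/sysctl.d/:

```shell
# Print the currently active inotify limits from the running kernel:
sysctl fs.inotify.max_queued_events
sysctl fs.inotify.max_user_instances
sysctl fs.inotify.max_user_watches

# After adding the drop-in file, reload without a reboot (requires root):
#   sysctl --system
```

If the printed values are still the small defaults, the drop-in file was not picked up.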
It looks to me, @fabian, like this patch (which intentionally increased the limits) wasn't applied, or was lost during my upgrade.
https://git.proxmox.com/?p=pve-container.git;a=commitdiff;h=02209345486fa0ddb3f69b29926dbb9210e15b41
root@pve1:/etc/sysctl.d# ls -la
total 28
drwxr-xr-x 2 root root 4096...
I was able to replicate it.
* can you enter these containers with `pct enter CTID`?
* what do you see in the debug logs? `lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log`
* what does the config look like? (`pct config CTID`)
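Putting those three steps together as one sequence (CTID 101 is a hypothetical container ID; substitute your own, and note these commands only exist on a Proxmox host):

```shell
# Hypothetical container ID; substitute your own.
CTID=101

if command -v pct >/dev/null 2>&1; then
    # 1) Inspect the container configuration:
    pct config "$CTID"
    # 2) Start in the foreground with debug logging written to a file:
    lxc-start -n "$CTID" -F -l DEBUG -o "/tmp/lxc-$CTID.log"
    # 3) If it comes up, attach a shell inside it:
    pct enter "$CTID"
else
    echo "pct not found; run these commands on the Proxmox host itself"
fi
```

The debug log in /tmp is usually the most useful artifact to attach to a forum post.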
I have currently gotten all my containers back into running states, but I will turn them off and back on to replicate this issue. One minute please, Oguz :D Thanks!
Hello all,
I've recently upgraded my Proxmox host from 5.4 to 6.0. I installed originally from Proxmox ISO.
Upgrade went fine, passed pve5to6 without warning.
On first boot, when I started a container, the green |> icon was not showing up. I've got 35 containers or so. All of my...
Hello @dietmar
I have also encountered this exact issue. When the initial xterm.js shell was made available, it worked for all VMs, just like noVNC, but after the latest series of updates I can no longer use xterm.js for my KVM VMs. It still works for the Proxmox host itself.
Well, consider what happens to the second node as it's added to the cluster: it inherits the storage config from the first node, plus whatever was installed on it. This all becomes visible to the "cluster", but you can't use pve2's local-zfs on pve1.
Having been through this exact dilemma myself, here's what I ran into and my conclusion.
- Requirements: I wanted to run a single server and have no downtime of my VMs during hypervisor (Proxmox) upgrades. For this I wanted live migration; no need for HA in my use case.
- My setup...
@hape Proxmox 4 uses corosync for cluster management (HA), which requires 3 nodes to make quorum. In Proxmox 3 you could make a 2-node cluster. In Proxmox 5 you'll again be able to make a 2-node cluster (with a third witness device, maybe a Raspberry Pi), but it's still in development and...
Ensure the QXL driver and SPICE agent are installed in your VMs.
Refer to this documentation for further questions (search for "resolution" for hints):
https://www.spice-space.org/docs/manual/
Morning @wolfgang
I believe the OP has some confusion about which is the best to use currently.
On the enterprise repo, we still have access to DRBD files.
If we want to make a 2 node cluster (with third witness proxmox server) using DRBD as the storage, should we use the packages in the...