I don't think we're quite at consensus. Actually, a VM can have a very different overcommit policy from the hypervisor. It is a similar problem to thin disk provisioning, but in that model the resource is promised by the admin and potentially consumed by resources outside the virtual host; the...
I suspect we're talking at cross purposes here.
I have assigned the container a limited amount of memory. But because memory overcommit is using the default heuristics, the OS is allowing applications in the container to malloc() more memory than is available, hoping that the applications...
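To make that concrete, this is roughly how the current overcommit policy and accounting can be inspected (run on the host; mode 2 is the "never overcommit" policy I'm after):

```
# Current overcommit policy: 0 = heuristic (default), 1 = always, 2 = never
cat /proc/sys/vm/overcommit_memory

# How much memory has been promised vs. the hard limit enforced in mode 2
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```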
I want to disable memory overcommit on an LXC container (it runs multiple greedy cron jobs which will grab a lot of memory and use it all). However, when I try to set this on the command line, the LXC host tells me that /proc is a read-only filesystem. There are several other containers running...
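Concretely, this is roughly what fails inside the container, and the host-side equivalent (values are illustrative):

```
# Inside the container - fails because /proc/sys is mounted read-only there
sysctl -w vm.overcommit_memory=2

# On the LXC host the knob is writable, but the kernel is shared,
# so it applies to the host and to every container on it
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80   # only consulted when overcommit_memory=2
```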
My Ubuntu 20.04 LTS containers are losing the domain part of the hostname when the container is (re)started.
The impacted system (currently a 2-node cluster, with a 3rd node to be added) is running Proxmox 6.1-7 and was provisioned a few days ago. It has several Ubuntu 20.04 LTS containers and VMs...
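For reproduction, this is roughly how I'm checking inside a container after a restart (names are illustrative):

```
# Inside the container after a restart
hostname        # short name, e.g. web01
hostname -f     # should be the FQDN, e.g. web01.example.com, but the domain is gone
hostnamectl     # what systemd has recorded as the static hostname

# Files that feed the hostname/FQDN
cat /etc/hostname
grep "$(hostname)" /etc/hosts
```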
I'm setting up a cluster and finding the documentation on shared storage sparse and lacking depth. Where people have managed to set something up, they seem to think having the files (eventually consistent) in more than one location means the job is done.
So I can have some assurance about...
Buying new hardware for this - planning to spend around 7000 GBP (9000 USD) for 3 x pizza boxes and a 10G switch. Each box with 32 GB / 256 GB NVMe and 500 GB spinning rust. Why does the hardware affect the choice?
I am setting up a tiny cluster - I only need more than one machine to provide fault tolerance (will be using 3 for quorum and simpler upgrade and maintenance cycles). Given that I have 3 physical hosts, which is already quite redundant, I'm not planning on multiple PSUs/NICs/RAID. To provide resilience...
I have several virtual disks (connected to containers and VMs) in raw format and I want to convert them to qcow2 (I was still learning when I made them!).
Google took me to https://forum.proxmox.com/threads/migrating-from-raw-lvm-to-qcow2.19960/ where someone suggested using the web front end as...
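For reference, the offline route with qemu-img looks roughly like this (file names and storage paths are illustrative, and the guest should be stopped first):

```
# Convert a raw disk image to qcow2 (run with the VM/container stopped)
qemu-img convert -p -f raw -O qcow2 \
    /var/lib/vz/images/100/vm-100-disk-0.raw \
    /var/lib/vz/images/100/vm-100-disk-0.qcow2

# Sanity-check the result before switching the guest config over to it
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
```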
The /etc/hosts file on node 1 still had an entry linking the host name to the original address. After manually correcting this and running `systemctl restart pve-cluster` (which took a nail-bitingly long time to return), the file /etc/pve/.members was showing the expected addresses and all the...
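For anyone hitting the same thing, the stale entry looked roughly like this (addresses are illustrative):

```
# /etc/hosts on node 1
# old, stale entry from the original subnet - removed:
#   192.168.1.10   pve-node1.example.com pve-node1
# corrected entry pointing at the current cluster network:
10.0.0.10   pve-node1.example.com pve-node1
```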
Just do it.
I'm currently trying to consolidate a messy enterprise network onto a reverse proxy (to minimise the effort for failover and centralize policy management - I have **lots** of public IP addresses). It just works (mostly).
There is one site which is giving me some grief with 421...
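If it's relevant, a minimal sketch of the vhost for the problem site, assuming an nginx front end and an HTTPS upstream (both the software choice and all names/paths here are illustrative):

```
# Hypothetical nginx vhost for the site returning 421s
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/certs/app.example.com.pem;    # illustrative paths
    ssl_certificate_key /etc/ssl/private/app.example.com.key;

    location / {
        proxy_pass https://backend.internal.example.com;
        proxy_set_header Host $host;    # pass the original Host header upstream
        proxy_ssl_server_name on;       # send SNI to the upstream...
        proxy_ssl_name $host;           # ...matching the requested host, not the upstream name
    }
}
```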
After some more experimentation, `pvecm updatecerts` did not resolve the problem.
However, the first node was initially configured on a different subnet and moved into this network before the cluster was built. Despite `pvecm status` reporting the expected address:
Membership information...
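Roughly the checks involved at this point (node name is illustrative):

```
# Where does the node's own name resolve to on node 1?
getent hosts pve-node1

# Which address does corosync think node 1 has?
grep -A4 "name: pve-node1" /etc/pve/corosync.conf
```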
I have a 4-node cluster (planning to go to an odd number soon). From the web console on node 1, I can see the status of all nodes and run a shell on any of them, but from nodes 2, 3 and 4 I cannot connect to node 1: Connection Timed Out (595) / Communication Failure (0).
There does not appear to be any...
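Roughly how I've been testing reachability from the other nodes (node name is illustrative):

```
# From node 2/3/4: can node 1's web/API port be reached at all?
ping -c3 pve-node1
curl -k https://pve-node1:8006/    # the Proxmox web UI / API listens on 8006

# Re-sync the SSH keys and API certificates across the cluster
pvecm updatecerts
```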
I am about to build a proxmox cluster, initially with 3 nodes, which will use shared NFS storage.
The storage will be on a dedicated network, while each node will also have a connection to a LAN shared with other devices. I am expecting that the storage network will have more available...
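The rough plan is to point the storage definition at the NFS server's address on the dedicated network (storage ID, addresses and export path are illustrative):

```
# Register the NFS export as shared storage, addressed via the storage network
pvesm add nfs shared-nfs \
    --server 10.10.10.5 \
    --export /export/proxmox \
    --content images,rootdir,backup

# Confirm every node can see and mount it
pvesm status
```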