We have a three-node Proxmox cluster that we are in the process of decommissioning, as we have a new 5.1 cluster to which we have migrated the majority of the machines. However, for a while we need to keep one of the nodes running with one of the containers it has. But we also want to take...
So, we have a strange problem under Proxmox 4.1 (LXC).
We have 16 GB in the config for the container:
memory: 16384
I can see the same amount through the web UI.
But inside the container only 2 GB is used if I run free -m:
root@hkv24:~# free -m
total used free shared...
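For anyone debugging the same thing, it may help to compare the configured value with the memory cgroup limit the container is actually running under on the host. A minimal sketch, assuming the container has VMID 100 (the ID and exact cgroup path are from my own notes, so adjust as needed):

# configured value in the container config
grep memory /etc/pve/lxc/100.conf
# limit actually applied by the memory cgroup (in bytes; 16 GB = 17179869184)
cat /sys/fs/cgroup/memory/lxc/100/memory.limit_in_bytes

If the cgroup limit matches the configured 16 GB, the limit itself is being applied and the question becomes how free inside the container arrives at 2 GB.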
Upgraded 4 of 7 nodes today, only to discover that two running VMs in particular (Palo Alto - VM200 FWs) use much more CPU than they did on PVE 4.1 :(
Pic 1 here shows the VM's usage over the last 24 hours and the jump when it was migrated onto 4.2.22 around 17:00; the last high jump is me introducing more load on the FW...
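In case it is useful to others comparing before and after, the first things I would diff between an upgraded and a non-upgraded node are the VM's CPU settings and the QEMU/KVM versions. A minimal sketch, assuming the firewall really is VMID 200:

# CPU-related settings of the VM
qm config 200 | grep -Ei 'cpu|cores|sockets|numa'
# QEMU/KVM package versions on the node
pveversion -v | grep -Ei 'qemu|kvm'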
Hello everybody!
So, I have an HP 8200 Elite small form factor: i5 CPU, 500 GB HD, 16 GB RAM.
Question one:
I was trying to virtualize OS X 10.10 on it, then a friend of mine said that the hardware won't support a type 1 hypervisor. Is this true? (See the check sketched below.)
Question two:
So, then I went back to reinstall...
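Regarding question one: whether the box can act as a KVM host mostly comes down to the CPU exposing hardware virtualization (Intel VT-x) and it being enabled in the BIOS, which you can verify from any Linux shell. A minimal sketch:

# a non-zero count means the CPU advertises hardware virtualization (vmx = Intel VT-x, svm = AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo
# once Proxmox is installed, the kvm modules should be loaded
lsmod | grep kvm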
Hello,
I plan to migrate my old Proxmox 3.4 install to 4.1 in the coming days.
But unfortunately, we have some NFS shares mounted as v4 under 3.4.
The test server that we have in 4.1 does not seem very happy when we try to mount or force mount NFS v4…
When mounting manually we have a...
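For reference, the manual test I would run on the 4.1 box is an explicit NFSv4 mount, followed by a check of what was actually negotiated. A minimal sketch, where the server name and export path are placeholders:

# force an NFSv4 mount for testing (server and path are hypothetical)
mount -t nfs4 -o vers=4 nfsserver:/export/data /mnt/test
# show the mounted NFS filesystems and their negotiated options
nfsstat -m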
Hey
We are running a PVE cluster (4.1). Because one machine stopped working completely, I removed it from the cluster with pvecm delnode. pvecm status tells me that the node is removed. However, when managing the cluster via the web interface it still shows up in the "Server View" perspective as an...
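In case someone else runs into this: as far as I understand, the web interface builds the server view from the node directories under /etc/pve/nodes, so a leftover directory for the deleted node keeps it visible. A minimal sketch of what I would check (the node name is a placeholder, and removing the directory is only safe once the node is permanently gone):

# confirm the node is really gone from the cluster
pvecm status
pvecm nodes
# look for a stale directory belonging to the removed node
ls /etc/pve/nodes/
rm -r /etc/pve/nodes/deadnode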
Hi.
I am sorry for my bad English :(
I have 2 clusters, on Proxmox VE 3.4 and 4.1.
All equipment is absolutely identical in both clusters.
All nodes have 2× Xeon X5650, 96 GB RAM and an InfiniBand card: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE] (rev b0).
Both...
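To compare the two clusters I would first confirm the InfiniBand link state on both kernels and then measure raw node-to-node throughput. A minimal sketch, assuming IPoIB and using placeholder addresses:

# link state and rate of the Mellanox adapter
ibstat
# raw TCP throughput over the IPoIB interface (10.10.10.2 is a placeholder)
iperf -s                    # on the receiving node
iperf -c 10.10.10.2 -t 30   # on the sending node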
Hello,
For years I lived with the idea that the only underlying storage for PVE should be a hardware-backed array of disks, which can self-heal, is self-cached, and is guaranteed to always be in good shape.
But now I see recent PVE versions ship with ZFS out of the box (but as I played with some of...
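For what it's worth, the self-healing part is something ZFS lets you observe directly, which is a practical way to build trust in it without a hardware controller. A minimal sketch of the checks, using the default Proxmox pool name rpool as a placeholder:

# pool health, including any checksum errors ZFS has detected or repaired
zpool status -v rpool
# read and verify every block against its checksum
zpool scrub rpool
zpool list rpool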
We're testing Proxmox 4.1 in consideration of upgrading our 3.x cluster. All the API operations I try (even the initial authentication) return a 500 status. Is there something we have to enable on the host?
Thanks!
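In case the request format is part of the problem: nothing extra needs to be enabled for the API itself; it listens on port 8006 and authentication is a POST to access/ticket. A minimal sketch using curl (host and password are placeholders, -k because of the self-signed certificate):

curl -k -d "username=root@pam&password=secret" \
     https://pve-host:8006/api2/json/access/ticket

A healthy setup returns JSON containing a ticket and a CSRFPreventionToken rather than a 500.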
I recently had some server downtime for my office Proxmox server over Christmas and took the opportunity to upgrade to 4.1 from 3.4, and everything seemed to go alright. However, now that everyone's back in the office, I'm noticing a severe slowdown across the whole system.
I set up a Zabbix...
Hello all,
I am attempting to familiarize myself with Proxmox/Ceph/LXC so that I can hopefully use it to replace my ESXi environment. I have set up a nested virtual environment that should be sufficient and followed the configuration guides for a 3-node cluster and Ceph server, but ultimately...
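For reference, the rough command sequence I followed for the Ceph part of the nested lab was along these lines; the network and device values are from my own test setup, so treat them as placeholders:

# on each of the three nodes
pveceph install
# on the first node, initialise Ceph with the cluster network
pveceph init --network 10.10.10.0/24
# a monitor per node, then OSDs on the spare virtual disks
pveceph createmon
pveceph createosd /dev/sdb
# overall health
ceph -s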
Hello folks, just upgraded from 3.4 to 4.1 and everything went rather smoothly. Honestly, I was expecting a late night, but well-written directions produce excellent results. Nevertheless, I have some issues specific to some new LXCs migrated from OpenVZ.
Each of the 5 (restored) LXCs...
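For context, the containers were brought over the usual way: vzdump on the old node, then restored as LXC containers on 4.1. A minimal sketch of that step (the VMID, dump file name and target storage are placeholders):

# on the 3.4 node: back up the OpenVZ container
vzdump 101 --compress lzo --dumpdir /var/lib/vz/dump
# on the 4.1 node: restore the dump as an LXC container
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101.tar.lzo --storage local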
Hi:
I have searched the forum, but couldn't find any post related to my problem on Proxmox VE 4.1.
After installing Proxmox VE 4.1, I ran apt-get update and apt-get dist-upgrade. During the upgrade process I received the following message: "Possible missing firmware...
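For anyone who sees the same warning: in my experience it usually just means the non-free firmware packages are not installed, and the generic fix is along these lines (this assumes your /etc/apt/sources.list lines are for Debian Jessie; adjust them rather than copying blindly):

# make sure the Debian repository lines include the non-free component, then:
apt-get update
apt-get install firmware-linux-nonfree
# rebuild the initramfs so the firmware is available at boot
update-initramfs -u -k all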