I would start off by making sure that your system specs meet the minimum requirements:
https://www.proxmox.com/en/proxmox-ve/requirements
Secondly, make sure you install the latest version 5.1 using the ISO located at...
The only thing that comes to mind is whether the BIOS update disabled one of the cores; however, you would then expect the number of threads to be cut in half (i.e., 4). I'd suggest that you look over your BIOS settings more closely to ensure they are all correct. Does the BIOS show both cores...
Yes, you can change the IP address of your Proxmox host. You will need to make sure that the /etc/hosts file is updated to be correct, and you may also need to modify your /etc/pve/corosync.conf to reflect this change.
The /etc/pve directory is a cluster filesystem (pmxcfs) that is kept in sync via the corosync process...
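As a rough sketch of what that change looks like (the hostname and both addresses here are made-up placeholders), the /etc/hosts entry would be updated along these lines, and on a cluster the node's ring0_addr in corosync.conf follows the same pattern:

# /etc/hosts -- replace the old address for the node's name
# 192.168.1.10   pve1.example.local pve1    <-- old entry
10.0.10.10       pve1.example.local pve1    # new address

# /etc/pve/corosync.conf -- if the node entries use IPs, update that node's
# ring0_addr to the new address and bump config_version before saving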
I am trying to add some new disks to a brand-new server that is part of the cluster. When I try to add an OSD, I get the following errors. This is running the very latest 5.1-51 with the very latest Ceph 12.2.4.
root@virt04:~# pveceph createosd /dev/sdc
file '/etc/ceph/ceph.conf' already exists...
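I can't tell from the truncated error alone, but a common first step (destructive, so only on a disk you intend to reuse; /dev/sdc is the device from the post) is to check how Ceph sees the disk and wipe any leftover partitions before retrying:

ceph-disk list                  # show how ceph-disk classifies each device
ceph-disk zap /dev/sdc          # wipe old partition tables/signatures (destroys data!)
pveceph createosd /dev/sdc      # then retry the OSD creation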
I am running the very latest updates of Proxmox 5.1 and the server does have KSM enabled, I had a VM kernel dump on me, and I noticed in the logs I am getting "page allocation failure" messages. Overall the RAM usage seems to be OK and the server is only running about 35 VMs each with 4GB of...
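For what it's worth, page allocation failures are often about memory fragmentation rather than total free RAM; a quick check on any Linux box (nothing Proxmox-specific here) looks like:

cat /proc/buddyinfo             # free pages per order; few high-order pages = fragmented memory
sysctl vm.min_free_kbytes       # the reserve the kernel keeps for atomic allocations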
I would like to submit 2 requests for Proxmox that I think would be nice small additions.
- Now that Proxmox shows the IPs of VMs that have the qemu-guest-agent installed, it would be nice if the VM view could list the IP addresses and sort the VMs by IP address.
- Add function to...
I am running a dedicated 3-node Ceph cluster that is a member of my normal Proxmox cluster; I just don't put any VMs on those nodes. However, even on a 10Gb network my performance seems to be suffering big time. The Ceph cluster has 15 disks, a combination of SSDs and spinning disks. However, I still seem to not...
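To separate the network from the disks, a baseline benchmark along these lines may help (the pool name and node IP are just placeholders):

rados bench -p testpool 60 write --no-cleanup   # raw write throughput into the pool
rados bench -p testpool 60 seq                  # sequential reads of what was written
iperf -s                                        # on one node, then from another:
iperf -c 10.10.10.2                             # confirm the 10Gb links actually deliver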
I have the very latest Proxmox installed, and when I went to look at my Ceph health I realized I have some serious inconsistencies which I am having trouble resolving. I haven't had any strange issues like this with Ceph before, so I'm wondering if someone can help me. I found an article where...
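For anyone searching later, the usual Luminous-era commands for digging into inconsistent PGs look like this (the PG ID 2.5f is a placeholder, and repair trusts the primary copy, so use it with care):

ceph health detail                  # lists the inconsistent PGs
rados list-inconsistent-obj 2.5f    # show which objects/shards disagree
ceph pg repair 2.5f                 # ask Ceph to repair that PG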
I have a template sitting on Ceph storage, and it appears I made a Linked Clone for one of my VMs, but I'm having a hard time finding which VM is using this image as its base. Is there an easy way to find any Linked Clones, and then convert them to Full Clones?
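One approach, since linked clones reference the template's base image by name in their config (the VMID 9000, disk 101/virtio0, and storage ceph-vm below are all placeholders): grep the configs for the base disk, then move the disk, which copies it out fully even when the target is the same storage:

grep -l 'base-9000-disk' /etc/pve/qemu-server/*.conf   # configs still pointing at the template
qm move_disk 101 virtio0 ceph-vm --delete              # full copy, detaching it from the base image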
Sorry for the delay. I added the 3 Ceph nodes into my cluster and it works great; I just make sure that I don't put any VMs on those nodes, as I want to keep them dedicated to Ceph tasks. I was going to use Croit, but I didn't want to spend the money, and Croit relies on DHCP to work.
I want to first start off by mentioning that I am a Red Hat Certified Architect, one of a rather short list of people in the world who have achieved this, and I mention it because, being an RHCA, I drink the "Red Hat" Kool-Aid, if you will. Over the past year I have gone down a very similar path where I...
I am now getting ready to set up 3 x dedicated Ceph nodes that will be used with Proxmox 5.0. I'm trying to figure out the best way to install and manage them long term. I am familiar with ceph-deploy, and I have Ceph installations where I am running hyper-converged. One of my thoughts...
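For anyone comparing the options, the Proxmox-native path uses pveceph rather than ceph-deploy; a minimal per-node sketch (the cluster network below is a placeholder) would be:

pveceph install --version luminous     # pull the Ceph packages from the Proxmox repo
pveceph init --network 10.10.10.0/24   # write the initial ceph.conf (first node only)
pveceph createmon                      # create a monitor on this node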
I am using a Dell Compellent SAN that has multiple controllers, and I would like Proxmox to log into both sides of the SAN. The interface only seems to allow me to log into one portal. Do I need to modify the storage.cfg file in /etc/pve? Or do I just need to add the...
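For reference, an iSCSI entry in /etc/pve/storage.cfg looks roughly like the following (the storage ID, portal IP, and IQN are placeholders); the GUI only accepts a single portal, and multipathing across both Compellent controllers is normally handled below Proxmox with multipath-tools:

iscsi: compellent
        portal 10.0.0.20
        target iqn.2002-03.com.compellent:placeholder
        content none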
I found that the issue is that 'showmount -e {tintri_ip_address}' doesn't get a response from our NFS appliance. It doesn't appear to be a firewall issue, as we opened up all traffic in both directions. I modified NFSPlugin.pm and changed the check routine to not run that command, as the...
I installed Proxmox 5.0-32 this morning onto a new cluster that I built. We are trying to mount a Tintri NFS appliance that only supports NFS version 3. If I manually mount it from the server to a test directory using NFSv3, it mounts just fine; however, Proxmox doesn't seem to want to...
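For anyone hitting the same thing, pinning the NFS version in /etc/pve/storage.cfg is worth trying before patching the plugin; a sketch (the server IP, export path, and storage ID are placeholders):

nfs: tintri
        server 10.0.0.30
        export /tintri/datastore1
        path /mnt/pve/tintri
        options vers=3
        content images,iso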
I found a bug in the latest 5.0-32 of PVE. I had created VMs locally on an SSD drive. I then migrated the disks to the Ceph datastore, and the configuration sees the change; however, when I try to do a LIVE migration, it still thinks there is a reference to the old SSD drive.
The thing I noticed is...
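To see what the migration code is tripping over, dumping the VM config and grepping for leftover references to the old storage is a quick check (the VMID 101 and the storage name local-ssd are placeholders):

qm config 101                                       # print the VM's current configuration
grep -n 'local-ssd' /etc/pve/qemu-server/101.conf   # lines still pointing at the old SSD storage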
I noticed that Proxmox mainly has LXC support, which doesn't allow for online/hot container migration, something I know LXD supports. Has Proxmox considered supporting LXD within the product?