We have a three-node cluster; the storage for the VMs is Ceph. I have migrated a lot of physical servers to PVE with Clonezilla, and I have also converted roughly 15 VMware VMs to PVE, in the past without issues.
Now we have hit a problem (the third problem/server after a while) with an Ubuntu...
I'm facing a new problem.
I would like to upgrade my version of Proxmox (currently 5.4) on a server that is currently a single node.
Following the documentation, I saw that there is a very useful command, "pve5to6", for checking upgrade compatibility.
After execution of this...
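For anyone following along, a minimal sketch of how the checklist output can be triaged, assuming pve5to6 prefixes each check with PASS:/WARN:/FAIL: and using a made-up sample report rather than a live node:

```shell
# Hypothetical sample report; on a real node you would capture it with
#   pve5to6 > /tmp/pve5to6.out 2>&1
cat > /tmp/pve5to6.out <<'EOF'
PASS: corosync is up to date
WARN: storage 'local' is not enabled
FAIL: ceph is too old for the upgrade
EOF

# Keep only the checks that need attention before upgrading.
grep -E '^(WARN|FAIL):' /tmp/pve5to6.out
```

The idea is simply to resolve every WARN/FAIL line before starting the actual upgrade.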
I'm seeing a strange problem with snmpd after my node with ID 1 stops working.
When node 192.168.7.52 goes down, snmpd on the other machines in the cluster loses its binding.
On node 7.53 you can see that snmpd starts as usual, but I see nothing more:
Stopping the service takes more...
The latest error I get is:
/lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
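That error usually means the kernel found the binary but not the dynamic loader it names, e.g. a 64-bit binary running inside a container or chroot that lacks the 64-bit glibc loader. A minimal sketch of the check, with the path taken from the error message itself:

```shell
# The interpreter path reported in the "bad ELF interpreter" error.
wanted=/lib64/ld-linux-x86-64.so.2

# Run this inside the container/chroot that produced the error: if the
# loader is absent there, the matching (64-bit) libc needs to be installed.
if [ -e "$wanted" ]; then
    echo "loader present: $wanted"
else
    echo "loader missing: $wanted"
fi
```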
I wanted to restart the server, so I turned it off, but now I can't start it again. And I deleted the backup this morning (unlucky... I was doing HDD cleaning).
How do I start this container on Proxmox?
I have a new LXC (104) which I created yesterday in a 3-node cluster, and it was running perfectly for about 18 hours. This morning I migrated the running container to another node and it stopped working, but Proxmox still says it's running. I have tried restarting it, restarted the node...
I am trying to add some new disks to a brand new server that is part of the cluster. When I try to add an OSD, I get the following errors. This is running the very latest 5.1-51 with the very latest Ceph 12.2.4.
root@virt04:~# pveceph createosd /dev/sdc
file '/etc/ceph/ceph.conf' already exists...
So I tried to make my first LXC container and got the Ubuntu 16.04 image from Proxmox. When I try to start it, it gives me this:
Job for firstname.lastname@example.org failed because the control process exited with error code.
See "systemctl status email@example.com" and "journalctl -xe" for...
Today I installed an LXC container with Ubuntu 16.04 and did a dist-upgrade (after adding the Ubuntu 18.04 sources.list entries). Now I am not able to start the container anymore because I get this error message:
root@michael-server:~# lxc-start -F -n 100 --logfile=lxc100.log...
root@node02-sxb-pve01:~# apt-get update
Err:1 https://packages.cisofy.com/community/lynis/deb stretch InRelease
Failed to connect to packages.cisofy.com port 443: No route to host
Err:2 https://enterprise.proxmox.com/debian/pve stretch InRelease
Failed to connect to enterprise.proxmox.com...
Hi guys! I've run into trouble now: I have my Proxmox installation on two SSDs (raidz mirror) and one HDD that was in a separate zpool, used for backups. I have one VM on that server, with two disks inside it: a system disk located on the raidz SSDs, and one volume that was used for backups...
Hi all, I'm new to PVE and the forums, so please go easy on me. I've searched for this issue here and on Google, and none of the solutions are working for me, or maybe they're not explained well enough. I'm having issues with creating containers on my test system. I installed PVE on a mirrored ZFS pool and I...
I have a problem starting a VM after moving the machine from a stopped node with the mv command.
This is the error message:
Executing HA start for VM 100
Member pve2 trying to enable pvevm:100...Could not connect to resource group manager
TASK ERROR: command 'clusvcadm -e pvevm:100 -m pve2'...
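For reference, VM configs live in the pmxcfs under per-node directories, so a "move" between nodes is a move of the config file. A sketch of that move, simulated under /tmp so it is safe to run; the node names pve1/pve2 and VMID 100 are taken from the error above, and on a real cluster the paths start at /etc/pve:

```shell
# Simulated pmxcfs layout (real path would be /etc/pve/nodes/...).
PVE=/tmp/pve-demo
mkdir -p "$PVE/nodes/pve1/qemu-server" "$PVE/nodes/pve2/qemu-server"
echo "memory: 2048" > "$PVE/nodes/pve1/qemu-server/100.conf"

# On the real cluster this would be:
#   mv /etc/pve/nodes/pve1/qemu-server/100.conf /etc/pve/nodes/pve2/qemu-server/
mv "$PVE/nodes/pve1/qemu-server/100.conf" "$PVE/nodes/pve2/qemu-server/"

ls "$PVE/nodes/pve2/qemu-server"   # → 100.conf
```

Note that when HA is managing the VM, as in the clusvcadm error above, moving the file alone is not enough; the resource manager must also be healthy and aware of the new placement.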
I've been fighting with this for the last 2 days without any success. The situation so far is as follows:
1. I had 2 nodes joined in a cluster, and I needed to remove one of them and join another machine in its place because of a hardware upgrade. The 2 nodes use one OpenVPN tunnel...