I'm running a PVE 5.2 cluster with 4 nodes. The cluster is attached to an HP P2000 G3 iSCSI SAN, and the VMs are hosted on the SAN.
The first controller of the SAN failed. Everything is running on the second controller, but I can't manage PVE anymore.
Although VMs are running, it seems that...
I have four nodes backing up to PBS on a regular basis. One of the nodes has five VMs, only one of which is backing up; the other four fail. All other VMs on all other nodes back up without issue. I really have no idea what could be causing it, but it's important that I fix this...
Hi everyone!
I'm installing Proxmox 6.2 on a brand new Dell T40 with 2x 1TB disks.
Before installing Proxmox from a USB stick, I went into the Dell configuration utility and created a RAID1, then rebooted, plugged in the USB stick, and ran the Proxmox installer. The first problem is that the installer detects both disks and not...
While struggling with PCI passthrough (it does work, eventually) I found something peculiar.
Sometimes the VM would not boot because the PCI ID for a device was already in use. This was because the PCI IDs for the VGA and the audio device were written as the same. So instead of 0000:17:0.0 and...
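As background: a PCI address has the form domain:bus:slot.function, and the VGA and HDMI-audio parts of a GPU are normally two functions of the same slot, differing only in the final digit. A minimal Python sketch (purely illustrative, not part of any Proxmox tooling) of how such an address splits apart:

```python
import re

# lspci-style PCI addresses follow domain:bus:slot.function (hex fields).
BDF_RE = re.compile(r"^([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])$")

def parse_bdf(addr: str):
    """Split a PCI address like '0000:17:00.0' into (domain, bus, slot, function)."""
    m = BDF_RE.match(addr)
    if not m:
        raise ValueError(f"not a valid PCI address: {addr!r}")
    return m.groups()

# The VGA and audio functions of one GPU share domain/bus/slot and
# differ only in the trailing function number:
vga = parse_bdf("0000:17:00.0")
audio = parse_bdf("0000:17:00.1")
assert vga[:3] == audio[:3] and vga[3] != audio[3]
```

If both devices end up written with the same full address including the function digit, they collide, which matches the "already in use" symptom described above.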
I noticed one of my LXC containers was down, and it failed to start with the following error:
/usr/bin/lxc-start -F -n 143
lxc-start: 143: conf.c: run_buffer: 352 Script exited with status 13
lxc-start: 143: start.c: lxc_init: 897 Failed to run lxc.hook.pre-start for container "143"
The PVE host uses LVM.
I set up a new ZFS pool, and the following error occurred while migrating the virtual machine:
mount: /var/lib/lxc/104/.copy-volume-2: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.
We have a three-node cluster; the storage for the VMs is Ceph. I have migrated a lot of physical servers to PVE with Clonezilla, and I have also converted roughly 15 VMware VMs to PVE, in the past without issues.
Now we have a problem (the third problem/server after a while) - an Ubuntu...
I'm facing a new problem.
I would like to upgrade my Proxmox version (currently 5.4) on a server where I currently have only one node.
Following the documentation, I saw that a very useful command, pve5to6, checks upgrade compatibility.
After running it...
I'm seeing a strange problem with snmpd after my node with ID 1 stops working.
When node 192.168.7.52 goes down, snmpd fails to bind on the other machines of the cluster.
On node 192.168.7.53 you can see that snmpd starts as usual, but I see nothing more:
Stopping the service takes more...
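One plausible explanation (an assumption, since the snmpd.conf isn't shown): if snmpd's agentAddress points at a specific IP, it can only bind while that address is configured on the local host. A small Python sketch of the underlying socket behaviour:

```python
import socket

def can_bind(ip: str, port: int = 0) -> bool:
    """Return True if this host can bind a UDP socket to the given IP.

    Binding fails (EADDRNOTAVAIL) when the address is not configured on
    any local interface - the same reason a daemon told to listen on a
    specific IP refuses to start once that IP disappears.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((ip, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

print(can_bind("127.0.0.1"))    # loopback is always bindable
print(can_bind("203.0.113.9"))  # fails unless that IP is locally configured
```

If the bind target is an address that only exists on the failed node (or a shared address that moved with it), that would explain why snmpd on the surviving nodes stops listening.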
I get this error:
/lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
I wanted to restart the server, so I turned it off, but now I can't start it back up. And I deleted the backup this morning (unlucky... I was doing HDD cleanup).
How do I get this container running on Proxmox again?
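For context on that message: "bad ELF interpreter: No such file or directory" means the dynamic loader path embedded in a binary's ELF header points at a file that doesn't exist in the container's root filesystem. A hedged Python sketch (assuming a 64-bit little-endian ELF, which is what /lib64/ld-linux-x86-64.so.2 implies) of how that path is stored:

```python
import struct
from typing import Optional

PT_INTERP = 3  # program-header type holding the dynamic-loader path

def elf_interpreter(path: str) -> Optional[str]:
    """Return the interpreter path embedded in a 64-bit little-endian ELF,
    or None if it has no PT_INTERP segment (e.g. static binaries)."""
    with open(path, "rb") as f:
        if f.read(16)[:4] != b"\x7fELF":
            raise ValueError(f"{path} is not an ELF file")
        f.seek(0x20)
        (e_phoff,) = struct.unpack("<Q", f.read(8))
        f.seek(0x36)
        e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
        for i in range(e_phnum):
            base = e_phoff + i * e_phentsize
            f.seek(base)
            (p_type,) = struct.unpack("<I", f.read(4))
            if p_type == PT_INTERP:
                f.seek(base + 0x08)
                (p_offset,) = struct.unpack("<Q", f.read(8))
                f.seek(base + 0x20)
                (p_filesz,) = struct.unpack("<Q", f.read(8))
                f.seek(p_offset)
                return f.read(p_filesz).rstrip(b"\x00").decode()
    return None

# elf_interpreter("/bin/ls")  # typically '/lib64/ld-linux-x86-64.so.2'
#                             # on x86-64 glibc systems
```

If that returned path is missing inside the container (a broken glibc, a 32-bit rootfs on a 64-bit binary, or a damaged filesystem), every dynamically linked program fails with exactly this error.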
I have a new LXC container (104) which I created yesterday in a 3-node cluster, and it ran perfectly for about 18 hours. This morning I migrated the running container to another node and it stopped working, but Proxmox still says it's running. I have tried restarting it, restarted the node...
I am trying to add some new disks to a brand new server that is part of the cluster. When I try to add an OSD I get the following errors. This is running the very latest 5.1-51 with the very latest Ceph 12.2.4:
root@virt04:~# pveceph createosd /dev/sdc
file '/etc/ceph/ceph.conf' already exists...
So I tried to make my first LXC container, using the Ubuntu 16.04 image from Proxmox. When I try to start it, it gives me this:
Job for firstname.lastname@example.org failed because the control process exited with error code.
See "systemctl status email@example.com" and "journalctl -xe" for...
Today I installed an LXC container with Ubuntu 16.04 and did a dist-upgrade (after adding the Ubuntu 18.04 sources.list entries). Now I am not able to start the container anymore because I get the error message:
root@michael-server:~# lxc-start -F -n 100 --logfile=lxc100.log...
root@node02-sxb-pve01:~# apt-get update
Err:1 https://packages.cisofy.com/community/lynis/deb stretch InRelease
Failed to connect to packages.cisofy.com port 443: No route to host
Err:2 https://enterprise.proxmox.com/debian/pve stretch InRelease
Failed to connect to enterprise.proxmox.com...
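"No route to host" points at networking or a firewall rather than apt itself, and all the failures here hit port 443. As a small illustration (a hypothetical helper, not an apt tool), the failing hosts can be pulled out of such output for a connectivity check:

```python
import re

# apt prints 'Failed to connect to <host> port <port>: <reason>' on network errors.
APT_ERR = re.compile(r"Failed to connect to (\S+) port (\d+)")

def failing_hosts(apt_output: str):
    """Extract (host, port) pairs from apt 'Failed to connect' lines."""
    return [(h, int(p)) for h, p in APT_ERR.findall(apt_output)]

output = """\
Err:1 https://packages.cisofy.com/community/lynis/deb stretch InRelease
  Failed to connect to packages.cisofy.com port 443: No route to host
Err:2 https://enterprise.proxmox.com/debian/pve stretch InRelease
  Failed to connect to enterprise.proxmox.com port 443: No route to host
"""
print(failing_hosts(output))
# [('packages.cisofy.com', 443), ('enterprise.proxmox.com', 443)]
```

Since every host fails the same way, checking the default route and any firewall rules for outbound 443 on the node would be the first step.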
Hi guys! I've got into trouble: I have my Proxmox installation on two SSDs (a raidz mirror) and one HDD in a separate zpool, used for backups. I have one VM on that server, with two disks inside it: the system disk, located on the raidz SSDs, and one volume used for backups...
Hi all, I'm new to PVE and the forums, so please go easy on me. I've searched for this issue here and on Google, and none of the solutions are working for me, or maybe they're not explained well enough. I'm having issues with creating containers on my test system. I installed PVE on a mirrored ZFS pool and I...
I have a problem starting a VM after moving the machine from a stopped node with the mv command.
This is the error message:
Executing HA start for VM 100
Member pve2 trying to enable pvevm:100...Could not connect to resource group manager
TASK ERROR: command 'clusvcadm -e pvevm:100 -m pve2'...