After further investigation this seems to have been an issue with the VLAN configuration. I am reinstalling with the correct VLAN configuration and marking the thread as solved. If I still have issues after the reinstall, I will re-open.
PVE Version: 8.2.4
Hi,
I have set up 3 Proxmox PVE servers (all with identical hardware) running PVE 8.2.4.
After setting them up (fresh installs, nothing else done other than checking for updates) I created a Linux VLAN on each server in the 192.168.*.* address space - these networks all came up...
I have a Ceph pool in my datacenter and master node that I am unable to delete, even though I have uninstalled and purged Ceph on all my nodes. Could someone please tell me how to get rid of the pool from the WebGUI?
Sorry for the delayed response, I was traveling. That seems to have worked, so thanks :). It would be nice to have this available in the LXC setup, maybe as a field for optional configuration settings for the container. It would be as easy as adding a text box to the final stage of the LXC setup...
So I am running a 3-node cluster, and I have a bunch of containers running on them with HA.
The problem is that all containers must connect to an OpenVPN network in order to be accessible via our company VPN (they are not accessible from the outside world), but when they reboot /dev/net/tun is not...
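For reference, a common way to make /dev/net/tun available inside a Proxmox LXC container is to add device-allow and bind-mount lines to the container's config file. This is a sketch only - the container ID 101 is hypothetical, and the `cgroup2` key assumes a recent PVE release (older hosts use `lxc.cgroup.devices.allow`):

```
# /etc/pve/lxc/101.conf  (101 is a placeholder container ID)
# Allow the container to use the TUN character device (major 10, minor 200)
lxc.cgroup2.devices.allow: c 10:200 rwm
# Bind-mount the host's /dev/net/tun into the container, creating the node if missing
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```

Because these lines live in the container config rather than in the guest, they survive container reboots, which is exactly the failure mode described above.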
I am using Ceph, and yes, I had the container on HA. I was pretty sure I had set the container to use the Ceph pool, but for some reason it is on local-lvm storage on node 3 - so either I screwed up and didn't select the Ceph storage pool, or something funky happened during HA trying to move it to...
So I have my cluster up and running, then the other day one of my LXC containers had an issue and crashed. I am not sure how, but now the configuration file is on one node (node 2) and the raw container image is on another (node 3). I am unable to start the container on either node (presumably...
OK, so this does not seem possible to do. I tried everything I could, including setting up a new bridge and setting OpenVPN up in bridge mode (tap instead of tun), but it wouldn't let me bridge to a Proxmox vmbr interface.
In the end I have had to add each container to the VPN network as a client in...
OK, so I am trying to do some network wizardry and failing.
I currently have a 3 node cluster with the following configuration for each node.
eth0 is the public IP address
eth1 is the cluster network
eth2 is the Ceph network
eth0-3 are all physical NICs attached to a managed switch with separate VLANs for...
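A sketch of what that per-node layout might look like in /etc/network/interfaces - the addresses below are placeholders I've invented for illustration, not values from the post:

```
# /etc/network/interfaces (sketch; all addresses are placeholders)
auto eth1
iface eth1 inet static
    address 10.10.10.1/24    # cluster (corosync) network

auto eth2
iface eth2 inet static
    address 10.10.20.1/24    # Ceph network
```

Keeping corosync and Ceph on dedicated NICs/VLANs like this is the usual way to stop storage traffic from starving cluster heartbeats.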
Soooo, I have been running my unicast Proxmox cluster for almost 2 years now with zero downtime and have been very pleased with it, but there are some things I would love to change and I now have the opportunity to do so.
We are about to get either GigE or 10GigE fiber installed in our office...
I have IPv6 working on my Host Node and a test container.
iperf3 results on the host node show around 1Gb/s up and down.
However, on the container I am having huge problems with the network speed.
I can ping6 ipv6.google.com no problems (as well as other ipv6 machines)
I can traceroute6 no...
I have three servers in a proxmox cluster:
pmn1
pmn2
pmn3
Each server has 2 physical NICs:
eth0
connected to the public IP address
eth1
connected to an RPN
The RPN also has a 4th server - an RPN VPN server, which is not part of the Proxmox cluster and is not under my control (it is to enable me to...
OK, I fixed this - the clue was in the critical error about the local corosync.conf being newer. I used pmxcfs -l to mount the FUSE filesystem locally, changed the config_version in totem to 9 (because I couldn't remember which edit I was on), removed the pmxcfs lockfile, and restarted pve-cluster...
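The recovery steps described above can be sketched as the following command sequence. This is a reconstruction under my own assumptions (exact lockfile path and the editor step vary by PVE version), not a verbatim transcript from the post:

```
# Stop the cluster filesystem service first
systemctl stop pve-cluster

# Mount /etc/pve in forced local mode so it can be edited on this node
pmxcfs -l

# Edit /etc/pve/corosync.conf and bump config_version in the totem
# section to a value higher than any previous edit (the post used 9)

# Unmount the local pmxcfs, remove its stale lockfile, and restart
killall pmxcfs
rm -f /var/lib/pve-cluster/.pmxcfs.lockfile   # path is an assumption; check your version
systemctl start pve-cluster
```

The key point is that corosync refuses a config whose config_version is lower than what it has already seen, which is why bumping the number well past the last known edit resolves the "local corosync.conf being newer" error.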
So, first of all, I tried to set up a cluster and failed because my hosting provider does not support multicast, so I decided to try to follow the information on configuring a unicast cluster, and I am having a nightmare.
Here is my /etc/hosts (public IP address & domain removed)
root@pmn1:~# more...
OK, it looks like multicast isn't working...
How do I revert my servers 2 and 3 back to life and remove them from the cluster without having to reinstall?
I have been pulling my hair out today and have reinstalled Proxmox on 3 different servers about 4 times each so far, because not only does it not work, but I can't seem to revert back either.
I have 3 servers (nodes) running Proxmox 4.2-23
All 3 servers are on an RPN network as well as having...