I think the problem here is the kernel in the virtualisation layer.
Proxmox runs a Linux kernel while pfSense runs FreeBSD, which is a different kernel entirely, so there's essentially an emulation layer in between. One way to prove this would be to run a firewall that uses a Linux kernel instead. Something like...
I've come across this post https://forum.proxmox.com/threads/kernel-panic-whole-server-crashes-about-every-day.91803 which has some options. I ended up adding the amd64-microcode package and adding a line to the drives in storage.cfg; that seems to have fixed my issue, which was linked to IO on my disks.
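For anyone wanting to try the microcode half of that fix, it's just the Debian package (you may need the non-free component enabled in your APT sources); the storage.cfg line is truncated above, so I won't guess at it here:

apt update
apt install amd64-microcode   # AMD CPU microcode updates, loaded at early boot
reboot                        # reboot so the new microcode actually gets applied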
I've also been hitting this issue the past few days. I've been plotting using a VM onto an NVMe drive, and whenever I've tried to copy off the NVMe onto spinning rust, whether that's an NFS target or reattaching the drive to a single VM or LXC, I've been getting kernel panics between 500MB and 12GB...
I seem to be hitting this issue also. I've also got an MSI B450-F Tomahawk, 32GB RAM, and a Ryzen 3900X. I was running an LXC with an NVMe as a mount point and was hitting issues when copying over to a spinning disk in another VM. I then had a VM with the NVMe and it seemed fine to put data on it...
Got it sorted.
I looked up the manual for corosync: the authkey needs to match across the cluster nodes. I picked one authkey and copied it across to the other node, then restarted the pve-cluster and corosync services, and the log showed the cluster was up!
Dec 3 09:18:07 auricom...
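Roughly the commands involved, for anyone hitting the same thing ("node2" is a placeholder for the other cluster member):

scp /etc/corosync/authkey node2:/etc/corosync/authkey   # push the chosen key to the other node
ssh node2 'systemctl restart pve-cluster corosync'      # restart the cluster services remotely
systemctl restart pve-cluster corosync                  # and locally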
I have a 3-node cluster with a VM that temporarily comes online just for quorum duties.
I have an old PC that I'm turning into a server. I removed the VM from the cluster, then tried to add the new server. It failed and seemed to break the cluster. At first the 2 remaining members wouldn't see...
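For reference, the clean removal/re-add flow is roughly this (the node name and IP are placeholders, and I'm not certain this is exactly where mine went wrong):

pvecm delnode oldnode    # on a surviving member: drop the removed node from the cluster
pvecm add 192.168.1.10   # on the fresh server: join via the IP of an existing member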
I think I have similar symptoms to you. I have 7 LXC containers running; a few are as old as the LXC GUI mods, the others are from the last 6 months or so, and they all get backed up every morning at 0200. However, I noticed that some were failing; after looking into it, the ones that are failing have...
I'm trying to run SABnzbd inside a CentOS 7 LXC and hitting this problem. I thought I had it sorted by adding the following:
echo "export LANG=en_US.UTF-8" >> /etc/profile.d/locale.sh
echo "export LANGUAGE=en_US.UTF-8" >> /etc/profile.d/locale.sh
source /etc/profile.d/locale.sh
Now when I do a...
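If the exports alone don't stick, my guess (untested here) is that the locale was never generated inside the container in the first place:

localedef -i en_US -f UTF-8 en_US.UTF-8   # build the locale inside the CentOS 7 container
locale                                    # confirm nothing falls back to POSIX/C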
I have a 2-node cluster (3 if you class the VM I run to break the tie in case one goes bad) as a lab, and a load of VMs. It's essentially PROD in my case, as I have Plex and a few containers for Wiki/Ansible.
I use the "cluster" address on the same range as my backend app traffic. At the...
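A quick way to check what address corosync is actually bound to, and the quorum state:

pvecm status   # shows quorum info and member addresses
# the per-node address lives in /etc/pve/corosync.conf as ring0_addr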
As above, install base Debian and install Proxmox on top. It has been working fine since I posted this, and I even recently did the upgrade to v5.0, which went smoothly.
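The install-on-top steps are roughly these (the repo lines below match the PVE 5.x/stretch era; check the current wiki for your release):

echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi   # pulls in the PVE kernel and tooling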
Get a cheap managed switch that can handle VLANs. Have both your cluster members trunked down to the switch, and then either have one bridge that is VLAN aware (I have this), or have a bridge per network and attach that via the VLAN tag.
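The VLAN-aware variant looks something like this in /etc/network/interfaces (eno1 and the VID range are placeholders for your trunk port):

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
# guests then pick their VLAN with the tag= option on their virtual NIC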
I just upgraded my 2-node cluster to v5.0-32. I've been planning a cheeky workaround for the 2-node cluster for a while, so I finally spun up a VM on my PC and added a 3rd node just in case one goes down (so I don't get quorum errors and can't start VMs!).
I used:
qm migrate 250 --with-local-storage...
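For anyone copying this, the full form looks roughly like the below on recent releases (the target node name is a placeholder; note that recent qm versions spell the flag --with-local-disks):

qm migrate 250 targetnode --online --with-local-disks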
Can you do the VLANs inside of the VM? I have a pfSense VM running and pass it 2 interfaces, eth0 and eth1.
eth1 is on the vmbr for the WAN, which only pfSense has an IP on (or IPv6), and eth0 is on a vmbr with VLAN awareness enabled.
Inside the VM I set up VLANs on eth0. This means...
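From the Proxmox side, the relevant VM config lines look something like this (in /etc/pve/qemu-server/<vmid>.conf; the MACs and bridge names are made up). Leaving tag= off net0 means the guest sees the tagged frames and pfSense builds the VLAN sub-interfaces itself:

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0   # trunk: no tag=, guest handles VLANs
net1: virtio=DE:AD:BE:EF:00:02,bridge=vmbr1   # WAN-only bridge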
Remember that with bonds/LAG/LACP, 1+1 != 2, and similarly 1+1+1+1 != 4. The bond chooses a link through a hashing algorithm and keeps using the same link for any flow with the same Src/Dst IP and ports.
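A hedged example of what that looks like in /etc/network/interfaces; with a layer3+4 hash policy, different IP/port flows can at least spread across members (interface names are placeholders):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad               # LACP
        bond-xmit-hash-policy layer3+4  # hash on Src/Dst IP + ports
        bond-miimon 100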