Awesome!! I successfully enabled VLANs on the OpenWRT device (Flint 2). I read your comment very carefully, configured VLANs on the specific devices I wanted segregated, and left the rest alone as desired. Thank you for your valued input.
Thanks, I figured out the problem. My ether2 bridge had no bridge ports. I had one configured with a name, autostart, and VLAN awareness, but I didn't know bridge ports were necessary, so I set the bridge port to eno2 and it worked. And in the VM interface, I set the VLAN...
Hi, without more details from you it's hard to say, but this error often arises from unstable I/O or storage paths (e.g. USB, remote mounts) and sometimes resolves after switching the backup mode or retrying.
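If the job currently runs in snapshot mode, for example, a retry with a different vzdump mode is easy to test (the VMID 101 and storage name "local" below are placeholders for your setup):

    # retry the backup in suspend mode instead of snapshot mode
    vzdump 101 --mode suspend --storage local
    # or stop mode, which shuts the guest down during the backup and avoids live I/O
    vzdump 101 --mode stop --storage local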
Hi, this is likely a Realtek driver issue: the RTL8153B uses the r8152 module, which can be unstable on Proxmox.
Check which driver is bound with ethtool -i enx00e04c680835; if it's wrong or failing, install r8152-dkms (or r8168-dkms) and test a direct IP (no bridge) to...
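As a rough sketch of that check (the NIC name comes from your logs; the test address and gateway are placeholders for your LAN):

    # confirm which driver is bound to the USB NIC
    ethtool -i enx00e04c680835
    # look for driver resets or errors
    dmesg | grep -i -e r8152 -e enx00e04c680835
    # quick direct-IP test that bypasses the bridge
    ip addr add 192.168.1.250/24 dev enx00e04c680835
    ip link set enx00e04c680835 up
    ping -c 4 -I enx00e04c680835 192.168.1.1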
Hi, -5 typically corresponds to -EIO (I/O error). That suggests the driver attempted to communicate with the device (e.g. via PCI config space, memory-mapped registers, etc.) but something failed at a low level (bus, hardware, MMIO access).
In...
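If you want to double-check what the numeric code maps to, a quick illustrative lookup from the shell:

    # -5 is the negated errno value; decode errno 5 with Python's standard library
    python3 -c "import errno, os; print(errno.errorcode[5], '-', os.strerror(5))"
    # prints: EIO - Input/output error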
Thanks so much in advance!
I read your message and it really gave me the nudge I needed.
I think I just need to follow your steps and not create a LAGG in OPNsense itself (next to the one in Proxmox); pretty sure that's where I messed up last...
Hi, it looks like your installer is stuck during USB device initialization; the "usb 1-5: device descriptor read/64, error -32" messages usually mean the installer can't properly read from the USB stick or the USB controller is having trouble.
Try...
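If the stick itself is suspect, one common step is to verify the ISO and re-write it, then try another port (ideally USB 2.0); the ISO filename and /dev/sdX below are placeholders, so double-check the target device before running dd:

    # compare against the checksum published on the download page
    sha256sum proxmox-ve_8.x.iso
    # re-write the image to the stick (this destroys the stick's contents)
    dd if=proxmox-ve_8.x.iso of=/dev/sdX bs=4M status=progress oflag=sync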
This isn’t a Proxmox issue — it’s between Zabbix and the Meraki API. The problem usually comes from API authentication, Org ID, or network/DNS reachability from your VM to api.meraki.com.
Start by testing API access directly from the VM with...
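As a hypothetical first check from the VM (replace YOUR_API_KEY with the key generated in the Meraki dashboard):

    # confirm DNS and reachability to the Meraki API endpoint
    getent hosts api.meraki.com
    # list the organizations visible to the key; a JSON list back means auth and Org access work
    curl -s -H "X-Cisco-Meraki-API-Key: YOUR_API_KEY" https://api.meraki.com/api/v1/organizations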
Thank you!
Above is my use case, since it resembles the current state I work in, and I believe that migrating from a single Docker node to a Docker Swarm cluster (keeping the same compose files, properly amended for Docker Swarm) will require...
Hi @FSNaval, I missed that you are using NVMe. If you are, a CephFS pool on NVMe will work OK even with metadata-heavy workloads. But I would still separate the containers that need fast disk onto RBD.
So... If you need shared files (many...
CephFS works, but it’s metadata-heavy—great for shared POSIX files, not ideal for DB-like/container write-intensive workloads without careful MDS sizing/tuning.
To save the trouble of tuning and reduce risk, I would use RBD volumes (fast, block-level).
I normally use onboard NVMe without RAID for the PVE host install, for performance, since the host is already in HA cluster mode.
Using an SD card sounds very risky to me.
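For the RBD side, a minimal sketch on a recent PVE release (pool name, storage ID, and PG count are placeholders):

    # create a dedicated RADOS pool for fast VM/CT disks
    pveceph pool create fast-rbd --pg_num 64
    # expose it to Proxmox as block storage for VM images and container root disks
    pvesm add rbd fast-rbd-storage --pool fast-rbd --content images,rootdir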
ZFS “deadman” usually points to I/O stalls: the drives fail to respond within the expected time.
Are all disks connected through the chipset SATA ports (no port multipliers)?
Try checking for slow links with dmesg | grep ata or zpool status -v.
Also, any...
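A few illustrative checks (device names are placeholders; repeat the SMART check for each disk):

    # kernel-side SATA link resets or timeouts
    dmesg | grep -i -e ata -e "hard resetting link"
    # per-vdev read/write/checksum errors
    zpool status -v
    # recent ZFS events, including deadman reports
    zpool events -v | tail -n 50
    # drive health, reallocated/pending sectors
    smartctl -a /dev/sda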
Hi, stale node entries or mismatched SSH keys can definitely cause cluster sync chaos.
In addition, make sure the new node’s ring0_addr matches the existing subnet in /etc/pve/corosync.conf, and that /etc/hosts across all nodes correctly maps...
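Some quick checks along those lines (the node name pve-node2 is a placeholder):

    # current quorum/membership view, run on each node
    pvecm status
    # confirm the ring0_addr entries sit in the expected subnet
    grep ring0_addr /etc/pve/corosync.conf
    # hostname-to-IP mapping must be consistent on every node
    cat /etc/hosts
    # SSH between nodes must work without a password prompt
    ssh root@pve-node2 true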
Hi, you will need to set this up using Proxmox bridges and VLAN tagging. Please try this (a rough interfaces sketch follows these steps)...
Create one bridge (e.g. vmbr0) for WAN (CHR ether1).
Create another VLAN-aware bridge (e.g. vmbr1) for LAN and VLANs (CHR ether2).
Attach VLAN interfaces (10...
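A rough /etc/network/interfaces sketch of those two bridges, assuming enp1s0 and enp2s0 as placeholder physical NICs:

    # WAN bridge, attached to CHR ether1
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

    # VLAN-aware LAN bridge, attached to CHR ether2
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094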
Hi, you’re likely hitting a routing and proxy ARP issue caused by multiple gateways and Hetzner’s MAC-bound IP setup.
At Hetzner, each additional IP or subnet must be assigned to a unique virtual MAC and attached to the VM NIC — you can’t just...
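Once the Robot panel has issued a virtual MAC for the additional IP, binding it to the VM NIC is a one-liner; the VMID, MAC, and bridge below are placeholders:

    # attach the Hetzner-issued virtual MAC to the VM's first NIC
    qm set 100 --net0 virtio=00:50:56:00:AB:CD,bridge=vmbr0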
Make sure your router → Proxmox host → LXC container network path is open. The usual blockers are service binding, firewalls, or missing NAT rules on the Proxmox host.
Ask them to check that the game server is listening on the IP, that...
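An illustrative set of checks plus a NAT rule on the host (the container IP 10.0.0.50 and port 27015 are placeholders):

    # inside the container: is the game server actually listening?
    ss -tulpn | grep 27015
    # on the Proxmox host: forward inbound traffic to the container (needs net.ipv4.ip_forward=1)
    iptables -t nat -A PREROUTING -i vmbr0 -p udp --dport 27015 -j DNAT --to-destination 10.0.0.50:27015
    iptables -A FORWARD -p udp -d 10.0.0.50 --dport 27015 -j ACCEPT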