Might running iperf3 on the PVE host itself, concurrently with the VM(s), be a good test? This would cover the client VMs running network traffic at the same time as, for instance, backups of those clients on the host. Also, running iperf3 in both directions...
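A minimal sketch of that kind of test (the host IP and duration are placeholders):

# On the PVE host: start an iperf3 server, then kick off a backup or other host-side load
iperf3 -s

# From inside a VM, while the host is busy
iperf3 -c <pve-host-ip> -t 30        # VM -> host
iperf3 -c <pve-host-ip> -t 30 -R     # host -> VM (reverse direction)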
Good afternoon
I have been using Packer to build images/templates for Ubuntu and now I am looking to expand it to Debian, Rocky Linux, FreeBSD and OpenBSD.
It feels like I can't be the first person doing this; does this forum have any good...
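For illustration only, the per-distro builds can usually be driven from one template directory with distro-specific var files; a rough sketch of the CLI side (the var-file name is hypothetical):

packer init .
packer validate -var-file=debian-12.pkrvars.hcl .
packer build -var-file=debian-12.pkrvars.hcl .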
It sounds like your issue might be with network bridging or NAT configuration in your LXC setup. The 192.168.x.x IP is local, so external players can’t connect directly. Make sure your LXC container has a bridged network interface and verify that...
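As a rough sketch of what to check (CTID, interface, address and port below are all placeholders):

# How is the container's network interface defined?
pct config 100 | grep ^net

# If the container sits behind NAT on the host, forward the game port to it
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25565 \
  -j DNAT --to-destination 192.168.1.100:25565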
I have an MSI Z270 SLI Plus motherboard; the BIOS is locked by MSI for gaming. The locked BIOS has no OC option, and without OC I cannot do PCI passthrough.
Does anybody know how to flash this board with the standard Z270 BIOS so I can use OC?
Thanks...
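Side note, not about the flashing itself: once an unlocked BIOS exposes the VT-d option, the passthrough prerequisites can be sanity-checked from Proxmox roughly like this (assuming an Intel platform and GRUB):

# IOMMU/VT-d should show up in the kernel log once enabled in the BIOS
dmesg | grep -e DMAR -e IOMMU

# Depending on the kernel, the command line may also need intel_iommu=on
grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub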
The server is at Hetzner, and there are 2 VM servers; one is working fine, the other can't even ping anything.
One problem is that there are 2 gateways, as Hetzner gives random IP addresses and gateways (IP with MAC).
Proxmox machine settings:
source...
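For comparison, the routing inside the failing VM can be inspected and, for a Hetzner-style routed setup, adjusted roughly like this (203.0.113.1 is a placeholder gateway, eth0 a placeholder interface):

# What the broken VM currently has
ip addr show
ip route show

# Hetzner single IPs are often routed via a gateway outside the VM's subnet,
# so the VM needs an explicit on-link route to the gateway first
ip route add 203.0.113.1 dev eth0
ip route add default via 203.0.113.1 dev eth0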
Hi
My ZFS tank is reaching its capacity limit and needs to be expanded.
It consists of 3 RAIDZ2 vdevs with 10 TB HDDs each.
I would now like to expand one of these vdevs from 8 x 10 TB to 8 x 26 TB.
Since I have never done this before, I wanted...
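For reference, the usual disk-by-disk approach would look roughly like this (pool name and device paths are placeholders; treat it as a sketch, not a tested procedure):

# Allow the vdev to grow once every member disk has been replaced
zpool set autoexpand=on tank

# Replace one 10 TB disk at a time and wait for each resilver to finish
zpool replace tank /dev/disk/by-id/old-10tb-disk1 /dev/disk/by-id/new-26tb-disk1
zpool status tank                     # repeat for the remaining 7 disks

# If the extra capacity does not appear afterwards, expand explicitly
zpool online -e tank /dev/disk/by-id/new-26tb-disk1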
This seems to be a recurring issue, with many different solutions. But I haven't seen/found this yet:
Situation:
2 fully up-to-date PVE installations (8.4.14), in a "Datacenter" setup, but without any HA setup (anyway, HA would require 3 hosts)...
Hi everyone,
I’d like to ask for advice on improving user experience in two Windows Terminal Servers (around 15 users each, RDP/UDP).
After migrating from two standalone VMware hosts (EPYC 9654, local SSDs) to a Proxmox + Ceph cluster, users...
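To make the comparison concrete, these are the kind of per-VM settings often checked first after such a migration (VMID, storage and disk names are placeholders; a sketch, not a recommendation for your exact setup):

# Inspect what a migrated VM is actually configured with
qm config 100

# Commonly suggested for Windows guests on Ceph: host CPU type and
# virtio-scsi single with an iothread on the system disk
qm set 100 --cpu host
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,cache=writeback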
What's your main time-consuming part? We have around 80 VMs on 10Gb NFS across 5 nodes; with maintenance mode and VM/LXC HA definitions they auto live-migrate between nodes, and everything is done in 30 minutes.
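Roughly, per node (the node name is a placeholder):

# Drain a node: its HA-managed guests live-migrate to the other nodes
ha-manager crm-command node-maintenance enable pve01

# ... patch and reboot the node, then let guests move back
ha-manager crm-command node-maintenance disable pve01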
Hello,
maintenance mode is for HA-managed resources, but it has nothing to do with overall cluster health or with the reboot behaviour you're describing.
Did you have a look at the corosync.service logs on the non-patched servers for the time...
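For the log check, something along these lines (the unit names are the standard ones, the time window is a placeholder):

# On the affected node, around the time of the unexpected behaviour
journalctl -u corosync.service -u pve-cluster.service --since "1 hour ago"

# Current membership and quorum state for comparison
corosync-cfgtool -s
pvecm status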
With the falling price of network cards, there was a desire to build a two-node cluster with two points of failure, with an NVME disk on each node for shared storage. This was supposed to be a cluster option for home or a small business. The...
After running the following, I am wondering where this RAID is coming from:
root@pve02:/# mdadm --detail /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Tue Jul 15 09:07:59 2025
Raid Level : raid1
Array Size ...
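A few commands that usually reveal where an unexpected md array comes from (the device names below are placeholders):

# Every md array the kernel has assembled right now
cat /proc/mdstat

# Which block devices carry an md superblock
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
mdadm --examine /dev/sda /dev/sdb

# Arrays assembled automatically at boot are usually listed here
cat /etc/mdadm/mdadm.conf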
What are you trying to accomplish? I have a X520 card in my main Proxmox node and it shows up as two different NICs: enp1s0f0 and enp1s0f1. I have each of them assigned to different vmbrs. In my edge server (where pfSense lives) I also have an...
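For illustration, the relevant part of /etc/network/interfaces with one bridge per port might look roughly like this (addresses are placeholders, only the bridge stanzas shown):

cat /etc/network/interfaces

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.2/24
        gateway 192.0.2.1
        bridge-ports enp1s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp1s0f1
        bridge-stp off
        bridge-fd 0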
Hello all, I've been running PVE for a few years now, and I feel like I have yet to grasp the PROPER way to patch and reboot nodes.
I have a 3 (or 5) node PVE9 cluster of HP DL380 servers running CEPH. 99% are VMs. I have redundant corosync...
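For the Ceph side specifically, the per-node cycle usually boils down to something like this (a sketch, not a full runbook):

# Before taking the node down: stop Ceph from rebalancing data away
ceph osd set noout

# Patch and reboot the node (guests migrated off beforehand)
apt update && apt full-upgrade
reboot

# After the node is back and all its OSDs have rejoined:
ceph -s                 # wait for HEALTH_OK
ceph osd unset noout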