There is another unexpected behavior in this context. Tested with a 3-node cluster running 9.1.6 (nodes: ve1, ve2, ve3).
vm-101 was created with cloud-init on node ve1. ZFS replications were created, HA was enabled for vm-101, and vm-101 was started. Replications for vm-101 are only created for...
I sent a reply yesterday, but apparently I was signed out and the email never went through here; I'm only realising this now that I came back for an update. Anyway, what I tried to send is that my colleague and I tried some additional steps yesterday and got it working (sort of), and that I'd put...
Hi,
so I reproduced the three-node cluster and used VXLAN with a router inside the cluster. This allowed me to live-migrate machines.
I'll use the following abbreviations:
<cluster-gateway>: gateway the nodes use in their /etc/network/interfaces, e.g. 172.16.0.254
<ip-cn1>: IP of cluster node 1...
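For anyone trying to reproduce this, here is a minimal sketch of what the VXLAN part of /etc/network/interfaces could look like on one node. The interface names, VXLAN ID, MTU and the 10.0.100.0/24 overlay subnet are assumptions for illustration, not taken from the actual setup:

```
# Hypothetical VXLAN overlay on cluster node 1 (ifupdown2 syntax).
# <ip-cn2>/<ip-cn3> stand for the other nodes' underlay IPs.
auto vxlan100
iface vxlan100 inet manual
    vxlan-id 100
    vxlan_remoteip <ip-cn2>
    vxlan_remoteip <ip-cn3>
    mtu 1450

auto vmbr100
iface vmbr100 inet static
    address 10.0.100.1/24
    bridge-ports vxlan100
    bridge-stp off
    bridge-fd 0
```

The same fragment goes on each node, with the vxlan_remoteip lines pointing at the respective peers; the router VM then gets an interface on vmbr100.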
Yes, as normal as it can be.
Nothing suspicious; I have searched.
agent: 0
boot: order=virtio0
cores: 4
cpu: x86-64-v2-AES,flags=+hv-tlbflush
description: 192.168.0.110
machine: pc-i440fx-8.0
memory: 12288
meta: creation-qemu=8.0.2,ctime=1689610365
name: winprox
net0...
I have a setup that should be quite common:
- Legacy network with an unmanaged switch behind a Fritzbox router
- a new managed switch with 3 ports in use for now:
1: uplink to the Fritzbox, VLAN 1 (should be untagged),
2: port to enp1s0 of my Proxmox host (WAN), also VLAN 1,
3: Trunk-...
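In case it helps to see this port layout in config form, a minimal sketch of the Proxmox side with a VLAN-aware bridge. The addresses (Fritzbox default 192.168.178.0/24) and the VLAN ID range are assumptions:

```
# Hypothetical /etc/network/interfaces fragment: enp1s0 goes to port 2
# of the managed switch; the bridge is VLAN-aware so tagged traffic from
# the trunk port can be handed to guests with a VLAN tag on their NIC.
auto enp1s0
iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.178.10/24
    gateway 192.168.178.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Untagged VLAN 1 traffic then reaches the host address directly, while guests on other VLANs set the tag in their network device settings.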
Oh, ok... apologies.
here you go:
auto lo
iface lo inet loopback
auto ens2f0
iface ens2f0 inet manual
auto ens2f1
iface ens2f1 inet manual
auto eno1
iface eno1 inet dhcp
auto eno2
iface eno2 inet dhcp
auto eno3
iface eno3 inet manual
auto eno4
iface eno4 inet manual
auto bond0
iface...
Oh, it looks very interesting! But actually I do not understand it:
root@zfs1:~# pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup
It sends local CT 100 to the remote... what? A dataset? What if the local CT has multiple mount points? Will it merge them all? (That would be stupid...)
The router is 192.168.40.1 and the subnet is /24. The IP address of my phone does start with 192.168.40.x. I tried it on my computer (it still didn't work), but it's connected through Ethernet, and for some reason my ISP's app won't let me see info about devices connected through Ethernet.
I'm not...
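One quick sanity check for this kind of "same subnet?" question: with a /24 mask, two addresses can only reach each other directly if their first three octets match. A tiny shell sketch (the phone address used below is a made-up example):

```shell
# For a /24 network, two IPv4 addresses are on the same subnet
# exactly when everything before the last dot is identical.
same_24() {
  [ "${1%.*}" = "${2%.*}" ] && echo "same /24" || echo "different subnets"
}

same_24 192.168.40.1 192.168.40.57   # router vs. phone -> same /24
same_24 192.168.40.1 192.168.41.57   # would explain an unreachable device
```

If both devices really are in 192.168.40.0/24, the problem is more likely a firewall or isolation setting than addressing.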
I am having an issue when backing up an unprivileged LXC container. The boot disk size for the LXC is only 16GB, but when I try to back up the LXC (ID 100), it continues indefinitely until it uses all of the space on my "local" storage and then fails. I have no issues backing up my Linux VM (ID...
Here's how I do it:
root@pve-01:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or...
This is fantastic and appears to be very similar to what I wish to achieve. I have a single onboard NIC, eno1, which I plan to use as the dedicated management port, and a Dell Intel i350-T2 2-port PCIe card; those are vmbr0 (WAN) and vmbr1 (LAN). Currently I don't have OPNsense installed, but I will be...
Tell us about your hardware, especially the Proxmox host. How many NICs do you have installed/available? I find that if you want to run OPNsense (or in my case, the very similar pfSense), the easiest route is to have 2-4 NICs available. It can certainly be done with just one NIC, but it gets exponentially...
Hello ThoSo.
Let's forget about the HDD for a moment. In parallel to my perfectly working production network, I would like to have a second way into my server, a "direct line".
I have one port on the Proxmox server, and I have assigned a fixed IP. On the...
Addendum: here is the /etc/network/interfaces file as well.
It is from server 1 instead of server 2, but the behavior is exactly the same: a ping arriving on one port is answered with the MAC address of the bridge on another port.
root@pve:~# cat /etc/network/interfaces
#...
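The symptom described here (a ping on one port answered with another port's bridge MAC) matches Linux's default "ARP flux" behaviour: the kernel will answer ARP requests for any local address on any interface. If that is indeed the cause (an assumption, to be verified with tcpdump), the usual knobs are these sysctls:

```
# /etc/sysctl.d/90-arp-flux.conf (sketch) -- apply with `sysctl --system`
# Only reply to ARP if the target IP is configured on the receiving interface:
net.ipv4.conf.all.arp_ignore = 1
# Always use the outgoing interface's own address in ARP requests:
net.ipv4.conf.all.arp_announce = 2
```

These can also be set per interface (net.ipv4.conf.<ifname>.*) if the stricter behaviour is only wanted on some ports.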
Hello everyone.
I am pretty frustrated after a good 10 hours of fighting with my network configuration and cannot solve the problem at all. The topic should really be "simple", and I am very unsure whether I am doing something fundamentally wrong or whether there is a bug somewhere...
Environment:
proxmox-ve: 9.1.5
pve-manager: 9.1.5
lxc-pve: (latest as of 2026-03-14)
Storage: iSCSI LUN from NetGear NAS with LVM on top (vm-iscsi-lvm)
Problem:
When creating a new LXC container directly on an iSCSI-LVM storage, the container fails to start with the following error:
---------...
Thanks Lucas
For systemctl status I get folder and service addresses, and PIDs for the 3 services you have listed.
For ss -tapen I get the following:
sshd 0.0.0.0:22
pvedaemon 127.0.0.1:85
pveproxy *:8006
Using pvesh get /cluster/status I get a table
ID  NAME  TYPE  IP  LEVEL
local  node...
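If the table rendering is hard to read, pvesh can emit machine-readable output instead; for example (the jq filter is just an illustration of picking out a few fields):

```shell
pvesh get /cluster/status --output-format json \
  | jq '.[] | {id, name, type, ip, online}'
```

That makes it much easier to paste the exact values here without the columns running together.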
Thank you for taking a look at my settings!
This is the interfaces file of the working server:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please...
I have identical host servers that I have configured with IP addresses on a VLAN. They can ping each other. One of them can ping my secondary storage server; the other cannot. When I ping from the host that cannot, I get zero results from the ping. Interestingly, I can have a VM on that...