OK
Yes, I decided to go with VXLAN and it seems to work a lot better than OVS/GRE with my implementation, described here:
https://forum.proxmox.com/threads/openvswitch-bridge-for-cluster-vms-across-private-network.76866/#post-342238
This seems to work beautifully!
Just to report back: VXLANs work beautifully! Definitely better than an Open vSwitch (OVS) bridge with GRE tunnels!
For my cluster I have configured vmbr2 in /etc/network/interfaces on each host, as you mentioned:
auto vmbr2
iface vmbr2 inet manual...
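For anyone else trying this, a minimal sketch of what a VXLAN-backed vmbr2 could look like with ifupdown2 (the VXLAN ID and peer IPs below are placeholders, not my actual values):
auto vxlan2
iface vxlan2 inet manual
        vxlan-id 2
        # unicast tunnels to the other cluster hosts (placeholder IPs)
        vxlan-remoteip 10.10.10.2
        vxlan-remoteip 10.10.10.3
        # leave room for the 50-byte VXLAN overhead
        mtu 1450

auto vmbr2
iface vmbr2 inet manual
        bridge-ports vxlan2
        bridge-stp off
        bridge-fd 0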
Since I don't have a subscription at the moment, I had to do this first:
Make a copy of the pve-enterprise.list file
run: cd /etc/apt/sources.list.d/
run: cp pve-enterprise.list pve-no-subscription.list
run: nano pve-enterprise.list, comment out the line by putting a # at the beginning, save...
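The copied pve-no-subscription.list then needs to point at the no-subscription repository; on PVE 6.x (Debian Buster) that line should look something like this:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription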
Thanks a lot! This is super helpful.
I've started off by removing the Open vSwitch bridges from my nodes and tried to install ifupdown2, but I'm getting this error:
# apt install ifupdown2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following...
I am trying to install ifupdown2 on my Proxmox hosts, but when I try to do it with apt I get the following:
# apt install ifupdown2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed...
Thanks. That future SDN feature looks like it will be very useful!
In terms of GRE tunnels, I came across the need for STP for what I wanted to do, and the Proxmox documentation says to enable RSTP using:
up ovs-vsctl set bridge ${IFACE} rstp_enable=true
But here...
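For context, that line is meant to sit inside the OVS bridge stanza in /etc/network/interfaces, roughly like this (vmbr2 used purely as an illustration):
auto vmbr2
iface vmbr2 inet manual
        ovs_type OVSBridge
        up ovs-vsctl set bridge ${IFACE} rstp_enable=true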
Hi, did you get anywhere with this? I'm trying to do the same thing in Proxmox with Open vSwitch to create fail-safe communication between VMs on different PVE hosts ( https://forum.proxmox.com/threads/openvswitch-bridge-for-cluster-vms-across-private-network.76866/ )
I'm new to this but I've seen...
I have set up a cluster with VMs on different host nodes: h1, h2, h3, h4
I have used an Open vSwitch bridge (vmbr2) defined on h1:
auto vmbr2
iface vmbr2 inet manual
ovs_type OVSBridge
post-up ovs-vsctl add-port vmbr2 gre1 -- set interface gre1 type=gre options:remote_ip=''ip...
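The full bridge definition on h1 ends up with one GRE port per remote host, something along these lines (the 10.10.10.x addresses are placeholders for the private IPs of h2, h3 and h4):
auto vmbr2
iface vmbr2 inet manual
        ovs_type OVSBridge
        # one GRE tunnel per remote host (placeholder IPs)
        post-up ovs-vsctl add-port vmbr2 gre1 -- set interface gre1 type=gre options:remote_ip=10.10.10.2
        post-up ovs-vsctl add-port vmbr2 gre2 -- set interface gre2 type=gre options:remote_ip=10.10.10.3
        post-up ovs-vsctl add-port vmbr2 gre3 -- set interface gre3 type=gre options:remote_ip=10.10.10.4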
Yes, the VMs share the same ZFS pool (2 mirrored SAS disks) on a Dell R630 server (HBA mode), so I suppose that is most likely the bottleneck.
The server has 2 x 12 cores, so I doubt it's the CPU?
In terms of VMs on a private network (bridge) with no ports/slave I assume sending data via scp on the private...
I have a VM with a 42 GB hard disk (format raw, zfs file system) which I'd like to reduce to 32 GB.
I thought I could use parted to shrink the partition inside the VM first and then decrease the disk size on the PVE side, but I'm struggling to find a method to do this.
Is there any method / documentation available?
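In case it helps, the rough sequence I have in mind (untested, and the dataset/VM names below are just examples) is: shrink the filesystem and partition inside the guest so everything fits well under 32 GB, shut the VM down, then shrink the zvol on the host and let PVE pick up the new size:
# inside the guest: shrink filesystem + partition (e.g. resize2fs / parted), then power off
# on the PVE host -- dataset name is an example, check yours with "zfs list":
zfs set volsize=32G rpool/data/vm-100-disk-0
# update the size recorded in the VM config:
qm rescan --vmid 100
Obviously with a backup first, since shrinking a zvol discards anything stored beyond the new size.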
I am testing some file transfer speeds in PVE 6.2-4.
I have got two VMs on a host, both using the same bridge (with no ports/slaves) for a private network (192.168.1.X).
They are both using a VirtIO network device and a ZFS pool for hard disk space, and have 3 GB of RAM and a 1-socket, 2-core CPU.
The file...
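To rule the network itself in or out, I was also planning a plain iperf3 run between the two VMs (the address below is just my test range):
# on VM 1 (192.168.1.10 in my example)
iperf3 -s
# on VM 2
iperf3 -c 192.168.1.10 -t 30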
Well, my initial plan was to use SLURM sitting on top of Proxmox for job submission/management, so I may be able to use that in some way by defining different compute node pools confined to different physical servers.
When you put it like that it makes perfect sense!!!
Is there any software you know of that could run on top of VMs to duplicate a program running on a VM on one physical node to a VM on another physical node, so that if one goes down there is redundancy? I have a feeling this is some kind of...
See references [1] and [2]
Do the following on all nodes unless stated.
(a) Prepare disks
(Assumes the disk is /dev/sdd)
fdisk -l /dev/sdd
pvcreate /dev/sdd
vgcreate vg_proxmox /dev/sdd
lvcreate --name lv_proxmox -l 100%vg vg_proxmox
mkfs -t xfs -f -i size=512 -n size=8192 -L PROXMOX...
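The remaining preparation (truncated above) continues roughly like this; the mount point, brick path, peer names and volume name are examples only:
mkdir -p /data/glusterfs
mount /dev/vg_proxmox/lv_proxmox /data/glusterfs
mkdir -p /data/glusterfs/brick
(b) Create the GlusterFS volume (on one node only)
gluster peer probe node2
gluster peer probe node3
gluster volume create gv_proxmox replica 3 node1:/data/glusterfs/brick node2:/data/glusterfs/brick node3:/data/glusterfs/brick
gluster volume start gv_proxmox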
I created a GlusterFS cluster across 3 nodes successfully, created a VM on one node and used the GlusterFS storage for it. I set the VM as HA with 1 migrate/boot. I simulated a node failure (power off, not shutdown). The VM migrates to another node but restarts rather than continuing, so all CPU/RAM...
Not exactly.
I have configured a 5 node PVE cluster with VMs accessible via NAT. I created ZFS storage on each PVE node (using some of the hdisks) for VM disks. I then set up VM replication from node 1 to 2, 2 to 3 ... 5 to 1 every 10 mins and using HA could get VMs to migrate and (re-)start...
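For reference, each of those replication jobs can also be set up from the CLI with pvesr (the VM ID, target node and schedule below are just examples):
pvesr create-local-job 100-0 node2 --schedule "*/10"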
Thanks. I am very impressed with Proxmox so far. If we end up using it for production I would really like my organisation to get at least a basic subscription to support the great work you guys are doing! Thanks for now.
Thank you. I'll give that a try. Are there any other storage options (https://pve.proxmox.com/wiki/Storage) which could be used for "(virtual) hyper-converged"?
I have configured a 5 node PVE cluster with VMs accessible via NAT. I created ZFS storage on each PVE node (using some of the hdisks) for VM disks. I then set up VM replication from node 1 to 2, 2 to 3 ... 5 to 1 every 10 mins and using HA could get VMs to migrate and (re-)start automatically...
Thanks for your answer.
One of the disadvantages of RAIDZ for me is that you cannot extend a RAIDZ (the number of drives used) on a node, but with a ZFS pool on each physical cluster node (with storage replication) I think I can just add more physical disks to any node and the zfspool at any...
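As an example of what I mean, adding capacity to a node's pool would just be something like the following (pool and device names are made up):
# grow the pool "tank" with another mirrored pair of disks
zpool add tank mirror /dev/sde /dev/sdf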