Just to add, this appears to be present in base Debian Bookworm as well, so rather than a Proxmox issue it seems to be a Debian issue. To get DHCPv6 to work over an Openvswitch bridge I had to do something similar to the above, but I ran it in the interfaces file:
# The primary network interface
auto...
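For illustration only (the real stanza is in the truncated config above, and every name here is a placeholder), the general shape is an OVS bridge stanza that kicks off the DHCPv6 client once the bridge is up:

# sketch of /etc/network/interfaces, assuming a bridge vmbr0 on physical port eno1
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1
    # pull the IPv6 lease on the bridge after it comes up
    post-up dhclient -6 -nw vmbr0 || true

allow-vmbr0 eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0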
So in typical fashion I found a workaround for this fairly quickly afterwards. Restarting Openvswitch solves the issue, so I added a cron job to restart openvswitch-switch one minute after reboot. Seems to fix it:
@reboot /bin/bash -c "sleep 60; /usr/share/openvswitch/scripts/ovs-systemd-reload"
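(That line goes in root's crontab, i.e. added with crontab -e as root.)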
Hi All,
Thanks for any help in advance
I am having an odd issue whilst trying to upgrade my cluster to Proxmox 8. I have Openvswitch configured using VXLAN to create virtual networks in a star topology. An example of my config is:
Center Node (192.168.1.40, pve 7.4):
allow-ovs vmbr15
auto...
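For reference, the pattern I am using (the real config is truncated above; the remote IP and VXLAN key below are placeholders) is an OVS bridge plus one OVSTunnel port per spoke node:

allow-ovs vmbr15
iface vmbr15 inet manual
    ovs_type OVSBridge
    ovs_ports vx_node41

# one tunnel port per spoke node; remote_ip and key are placeholders
allow-vmbr15 vx_node41
iface vx_node41 inet manual
    ovs_type OVSTunnel
    ovs_bridge vmbr15
    ovs_tunnel_type vxlan
    ovs_tunnel_options options:remote_ip=192.168.1.41 options:key=15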
Please ignore me; this was an issue with the exports on the NFS server. We are using btrfs subvolumes to separate out data and have multiple shares coming from the same filesystem.
Setting fsid=0 and fsid=1 on the exports for the different subvolumes fixed the issue.
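For anyone hitting the same thing, the change was along these lines (paths and network are placeholders), followed by exportfs -ra to re-export:

# /etc/exports - each btrfs subvolume exported from the same filesystem gets its own fsid
/srv/nfs/vmstore  192.168.1.0/24(rw,sync,no_subtree_check,fsid=0)
/srv/nfs/backup   192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)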
Hi There
Thanks for any help in advance. I am trying to set up a backup pool in a Proxmox 4.1 cluster (all hypervisors are running 4.1). When I create the pool I set the only content to be VZDump backup file (even though I am using VMs rather than CTs), and when I try to run a backup I get...
Is this likely to continue with new versions? Previously I could clear virtual machines off a node, upgrade the node and migrate them back with no downtime; it seems this is no longer possible.
If this is true then I am stuck at 4.1 for the foreseeable future.
Hi All
Just to finish off this post, I resolved the issue by restarting corosync on all the nodes but pve1-dh4 and it all started working again (members lists updating and whatnot).
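For the record, "restarting corosync" just meant running the following on each affected node (if /etc/pve had still looked stale afterwards, restarting pve-cluster as well would have been the next step, but corosync alone was enough here):

systemctl restart corosync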
Thanks for the help with this. Much appreciated.
pve-bhf-dh4 is non-existent at the moment due to a few logistical issues; there will be one eventually!
pve3-dh4 has been online for about a month but we have only just started populating it; it was previously a Proxmox 3 box that was working okay.
root@pve3-dh4:~# pvecm status
Quorum information...
Sorry! Here are the pvecm status outputs
root@pve1-dh4:~# pvecm status
Quorum information
------------------
Date: Fri May 20 15:59:54 2016
Quorum provider: corosync_votequorum
Nodes: 5
Node ID: 0x00000002
Ring ID: 728
Quorate: Yes
Votequorum information
----------------------
Expected...
Hi There
I am getting a "No Such Cluster node" error when trying to migrate to or from a node called pve3-dh4. I checked /etc/pve/.members on each node and they are not in sync. One node has
{
"nodename": "pve1-dh4",
"version": 11,
"cluster": { "name": "virtus-v4", "version": 6...
Is there a timeline for Ceph cluster names to be supported in the main release? I ask as I have a use case for multiple separately named Ceph clusters.
Thanks
Hi There
I have a few Proxmox clusters with up to 12 nodes in them and want to take advantage of ZFS and L2ARC. I have PCIe SSDs configured for L2ARC and ZIL, and spinning disks for main storage. I then share the zpools using NFSv3 (with the hard,intr options) and VM disk images are created as files...
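For context, the log (ZIL/SLOG) and cache (L2ARC) devices were added with the usual zpool commands, roughly as follows (pool and device names are placeholders):

zpool add tank log /dev/disk/by-id/nvme-ssd1-part1
zpool add tank cache /dev/disk/by-id/nvme-ssd1-part2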
This turned out to be an issue with my RAID card. The 3ware 9650SE has options for enabling JBOD but it basically doesn't work very well. I ended up setting all my drives to Single and all of a sudden it's working.... dang RAID cards!
Hi There
Thanks in advance for any help. I am in the process of migrating to Proxmox 4.1, but after re-installing a number of times from the ISO I get a GRUB error "couldn't find a valid DVA". I am trying to install using ZFS RAID 1, which is exactly the same setup I had in Proxmox 3.4. Anyone...