No, you don't. (It's only needed if you want to add an IP for your Proxmox host in this VLAN.)
Your setup is fine.
Are you sure that your physical switch port for Proxmox is correctly in trunk mode and allows VLAN 66?
Be careful: if you use a VLAN-aware vmbr0 and also tag a VLAN on bond0.X directly, you can't use that same VLAN for VMs, because the traffic will never reach vmbr0 (it'll be forced to go to bond0.X).
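If you want to double-check from the Proxmox side, here's a quick sketch (bond0 is just an example, adapt the interface name):

# list the vlans allowed on the bridge port
bridge vlan show dev bond0
# check if tagged vlan 66 frames are arriving from the switch
tcpdump -nei bond0 vlan 66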
AFAIK, the clean official way is:
you need to create new monitors and delete the old ones afterwards (and both need to be able to communicate during the transition).
I think it's also possible to dump, modify && reinject the monmap, but it's not easy...
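A rough sketch of both routes (the monitor name "foo" and the IP are placeholders; the monmap way needs the mon stopped and is risky, so test it carefully first):

# clean official way, one monitor at a time
pveceph mon create
pveceph mon destroy <old-mon-id>

# monmap dump / modify / reinject (offline)
systemctl stop ceph-mon@foo
ceph-mon -i foo --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
monmaptool --rm foo /tmp/monmap
monmaptool --add foo 192.168.1.10:6789 /tmp/monmap
ceph-mon -i foo --inject-monmap /tmp/monmap
systemctl start ceph-mon@foo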
If you use a VLAN-aware bridge, you should use "vmbr0.X" instead of "bond0.X" for your VLAN IP addresses.
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.200
iface vmbr0.200 inet static...
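For completeness, a hypothetical full vmbr0.200 stanza (address/gateway are placeholders, use your own):

auto vmbr0.200
iface vmbr0.200 inet static
    address 192.168.200.10/24
    gateway 192.168.200.1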
Here's a summary to help people:
bridge-disable-mac-learning 1
This disables unicast flooding && bridge learning on the bridge (Proxmox manually registers the VM MACs on the bridge ports).
That means that traffic coming into the server is not forwarded to a VM if the destination MAC is different from that VM's MAC...
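As a sketch, assuming a recent Proxmox where this option is supported in /etc/network/interfaces, it goes on the bridge itself:

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-disable-mac-learning 1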
ok, got it!
As far as I remember, currently we don't create the range in the NetBox IPAM (or other external IPAM).
We only create the subnet if it doesn't already exist in the IPAM.
I think it's because we don't have a specific API call when adding/deleting a range (it's just a value option of the subnet), so we should...
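For reference, if someone wants to experiment: NetBox (3.x, if I remember correctly) exposes a dedicated IP range endpoint. A rough sketch with a placeholder URL/token/addresses (endpoint and field names are from memory, double-check them against your NetBox version):

curl -s -X POST https://netbox.example.com/api/ipam/ip-ranges/ \
    -H "Authorization: Token <your-api-token>" \
    -H "Content-Type: application/json" \
    -d '{"start_address": "10.0.0.10/24", "end_address": "10.0.0.100/24", "status": "active"}'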
> We need to access a variety of VMs from the host system and the other way around, e.g. for the monitoring system, local apt repository, LDAP server and some more.
Do you really need to access the VMs from each node of your PVE cluster?
I mean, it's really a problem from the exit-node...
it can be done in /etc/network/interfaces
no vlan
-----------
iface vmbr0
    address ....

vlan aware on specific vlan
---------------------------------------
iface vmbr0.X
    address ....

on a sdn vnet directly
-------------------------------
iface <vnetid>
    address .....
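For the last option (SDN vnet), a hypothetical example with a placeholder vnet name and address:

iface vnet1
    address 10.0.10.254/24

With that, the node can reach the VMs on 10.0.10.0/24 directly through the vnet.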
If you have min_size=2 for your pool (and size=3) and you lose 2 disks (ON DIFFERENT SERVERS), you're going to have read-only PGs (until they have been re-replicated).
If you use min_size=1, it still works with only 1 disk left.
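You can check/adjust it per pool, for example on a hypothetical pool named "rbd":

ceph osd pool get rbd size
ceph osd pool get rbd min_size
ceph osd pool set rbd min_size 2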
I don't think it's related to NUMA. I have some virtual Ceph clusters where NUMA is not present; I get exactly the same warning message, "ceph osd numa-status" is also empty, and everything is working fine.
Maybe you could try to increase the debug level in ceph.conf: debug_osd = 20 for...
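For example, either in ceph.conf (then restart the OSD), or injected at runtime; OSD id 0 is just a placeholder:

[osd]
debug_osd = 20

# or at runtime, without a restart
ceph tell osd.0 injectargs '--debug-osd 20'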
It's not related.
set_numa_affinity is only done once, at OSD service start.
It seems that your OSD is restarting multiple times in a loop, then after 5 restarts it goes into protection to avoid an infinite loop and impact on the cluster.
do you have logs in /var/log/ceph/ceph-osd.*.log ?
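To see why it's crashing, something like this (OSD id 0 is just an example):

journalctl -u ceph-osd@0 --since "1 hour ago"
tail -n 200 /var/log/ceph/ceph-osd.0.log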
Well, technically it's possible to implement an OVS plugin (I'm currently helping a user to create an OVN plugin).
But from my point of view, OpenFlow SDN controllers have been dead since the end of the 2010s, because they are centralized controllers, generally not "standard", and can't be integrated...
This patch has been sent:
https://bugzilla.proxmox.com/show_bug.cgi?id=5324
but it has not yet been applied.
You can use a VLAN-aware vmbr0; SDN + MTU works fine with it currently.
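A minimal sketch of a VLAN-aware vmbr0 with a bigger MTU (interface names and the 9000 value are examples, they need to match your physical network and the SDN zone MTU):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    mtu 9000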