I am aware of that; it's a lab.
But this doesn't answer the question: is 2/2 supposed to keep serving on 4 nodes with 1 OSD each? I think the answer is yes.
Even with 3/2, it is supposed to keep serving, right?
Maybe it was just a remote-console issue.
In case 1 OSD/node reboots and we still have 3 nodes/OSDs online,
will the pool lock I/O if configured at 2/2?
I'm asking because my colleague told me that during his test, while a node was rebooting, the VMs stopped responding to I/O with 2/2 but worked correctly with 2/1, on a 4-node lab.
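For reference, the rule Ceph applies, as I understand it: a placement group only accepts I/O while at least `min_size` of its replicas are up. A minimal sketch of that rule (the function and the scenarios are illustrative, not a real Ceph API):

```python
def pg_serves_io(size: int, min_size: int, replicas_up: int) -> bool:
    """A placement group accepts I/O only while at least min_size
    of its `size` replicas are up."""
    return min(replicas_up, size) >= min_size

# 4 nodes, 1 OSD each: rebooting one node drops its PGs to 1 replica
print(pg_serves_io(size=2, min_size=2, replicas_up=1))  # 2/2 -> False, I/O blocks
print(pg_serves_io(size=2, min_size=1, replicas_up=1))  # 2/1 -> True, keeps serving
print(pg_serves_io(size=3, min_size=2, replicas_up=2))  # 3/2 -> True, keeps serving
```

Under that rule, your colleague's observation would actually be the expected behaviour for 2/2 during a node reboot.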
We have had no issues for 1.5 years with CHR running a full BGP table.
Multiqueue was our limitation with CHR on Proxmox at the beginning, so I encourage you to give them a new try; they are amazing.
I saw the forum thread this morning and replied to it.
I'm not seeing this anywhere else on the internet...
@mira can you tell me if this can be related to my issue here?
https://forum.proxmox.com/threads/mikrotik-vm-lose-network-with-32-multiqueue-vitrio-on-migration.122722/
Strangely, the VM boots without any error with 32 multiqueue at origin, so it's confusing.
Hi, I think I finally isolated an issue with a VM losing network connectivity when we move it from one host to the other.
All running 7.3.4; the MikroTik has 32 CPUs, so we configured 32 multiqueue.
Yesterday I downgraded for fun from 32 to 16 multiqueue, as another MikroTik with 16 CPUs and 16...
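For anyone following along: multiqueue on a Proxmox virtio NIC is set with the `queues=` option on the net device. A sketch of the change, with the VMID, MAC-less model string and bridge name as examples for your own values:

```shell
# set 16 virtio queues on net0 of VM 100 (VMID and bridge are examples)
qm set 100 --net0 virtio,bridge=vmbr0,queues=16
```

The change takes effect on the next full VM stop/start, not on reboot from inside the guest.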
Hi, I can't get InfluxDB to connect to our influxdata.com account; after spending hours I always get a connection timeout 500.
Your help would be appreciated!
(I tried InfluxDB, InfluxDB listener and InfluxDB listener v2.)
I'm probably not writing the correct information as the API suffix after .com...
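In case the question is what goes after `.com`: on InfluxDB 2.x the HTTP write endpoint is `/api/v2/write`, with the organization and bucket passed as query parameters. A small sketch of the full URL shape (the host, org and bucket names are only examples):

```python
from urllib.parse import urlencode

def influxdb2_write_url(base: str, org: str, bucket: str) -> str:
    """Build the InfluxDB 2.x write URL from a base host URL."""
    return f"{base.rstrip('/')}/api/v2/write?" + urlencode({"org": org, "bucket": bucket})

# example InfluxDB Cloud region host; substitute your own
url = influxdb2_write_url(
    "https://eu-central-1-1.aws.cloud2.influxdata.com",
    org="my-org",
    bucket="proxmox",
)
print(url)
```

Most clients only want the bare host (and port 443 for the cloud service) and append the `/api/v2/write` path themselves, so putting the suffix in the host field can break the request.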
Hi, in case of a NIC or switch failure, is it possible to define 2 networks?
(We use bonding, no worry, but also in case of LACP issues.)
example
cluster_network = 192.168.1.0/24 192.168.2.0/24
public_network = 192.168.4.0/24 192.168.3.0/24
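As far as I know, ceph.conf accepts a comma-separated list of subnets per network directive, so the example above would look like this (same subnets, just the syntax I believe Ceph expects):

```ini
[global]
    public_network = 192.168.4.0/24, 192.168.3.0/24
    cluster_network = 192.168.1.0/24, 192.168.2.0/24
```

Note that this only tells the daemons which subnets are valid to bind to; it is not by itself an automatic NIC failover mechanism.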
Other question: per my understanding, the monitors will...
Yes, with dozens of TB of backups I will probably have no issue purchasing an SSD SAS array :)
What do you use for SSD? I read about a lot of people who just don't care about SAS at all, as PBS, TrueNAS and Ceph can auto-heal.
I'm running a PBS with 12x 3TB 6Gb SAS on an HBA, and a GC takes around 45 min. I don't have any SSD in PBS, but I think that's not bad at all, with 580 backups on it.
2023-02-10T03:47:16-05:00: Removed garbage: 14.545 GiB
2023-02-10T03:47:16-05:00: Removed chunks: 10999
2023-02-10T03:47:16-05:00: Pending...
Yes, go into your MikroTik and check under the bridge settings whether VLAN filtering (vlan-aware) is enabled, and define your VLANs as tagged under the bridge VLAN section.
Maybe they are set as untagged by mistake too.
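From the RouterOS terminal the same check/fix looks roughly like this (bridge name, port name and VLAN ID are examples; adjust to your setup):

```
/interface bridge set bridge1 vlan-filtering=yes
/interface bridge vlan add bridge=bridge1 tagged=bridge1,ether2 vlan-ids=100
/interface bridge vlan print
```

The `print` at the end lets you spot ports that ended up in the untagged column by mistake.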