We have a production cluster.
Suddenly networking on vmbr0 stopped working on two member nodes; node 1 is OK.
How do I get this working again? Any hints are welcome!
auto lo
iface lo inet loopback

iface enp69s0f0 inet manual
        mtu 9000

iface enp204s0f0 inet manual
        mtu 9000

iface...
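For comparison, a minimal vmbr0 stanza as it should look on a working node; the address, gateway and bridge port below are placeholders, not my real values:

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports enp69s0f0
        bridge-stp off
        bridge-fd 0
        mtu 9000

If ifupdown2 is installed, ifreload -a applies the change without a reboot.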
Indeed, the snapshot was taken with the install ISO attached from local storage ...
So we have to take care to unmount local CD-ROMs first, before taking a snapshot ....
This is new to me ...
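If it happens again, the ISO can also be ejected from the CLI before snapshotting; assuming the installer is attached as ide2 (the usual PVE slot for CD-ROM drives):

qm set 100 --ide2 none,media=cdrom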
Yep, a snapshot is in place ... but the snapshots are on vmpool in Ceph ...
Logs are not available; the pve-manager GUI is blocking this ...
regards Gerhard
BTW ... how do I edit my signature? I can't find any option in my profile....
pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager...
I have no clue why live migration is NOT possible; the VM has NO local resources.
Any help is appreciated.
qm config 100
agent: 1
boot: order=scsi0;net0
cores: 2
memory: 16384
name: AD
net0: virtio=E2:9D:97:20:F8:8F,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win8
parent: voruseranlegen...
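To get the exact reason the migration is refused, running it on the CLI prints the blocking resource; the target node name below is just an example:

qm migrate 100 pve02 --online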
Hi, I have a worst case:
the OSDs in a 3-node cluster, 4 NVMes each, won't start.
We had an IP config change in the public network and the MONs died, so we managed to bring the MONs back up with new IPs.
Corosync on 2 rings is fine,
all 3 MONs are up,
but the OSDs won't start.
How do I get back to the pool, already...
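For reference, the usual first check after a public-network change is that the new subnet is also set in the shared Ceph config before restarting the OSDs; the subnet below is only a placeholder:

# /etc/pve/ceph.conf
[global]
        public_network = 10.10.10.0/24

# then on every node:
systemctl restart ceph-osd.target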
This was a match winner :) thx for your responses!
iperf -c 10.101.200.131 -P 4 -e
------------------------------------------------------------
Client connecting to 10.101.200.131, TCP port 5001 with pid 3252
Write buffer size: 128 KByte
TCP window size: 325 KByte (default)...
Nope, it is not ... how do I fix this?
BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.4.65-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
My fault ... found it .... the MTU was 1512 ... set it to 9000 ....
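For anyone hitting the same thing: the change can be done on the fly and then made persistent; the interface name below is just one from the config above, the 1512 may sit on a different device in your setup:

ip link set enp69s0f0 mtu 9000

# persistent, in /etc/network/interfaces:
iface enp69s0f0 inet manual
        mtu 9000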
iperf -c 10.101.200.131 -P 4 -e
------------------------------------------------------------
Client connecting to 10.101.200.131, TCP port 5001 with pid 18556
Write buffer size: 128 KByte
TCP window size: 325 KByte (default)...
After the firmware update of the Mellanox cards, still nowhere near 100 Gbit/s :(
iperf -c 10.101.200.131 -P 4 -e
------------------------------------------------------------
Client connecting to 10.101.200.131, TCP port 5001 with pid 5645
Write buffer size: 128 KByte
TCP window size: 85.0 KByte...
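A run with few streams and a small window rarely fills a 100 Gbit/s link; a retry with more parallel streams, a larger window and a longer interval may help. The values below are just a starting point, not tuned numbers:

iperf -c 10.101.200.131 -P 8 -w 2M -t 30 -e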
I just found a way to accomplish the firmware update without messing with a driver update that doesn't come from the Proxmox repo!
This is much more straightforward :)
wget -qO - http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox | apt-key add -
Download the package from Mellanox...
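As an alternative that avoids the Mellanox repo entirely, the flash can also be done with mstflint from the plain Debian repos; the PCI address and firmware file name below are placeholders:

apt install mstflint
mstflint -d 81:00.0 query
mstflint -d 81:00.0 -i fw-ConnectX-5.bin burn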
Hi, I have nearly the same hardware.
Switch: SN2100
Date and Time: 2020/10/15 16:00:25
Hostname: switch-a492f4
Uptime: 54m 24s
Software Version: X86_64 3.9.0300 2020-02-26 19:25:24 x86_64
Model: x86onie
Host ID: 0C42A1A492F4
System memory...
I removed the snapshot from the VM and made a try with RDMA ... same results....
How do I manage this now?
Start commands for VMs are managed by the Proxmox GUI ....
I thought defining RDMA for Ceph is a transparent action; how have you managed this within Proxmox?
I have no clue, I'm lost in a maze ...
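From what I understand it is not transparent; the messenger type has to be switched explicitly in ceph.conf for all daemons, roughly like this (the device name is a placeholder for whatever ibv_devices reports):

[global]
        ms_type = async+rdma
        ms_async_rdma_device_name = mlx5_0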
Why are other KVMs running? Only this 2016 VM with snapshots is troublesome .... without installing rbd-nbd ....
I will consolidate the snapshot and give it another try this afternoon.
Seems to be OK, but one Windows 2016 VM does not start; all other KVMs and containers start and behave as expected ...
Perhaps because a snapshot had been made before?
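Consolidating here just means removing the snapshot; from the CLI that would be roughly (snapshot name left as a placeholder):

qm listsnapshot 100
qm delsnapshot 100 <snapname>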
/home/builder/source/ceph-12.2.5/src/msg/async/rdma/Infiniband.cc: In function 'void Infiniband::set_dispatcher(RDMADispatcher*)'...
Thx! This was a match winner. I stored them in /etc/systemd/system .... my fault apparently ..
YMMD :)
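In case someone finds this later: /etc/systemd/system is where custom units and drop-in overrides belong; for Ceph with RDMA the commonly needed override is raising the memlock limit, roughly like this (path and value are my guess at the standard fix, not something Proxmox ships):

# /etc/systemd/system/ceph-osd@.service.d/rdma.conf
[Service]
LimitMEMLOCK=infinity

# then:
systemctl daemon-reload
systemctl restart ceph-osd.target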
Normal TCP/IP:
Total time run: 60.019108
Total writes made: 41911
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 2793.18
Stddev...
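For reference, output in this shape comes from a 60 second rados write benchmark, roughly (the pool name below is just an example):

rados bench -p vmpool 60 write --no-cleanup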