Recent content by Gerhard W. Recher

  1. vmbr0 suddenly stopped working on 2 nodes out of 3 Urgent request for help

    We have a production cluster where networking on vmbr0 suddenly stopped on 2 of the 3 member nodes; node 1 is OK. How do we get this back to work? Any hints are welcome!
    auto lo
    iface lo inet loopback

    iface enp69s0f0 inet manual
        mtu 9000

    iface enp204s0f0 inet manual
        mtu 9000

    iface...
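    For reference, a minimal sketch of what a working vmbr0 stanza in /etc/network/interfaces typically looks like on such a node; the address, gateway, and bridge port below are assumptions for illustration, not taken from the original post:

        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.11/24      # hypothetical management address
            gateway 192.0.2.1          # hypothetical gateway
            bridge-ports enp69s0f0     # assumed uplink; use the actual NIC
            bridge-stp off
            bridge-fd 0
            mtu 9000                   # must match the underlying port's MTU

    After editing /etc/network/interfaces, ifreload -a (from ifupdown2) or a reboot applies the change.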
  2. cannot migrate vm with local cd/dvd but i have NO local resources

    indeed, the snapshot was taken with an install ISO attached from local storage ... so we have to take care to detach local CD-ROMs before taking a snapshot ... this is new for me ...
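    A quick sketch of how the ISO can be detached before snapshotting, assuming the VM is 100 and the CD drive sits on ide2 (adjust both to your actual config):

        # detach the ISO but keep an empty CD-ROM drive
        qm set 100 --ide2 none,media=cdrom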
  3. cannot migrate vm with local cd/dvd but i have NO local resources

    yep, a snapshot is in place ... but the snapshots are on vmpool in Ceph ... logs are not available, the pve-manager GUI is blocking this ... regards Gerhard. Btw, how do I edit my signature? I can't find any option in my profile ...
    pveversion -v
    proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
    pve-manager...
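    A hedged way to check which snapshots exist and what a given snapshot recorded (VMID 100 assumed, as in this thread; the snapshot name is a placeholder):

        qm listsnapshot 100
        qm config 100 --snapshot <snapname>   # show the config stored for that snapshot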
  4. cannot migrate vm with local cd/dvd but i have NO local resources

    I have no clue why live migration is NOT possible; the VM has NO local resources. Any help is appreciated.
    qm config 100
    agent: 1
    boot: order=scsi0;net0
    cores: 2
    memory: 16384
    name: AD
    net0: virtio=E2:9D:97:20:F8:8F,bridge=vmbr0,firewall=1
    numa: 0
    onboot: 1
    ostype: win8
    parent: voruseranlegen...
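    For what it's worth, the migration can also be attempted from the CLI, which usually prints a clearer reason for the refusal than the GUI does (target node name is a placeholder):

        qm migrate 100 <targetnode> --online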
  5. Desaster recovery of a ceph storage, urgent help needed

    Sorry for cross-posting, but I got no response to my original posting ... Original post. Any help would be highly appreciated. Gerhard
  6. Crushmap vanished after Networking error

    Hi, I have a worst case: the OSDs in a 3-node cluster (4 NVMes per node) won't start. We had an IP config change in the public network and the MONs died, so we managed to bring the MONs back with new IPs. Corosync on 2 rings is fine and all 3 MONs are up, but the OSDs won't start. How do I get back to the pool, already...
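    A hedged first step for OSDs that won't start after such an outage, assuming LVM-based bluestore OSDs created by ceph-volume (inspect the unit logs before forcing anything):

        journalctl -u ceph-osd@0 -e        # see why a given OSD daemon fails
        ceph-volume lvm activate --all     # re-activate all detected OSDs on this node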
  7. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    this was the match winner :) thanks for your responses!
    iperf -c 10.101.200.131 -P 4 -e
    ------------------------------------------------------------
    Client connecting to 10.101.200.131, TCP port 5001 with pid 3252
    Write buffer size: 128 KByte
    TCP window size: 325 KByte (default)...
  8. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    nope, it is not ... how do I fix this?
    BOOT_IMAGE=/ROOT/pve-1@/boot/vmlinuz-5.4.65-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
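    On a ZFS-root PVE 6.x install booted via systemd-boot, kernel parameters live in /etc/kernel/cmdline rather than in GRUB; a hedged sketch of the usual edit cycle (the appended parameters here are only an example, not what this thread was asking about):

        # append the desired parameters, e.g. for AMD IOMMU passthrough
        echo "$(cat /etc/kernel/cmdline) amd_iommu=on iommu=pt" > /etc/kernel/cmdline
        pve-efiboot-tool refresh    # rewrite the ESP boot entries
        reboot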
  9. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    my fault ... found it ... the MTU was 1512, set it to 9000 ...
    iperf -c 10.101.200.131 -P 4 -e
    ------------------------------------------------------------
    Client connecting to 10.101.200.131, TCP port 5001 with pid 18556
    Write buffer size: 128 KByte
    TCP window size: 325 KByte (default)...
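    For the record, a sketch of setting the MTU both live and persistently (the interface name is taken from the earlier post; verify yours with ip link):

        ip link set dev enp69s0f0 mtu 9000    # takes effect immediately, lost on reboot

        # persistent, in /etc/network/interfaces:
        iface enp69s0f0 inet manual
            mtu 9000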
  10. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    after the firmware update of the Mellanox cards, still not near 100 Gbit/s :(
    iperf -c 10.101.200.131 -P 4 -e
    ------------------------------------------------------------
    Client connecting to 10.101.200.131, TCP port 5001 with pid 5645
    Write buffer size: 128 KByte
    TCP window size: 85.0 KByte...
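    One hedged suggestion at this point (mine, not from the thread): the 85 KByte default TCP window is small for a 100 Gbit/s link, so more parallel streams and a larger window may get closer to line rate:

        iperf -c 10.101.200.131 -P 8 -w 2M -e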
  11. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    NUMA is on, the riser card is properly connected ...
    lscpu
    Architecture:        x86_64
    CPU op-mode(s):      32-bit, 64-bit
    Byte Order:          Little Endian
    Address sizes:       43 bits physical, 48 bits virtual
    CPU(s):              64
    On-line CPU(s) list: 0-63
    Thread(s) per core:  2
    Core(s)...
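    To rule out NUMA locality as the bottleneck, the NIC's NUMA node can be checked and the benchmark pinned to it (interface name assumed from the earlier post; a result of -1 means no NUMA affinity is reported):

        cat /sys/class/net/enp69s0f0/device/numa_node
        numactl --cpunodebind=0 iperf -c 10.101.200.131 -P 4 -e   # pin iperf to the NIC's node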
  12. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    I just found a way to accomplish the firmware update without messing with a driver update that doesn't come from the Proxmox repo! This is much more straightforward :)
    wget -qO - http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox | apt-key add -
    download package from mellanox...
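    As an alternative that avoids vendor repos entirely (my suggestion, not from the thread): the mstflint package from the stock Debian repos can query ConnectX firmware, assuming you know the card's PCI address:

        apt install mstflint
        lspci | grep -i mellanox       # find the PCI address, e.g. 41:00.0
        mstflint -d 41:00.0 query      # show the currently burned firmware version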
  13. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    yep, the signature is another cluster ... this one:
    3 nodes, PVE 6.2-1 ISO install with all patches applied
    Supermicro 2113S-WN24RT
    AMD EPYC 7502P 2.5 GHz 32c/64t
    512 GB mem DDR4-3200 CL22
    2x Samsung PM981 NVMe M.2 (RAID-1) system ZFS
    4x NVMe 3.2 TB Samsung PM1725b 2.5'' U.2
    dual port broadcom...
  14. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Hi, I have nearly the same hardware. Switch: SN2100
    Date and Time:     2020/10/15 16:00:25
    Hostname:          switch-a492f4
    Uptime:            54m 24s
    Software Version:  X86_64 3.9.0300 2020-02-26 19:25:24 x86_64
    Model:             x86onie
    Host ID:           0C42A1A492F4
    System memory...
  15. Ceph blustore over RDMA performance gain

    I removed the snapshot from the VM and made a try with RDMA ... same results ... How do I manage this now? The start commands for VMs are managed by the Proxmox GUI ... I thought defining RDMA for Ceph is a transparent action; how have you managed this within Proxmox? I have no clue, I'm lost in a maze ...
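    For context, enabling the RDMA messenger in Ceph is usually done in ceph.conf rather than per-VM, so it is indeed transparent to Proxmox; a hedged sketch, assuming a ConnectX device named mlx5_0 (verify with ibv_devices):

        # /etc/ceph/ceph.conf, [global] section
        ms_type = async+rdma
        ms_async_rdma_device_name = mlx5_0

    All Ceph daemons need a restart afterwards, and the RDMA messenger was still experimental in the Nautilus era, so test this outside production first.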
