Search results

  1.

    Ceph very low performance 12MB/s

    If you get a third node, that also speeds things up considerably. Also, for small clusters, increasing the PG count helps a lot, on the same principle: more nodes = more disks that can participate in satisfying the request.
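    As a rough illustration of that principle, the common rule of thumb from the Ceph docs (target roughly 100 PGs per OSD, divide by the pool's replica count, round up to the next power of two) can be sketched in shell. The OSD and replica counts below are made-up example values, not numbers from this thread:

```shell
# Rule-of-thumb PG count for a replicated pool:
# (OSDs * 100) / replicas, rounded UP to the next power of two.
osds=9        # e.g. 3 nodes x 3 OSDs -- example value, adjust for your cluster
replicas=3    # default replicated pool size

raw=$(( osds * 100 / replicas ))   # 300 for the numbers above

# round up to the next power of two
pgs=1
while [ "$pgs" -lt "$raw" ]; do
  pgs=$(( pgs * 2 ))
done

echo "suggested pg_num: $pgs"      # prints: suggested pg_num: 512
```

    More OSDs raise the suggested PG count, which spreads each request over more disks.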
  2.

    Is it possible: PVE + shared storage + HA + replication to remote site?

    You should check Ceph's RBD image mirroring, or possibly Gluster as well.
  3.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    Yes, the TP-Link switch was actually set up for that MTU. I do not remember where I got MTU 1592 from; it came up as I was troubleshooting.
  4.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    Still find it weird that the Mellanox NICs can do an MTU of 1592 and up just fine, though. I have run MTU 9000 on all my NICs since I first started with Proxmox, and now with Proxmox 7 the ifupdown2 package gets corrupted on 2 different nodes, and all nodes exhibit MTU issues with the e1000e driver and NIC!
  5.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    And just like that!
    root@pve22:~# ip link set dev eno1 mtu 1472
    root@pve22:~# ip link set dev vmbr0 mtu 1472
    root@pve21:~# ip link set dev eno1 mtu 1472
    root@pve21:~# ip link set dev vmbr0 mtu 1472
    root@pve21:~# scp 192.168.1.22:/etc/ceph/ceph.conf ceph.conf
    ceph.conf 100% 2989 4.0MB/s...
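    Worth noting: `ip link set ... mtu` changes are lost on reboot. With ifupdown2, the MTU can be pinned in /etc/network/interfaces instead. A sketch only, using the interface names and MTU from the post; the bridge address and port settings are assumptions, not taken from the thread:

```
auto eno1
iface eno1 inet manual
    mtu 1472

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.22/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1472
```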
  6.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    Found something interesting in regards to the MTU, thank you spirit!
    root@pve22:~# cat /var/log/syslog | grep PMTUD
    Aug 29 17:09:28 pve22 corosync[2944]: [KNET ] pmtud: PMTUD link change for host: 3 link: 0 from 469 to 1461
    Aug 29 17:16:17 pve22 corosync[2944]: [KNET ] pmtud: PMTUD link...
  7.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    root@pve21:~# ping -Mdo -s 1560 192.168.1.22
    PING 192.168.1.22 (192.168.1.22) 1560(1588) bytes of data.
    ^C
    --- 192.168.1.22 ping statistics ---
    10 packets transmitted, 0 received, 100% packet loss, time 9215ms
    Thank you @spirit - seems this doesn't work on any of my eno1s but works just fine on...
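    The test above can be generalized into a quick probe for the largest payload a path will pass unfragmented. A sketch; the target IP is taken from the thread and the candidate sizes are arbitrary examples:

```shell
# Probe which ICMP payload sizes pass with the DF (don't fragment) bit set.
# Path MTU = largest passing payload + 28 bytes (20 IP header + 8 ICMP header),
# e.g. a 1472-byte payload passing implies a path MTU of at least 1500.
target=192.168.1.22                 # host from the thread; change for your network
for size in 1560 1472 1464 1400; do
  if ping -M do -c 1 -W 1 -s "$size" "$target" >/dev/null 2>&1; then
    echo "payload $size passes -> path MTU >= $(( size + 28 ))"
    break
  fi
  echo "payload $size dropped"
done
```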
  8.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    root@pve22:~# pve-firewall status
    Status: disabled/running
    root@pve22:~# pve-firewall stop
    root@pve22:~# pve-firewall status
    Status: disabled/stopped
  9.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    ip addr | grep mtu
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1592 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    3: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1592 qdisc mq master...
  10.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    Dmesg: https://pastebin.com/5Fee9iWg
    root@pve21:~# lspci -nn | grep 0200
    00:19.0 Ethernet controller [0200]: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) [8086:1502] (rev 05)
    06:00.0 Ethernet controller [0200]: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0...
  11.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    Thinking this might be a driver issue or some other corrupted packages, as I have the same behaviour on all three nodes even though I replaced the switch as well.
  12.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    OK, slowly slicing away at my forehead as I keep banging my head against the wall. I actually managed to get a Ceph quorum by switching from the network card that is for OSDs and MONs to the backend network interface... Now checking firewalls, even though I am able to telnet into both v1 and v2...
  13.

    [SOLVED] Proxmox 6.4 -> 7 upgrade: Broken network and now ceph monitors

    And now I am unable to get the ceph-mons up, possibly related to this network issue after the Proxmox 6.4 -> 7 upgrade, or to the fact that all the LVM IDs seem to have changed. Any help is appreciated, as this upgrade has been a truly horrible experience so far, with no access to my data... The monitor...
  14.

    [SOLVED] Lost both 1gb and 10gb network after 7.0 upgrade

    And now I am unable to get the ceph-mons up, possibly related to this network issue or to the fact that all the LVM IDs seem to have changed. Any help is appreciated, as this upgrade has been a truly horrible experience so far, with no access to my data...
  15.

    [SOLVED] Lost both 1gb and 10gb network after 7.0 upgrade

    Funny that the second node has exactly the same problem! The ifupdown2 package is broken!
  16.

    [SOLVED] Lost both 1gb and 10gb network after 7.0 upgrade

    Yes, resolved by booting into a Debian 11 LiveCD, downloading the ifupdown2 package to the Proxmox boot disk, booting into Proxmox, and overwriting the broken ifupdown2 install. Hints were given by systemctl status networking.
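    That recovery might look roughly like the following. A sketch only, not runnable as-is: the root LV name, mount point, and package filename pattern are assumptions, not details taken from the thread:

```shell
# From the Debian 11 live session: fetch the package and park it on the PVE root disk
mount /dev/pve/root /mnt            # default Proxmox root LV; name may differ
apt-get download ifupdown2          # needs working networking in the live session
cp ifupdown2_*.deb /mnt/root/
umount /mnt

# After rebooting back into Proxmox: reinstall over the broken copy
dpkg -i /root/ifupdown2_*.deb
systemctl restart networking
systemctl status networking         # where the hints showed up in the first place
```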
  17.

    [SOLVED] Lost both 1gb and 10gb network after 7.0 upgrade

    Booted into a Debian 11 LiveCD and my networking works just fine! Most likely this has to do with a broken install of the ifupdown2 package.
  18.

    [SOLVED] Lost both 1gb and 10gb network after 7.0 upgrade

    Since upgrading to 7.0 I lost all connectivity on 2 nodes (not the third, though, even though they were all upgraded at the same time!). Troubleshooting steps:
    * hwaddress was added to the bridges
    * tried auto/hotplug in interfaces as well
    * I CAN bring up the interfaces with ip link set en0 up but...
  19.

    150mb/sec on a NVMe x3 ceph pool

    Hi all, all very valid points, which is why I ran the tests on the PVE boxes themselves through a MikroTik 10gbit switch. The network is not breaking a sweat, and neither is the CPU. I have of course tested the network to be at real 10gbit speed beforehand with iperf and RAM disk transfers. For network...
