Search results

  1.

    [SOLVED] how to troubleshoot dropped packets

    Did not fix the problem.
    ```
    CHART:  net_packets.vmbr0
    ALARM:  inbound packets dropped ratio = 0.17%
            (the ratio of inbound dropped packets vs the total number of received
            packets of the network interface, during the last 10 minutes)
    FAMILY: vmbr0
    ```
    (A quick way to read the raw counters behind this ratio is sketched after these search results.)
  2.

    [SOLVED] how to troubleshoot dropped packets

    Having the same problem - testing the fix - will report back in 48h.
  3.

    Proxmox Remote Vzdump

    Great suggestion, thank you for sharing - just used this method to deal with a couple of servers in an old cluster I wanted to decommission.
  4.

    cephFS not mounting till all nodes are up (7.3.6)

    The reverse of that. I'm asking if having all (5) monitors listed in the mount statement is causing the problem when (1) is missing.
  5.

    cephFS not mounting till all nodes are up (7.3.6)

    Will look later today. Curious if it's the actual mount statement that's the problem (see the test-mount sketch after these search results). For example, once it's mounted, it lists all (5) hosts. Could any single missing host prevent the mount? 198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/ 50026520576...
  6.

    cephFS not mounting till all nodes are up (7.3.6)

    I'll be rebooting the cluster again today (it is a lab after all) but here's the "current" status with all (5) nodes up and everything happy.
    ~# ceph fs status
    cephfs - 7 clients
    ======
    RANK  STATE   MDS  ACTIVITY    DNS    INOS  DIRS  CAPS
     0    active  mx4  Reqs: 0 /s  83.2k  48.0k...
  7.

    qemu-kvm-extras / qemu-system-arm / raspberry pi under ProxMox2.x

    5 years later -- there's a massive list of "Types" that pmx/qemu/kvm can support - are we any closer to arm64 support in the GUI?
  8.

    cephFS not mounting till all nodes are up (7.3.6)

    Each node has both a MON and an MDS - so in the above example we are 4/5 MON and 4/5 MDS (with only 2 MDS needed) ... hence the puzzle ...
  9.

    cephFS not mounting till all nodes are up (7.3.6)

    5-node deployment in the lab, noticed something odd. CephFS fails to mount on any node until *ALL* nodes are up, i.e. with 4 of 5 machines up, cephfs still fails. Given the pool config of cephfs_data and cephfs_metadata (both 3/2 replicated) I don't understand why this would be the case. In theory...
  10.

    New tool: pmmaint

    Nice work - seems like something that would be great to have integrated into the PMX GUI, as an "evacuate node" right-click-menu option. Spacing needs a little help for larger hosts; this is my lab: |Memory (GB) hostname | total free used | CPU pmx1...
  11.

    Multiple cephfs ?!

    .... so I tried it again today, and **magic** -- it created the mount matching the name under /mnt/pve - and mounted it on all clients. Thanks, pmx team - well done.
  12.

    Enable MTU 9000 Jumbo Frames

    Agreed. However, pre-up can be useful if you want to make sure the individual members of the bond are brought up before the bond itself is, for example (see the interfaces sketch after these search results).
  13.

    Proxmox scalability - max clusters in a datacenter

    Honestly, as much as I love/use Proxmox, for the scale you're talking about OpenStack might be a better fit - lots of multi-site tools are available for that environment today... just get your checkbook out...
  14.

    Multiple cephfs ?!

    So this feature appears to be functional (or mostly so) in 7.x - you can create a secondary cephfs, it creates the data/meta pools, finds open MDS servers, and starts... only it's not mounted anywhere? I would have assumed it was created/mounted under /mnt/pve - but no dice. I'm guessing doing... (a manual way to create and mount a second cephfs is sketched after these search results)
  15.

    net.ifnames=1 unsupported in 5.15.83-1-pve?

    The root problem here appears to have been that I *ever* overrode netnames, because ever after it wants to use the existing or database name, as the 99-default.link file indicates: NamePolicy=keep kernel database onboard slot path
  16.

    net.ifnames=1 unsupported in 5.15.83-1-pve?

    Finally in the home stretch: the automatic setting isn't working, but I was able to force it to act the right way by creating link files for each interface and letting systemd handle it (a completed example is sketched after these search results).
    # more /etc/systemd/network/10-enp7s0f0-mb0.link
    [Match]
    OriginalName=*
    Path=pci-0000:07:00.0
    [Link]
    Description=MB.LEFT...
  17.

    net.ifnames=1 unsupported in 5.15.83-1-pve?

    On a good path, getting closer to where I want to be. Problem 1 (not pmx's fault): had a boot drive (zfs mirror) fail - missed a step in the cloning, so while pmx was BOOTING off one drive, it was only writing grub changes to the OTHER drive. This made me chase my tail for hours on why changes...
  18.

    net.ifnames=1 unsupported in 5.15.83-1-pve?

    No love so far.
    [ 13.758120] ixgbe 0000:09:00.1 eth12: renamed from eth2
    [ 13.780906] ixgbe 0000:09:00.0 eth11: renamed from eth0
    [ 13.812475] igb 0000:07:00.1 eth2: renamed from eth3
  19.

    net.ifnames=1 unsupported in 5.15.83-1-pve?

    Found this too - removing and rebooting:
    /lib/systemd/network# more 99-default.link
    # SPDX-License-Identifier: LGPL-2.1-or-later
    #
    # This file is part of systemd.
    #
    # systemd is free software; you can redistribute it and/or modify it
    # under the terms of the GNU Lesser General Public...
  20.

    net.ifnames=1 unsupported in 5.15.83-1-pve?

    Used to hate ifnames, trying to get over it. Migrating 100% functional hosts, which had "net.ifnames=0" in grub and 70-persistent-net-rules set up. Removed both things, updated the initramfs, and they stubbornly refuse to rename to enp* style names after multiple reboots. Set up explicit LINK files... (the overall migration steps are sketched after these search results)
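
For the dropped-packets alarm in result 1, here is a minimal shell sketch for reading the raw kernel counters behind that ratio directly on the node. The interface name vmbr0 is taken from the alarm text; note that these /sys counters are totals since boot, whereas the netdata alarm looks at a 10-minute window.

```
# Per-interface RX/TX statistics, including the "dropped" counters
ip -s link show vmbr0

# Same numbers straight from sysfs, turned into a drop ratio
rx_dropped=$(cat /sys/class/net/vmbr0/statistics/rx_dropped)
rx_packets=$(cat /sys/class/net/vmbr0/statistics/rx_packets)
awk -v d="$rx_dropped" -v p="$rx_packets" \
    'BEGIN { t = d + p; if (t > 0) printf "inbound drop ratio since boot: %.2f%%\n", 100 * d / t; else print "no packets counted yet" }'
```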
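
For the CephFS-not-mounting thread (results 4-6, 8 and 9), a manual test mount may help narrow things down. This sketch assumes the monitor addresses quoted in result 5 and the default admin client on a PVE node; the secretfile path assumes a storage named "cephfs", so adjust to your setup. As far as I know the kernel client only needs to reach one of the listed monitors, so a single down host in the list should not by itself block the mount - comparing a shortened monitor list against the full one while a node is off is a quick way to test that theory.

```
# Test mount with only three of the five monitors listed
mkdir -p /mnt/test-cephfs
mount -t ceph 198.18.53.101,198.18.53.102,198.18.53.103:/ /mnt/test-cephfs \
    -o name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret

# Cluster view while one node is down -- MON quorum and MDS state are what matter
ceph -s
ceph fs status
```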
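
For the jumbo-frames discussion in result 12, this is roughly how I read the pre-up suggestion, written as an /etc/network/interfaces sketch. The interface names, bond mode and address are placeholders rather than values from the posts.

```
auto bond0
iface bond0 inet manual
    bond-slaves enp7s0f0 enp7s0f1
    bond-mode 802.3ad
    bond-miimon 100
    mtu 9000
    # make sure the members are up and already at MTU 9000 before the bond forms
    pre-up ip link set enp7s0f0 up mtu 9000
    pre-up ip link set enp7s0f1 up mtu 9000

auto vmbr1
iface vmbr1 inet static
    address 192.0.2.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    mtu 9000
```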
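
For the multiple-cephfs question in result 14 (resolved in result 11), if the storage layer does not mount a second filesystem for you, the manual equivalent looks roughly like this. The filesystem name backupfs is a placeholder; newer kernels accept fs=<name> as the mount option while older ones want mds_namespace=<name>, and the secretfile path again assumes a matching PVE storage entry.

```
# Create a second CephFS; this also creates its data and metadata pools
ceph fs volume create backupfs

# Mount that specific filesystem by name (monitor list shortened for brevity)
mkdir -p /mnt/pve/backupfs
mount -t ceph 198.18.53.101,198.18.53.102,198.18.53.103:/ /mnt/pve/backupfs \
    -o name=admin,secretfile=/etc/pve/priv/ceph/backupfs.secret,fs=backupfs
```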
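
For the ifnames thread (results 15-20), here is a completed sketch of the .link file truncated in result 16, with the final Name= guessed from the filename; treat the PCI path, description and name as examples rather than a drop-in.

```
# /etc/systemd/network/10-enp7s0f0-mb0.link
[Match]
OriginalName=*
Path=pci-0000:07:00.0

[Link]
Description=MB.LEFT
Name=enp7s0f0
```

`udevadm test-builtin net_setup_link /sys/class/net/<iface>` is a convenient way to check which .link file udev would actually apply to a given interface.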
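
For the migration described in result 20 (from net.ifnames=0 plus 70-persistent-net rules to predictable names), the sequence I would expect to need looks roughly like this; the paths are the stock Debian/PVE ones, and the sed line is blunt, so editing /etc/default/grub by hand is the safer route.

```
# 1. Drop the old naming overrides
sed -i 's/net\.ifnames=0 *//g' /etc/default/grub      # removes it from GRUB_CMDLINE_LINUX*
rm -f /etc/udev/rules.d/70-persistent-net.rules

# 2. Regenerate boot config so the change is picked up early in boot
update-grub                  # or: proxmox-boot-tool refresh, if the node boots via proxmox-boot-tool
update-initramfs -u -k all

# 3. Switch /etc/network/interfaces to the predictable enp* names, then reboot
grep -n 'eth[0-9]' /etc/network/interfaces
```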
