Search results

  1. PVE8: breaks networking during upgrade leaving node unusable

    Node #1 back online - going to work on node #2 next. Steps:
    * attach drive to another Linux machine (or boot off a recovery CD/etc.)
    * mkdir /tmp/1
    * mount /dev/mapper/pve-root /tmp/1
    * cd /tmp/1
    * mount -t proc /proc proc/
    * mount --rbind /sys sys/
    * mount --rbind /dev dev/
    * chroot /tmp/1
    * dpkg...
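    The same recovery flow as a script, for reference - a minimal sketch only: it assumes the root LV really is /dev/mapper/pve-root, and the final step is an assumption, since the post is truncated at "dpkg...".

```bash
#!/bin/bash
# Rescue a PVE node whose upgrade broke networking, from a recovery environment.
# Assumes the Proxmox root filesystem is /dev/mapper/pve-root - adjust if needed.
set -euo pipefail

TARGET=/tmp/1
mkdir -p "$TARGET"
mount /dev/mapper/pve-root "$TARGET"

# Bind the virtual filesystems so tools inside the chroot behave normally.
mount -t proc /proc "$TARGET/proc"
mount --rbind /sys  "$TARGET/sys"
mount --rbind /dev  "$TARGET/dev"

# Assumed continuation of the truncated "dpkg..." step: finish the interrupted upgrade.
chroot "$TARGET" /bin/bash -c 'dpkg --configure -a && apt-get -f install'
```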
  2. PVE8: breaks networking during upgrade leaving node unusable

    > https://pve.proxmox.com/wiki/Upgrade_from_7_to_8 said:
    > recommended to have access over a host independent channel like iKVM/IPMI or physical access.
    > If only SSH is available we recommend testing the upgrade on an identical, but non-production machine first.

    Far too mild. Should say: "If only...
  3. PVE8: breaks networking during upgrade leaving node unusable

    /etc/network/interfaces

    auto lo
    iface lo inet loopback

    auto ens1
    iface ens1 inet manual
        mtu 1500
        ovs_mtu 1500

    auto enp3s0
    iface enp3s0 inet manual
        mtu 1500
        ovs_mtu 1500

    auto bond0
    iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond...
  4. PVE8: breaks networking during upgrade leaving node unusable

    Upgraded my tiny lab cluster today - 3 headless mini-PC nodes, (2) with C2930 and (1) with N3160. Identical drives and RAM; dual NIC, LACP, managed by Open vSwitch, with multiple VLANs (including management) across the bundle. All had green pve7to8 reports. All dumped networking at the same...
  5. [SOLVED] how to troubleshoot dropped packets

    Did not fix the problem.
    ```
    CHART:  net_packets.vmbr0
    ALARM:  inbound packets dropped ratio = 0.17%
            (the ratio of inbound dropped packets vs the total number of received
            packets of the network interface, during the last 10 minutes)
    FAMILY: vmbr0
    ```
  6. [SOLVED] how to troubleshoot dropped packets

    Having same problem - testing fix - will report back in 48h.
  7. Proxmox Remote Vzdump

    Great suggestion, thank you for sharing - just used this method to deal with a couple of servers in an old cluster I wanted to decommission.
  8. cephFS not mounting till all nodes are up (7.3.6)

    The reverse of that. I'm asking if having all (5) monitors listed in the mount statement is causing the problem when (1) is missing.
  9. cephFS not mounting till all nodes are up (7.3.6)

    Will look later today. Curious if it's the actual mount statement that's the problem. For example, once it's mounted, it lists all (5) hosts. Could any single host being missing prevent the mount? 198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/ 50026520576...
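    One way to test that theory - a sketch, assuming a kernel CephFS mount; the mount point, client name and secretfile path below are placeholders, not taken from the thread:

```bash
# Try mounting against only a subset of the monitors listed above; the client
# only needs to reach one mon to learn about the rest of the cluster.
mkdir -p /mnt/cephfs-test
# /etc/ceph/admin.secret is a placeholder file holding just the client key.
mount -t ceph 198.18.53.101,198.18.53.102,198.18.53.103:/ /mnt/cephfs-test \
    -o name=admin,secretfile=/etc/ceph/admin.secret

df -h /mnt/cephfs-test   # confirm it mounts with fewer than all five mons listed
umount /mnt/cephfs-test
```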
  10. cephFS not mounting till all nodes are up (7.3.6)

    I'll be rebooting the cluster again today (it is a lab after all) but here's the "current" status with all (5) nodes up and everything happy.

    ~# ceph fs status
    cephfs - 7 clients
    ======
    RANK  STATE   MDS  ACTIVITY     DNS    INOS  DIRS  CAPS
     0    active  mx4  Reqs: 0 /s   83.2k  48.0k...
  11. qemu-kvm-extras / qemu-system-arm / raspberry pi under ProxMox2.x

    5 years later -- there's a massive list of "Types" that pmx/qemu/kvm can support - are we any closer to arm64 support in the GUI?
  12. cephFS not mounting till all nodes are up (7.3.6)

    Each node has both a MON and an MDS - so in the above example we are 4/5 MON and 4/5 MDS (with only 2 MDS needed) ... hence the puzzle ...
  13. cephFS not mounting till all nodes are up (7.3.6)

    5 node deployment in the lab, noticed something odd. Cephfs fails to mount on any node until *ALL* nodes are up - i.e. with 4 of 5 machines up, cephfs still fails. Given the pool config of cephfs_data and cephfs_metadata (both 3/2 replicated) I don't understand why this would be the case. In theory...
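    A few checks that narrow this kind of thing down - a sketch, using the pool names mentioned above; run them with one node deliberately down:

```bash
# Is the cluster itself still quorate and writable with 4/5 nodes up?
ceph quorum_status --format json-pretty   # 4 of 5 mons should still have quorum
ceph osd pool get cephfs_data size        # expect: size: 3
ceph osd pool get cephfs_data min_size    # expect: min_size: 2
ceph osd pool get cephfs_metadata min_size

# Is an MDS actually active, or are the survivors stuck in standby/replay?
ceph fs status
ceph health detail
```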
  14. New tool: pmmaint

    Nice work - seems like something that would be great to integrate into the PMX GUI - have an "evacuate node" right-click-menu option. Spacing needs a little help for larger hosts, this is my lab:

                 | Memory (GB)        |
        hostname | total  free  used  | CPU
        pmx1...
  15. Multiple cephfs ?!

    .... so I tried it again today, and **magic** -- it created the mount matching the name under /mnt/pve - and mounted it on all clients. Thanks pmx team - well done.
  16. Enable MTU 9000 Jumbo Frames

    Agreed. However, pre-up can be useful if you want to make sure the individual members of the bond are brought up before the bond is, for example.
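    Something along those lines, as a sketch only - a classic ifupdown bond stanza (not a drop-in config; eno1/eno2 and the scratch path are placeholders), using pre-up to raise the members at MTU 9000 before the bond itself comes up:

```bash
# Sketch: write the example stanza to a scratch file for review, then merge it
# into /etc/network/interfaces by hand. eno1/eno2 are placeholder NIC names.
cat <<'EOF' > /tmp/bond0.example
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000
    pre-up ip link set eno1 up mtu 9000
    pre-up ip link set eno2 up mtu 9000
EOF
```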
  17. Proxmox scalability - max clusters in a datacenter

    Honestly, as much as I love/use Proxmox, for the scale you're talking about OpenStack might be a better fit - there are lots of multi-site tools available for that environment today... just get your checkbook out.
  18. Multiple cephfs ?!

    So this feature appears to be functional (or mostly so) in 7.x - you can create a secondary cephfs, it creates the data/meta pools, finds open MDS servers, and starts... only it's not mounted anywhere? I would have assumed it was created/mounted under /mnt/pve - but no dice. I'm guessing doing...
  19. net.ifnames=1 unsupported in 5.15.83-1-pve?

    The root problem here appears to have been that I *ever* overrode netnames, because ever after it wants to use the existing or database name, as the 99-default.link file indicates:
    NamePolicy=keep kernel database onboard slot path
  20. net.ifnames=1 unsupported in 5.15.83-1-pve?

    Finally in the home stretch. The automatic setting isn't working, but I was able to force it to act the right way by creating a .link file for each interface and letting systemd handle it.

    # more /etc/systemd/network/10-enp7s0f0-mb0.link
    [Match]
    OriginalName=*
    Path=pci-0000:07:00.0
    [Link]
    Description=MB.LEFT...
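    A way to sanity-check a .link file like that before rebooting - a sketch; the interface name is taken from the filename above, and the initramfs rebuild is only relevant if your initramfs carries the udev link files (typical on Debian/PVE):

```bash
# Dry-run the link policy against the interface to see which settings would apply.
udevadm test-builtin net_setup_link /sys/class/net/enp7s0f0

# Make sure early boot sees the new .link files too, then reboot to apply.
update-initramfs -u -k all
systemctl reboot
```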