Search results

  1. PVE8: LXC containers on lvm storage fail to start after upgrade with acl=0

    Container fails to start after upgrade, not seeing anything obvious.
    lxc-start 61001 20230704010253.978 DEBUG conf - ../src/lxc/conf.c:run_buffer:311 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook 61001 lxc pre-start produced output: mount: /var/lib/lxc/.pve-staged-mounts/rootfs...
  2. PVE8: breaks networking during upgrade leaving node unusable

    All of my production nodes are running open-vswitch in a similar fashion to my lab nodes. This means I **must** have kvm/console for every node, which is a change to previous upgrade processes.
  3. PVE8: breaks networking during upgrade leaving node unusable

    Node #1 back online - going to work on node #2 next. Steps:
    * attach drive to other linux machine (or boot off recovery CD/etc)
    * mkdir /tmp/1
    * mount /dev/mapper/pve-root /tmp/1
    * cd /tmp/1
    * mount -t proc /proc proc/
    * mount --rbind /sys sys/
    * mount --rbind /dev dev/
    * chroot /tmp/1
    * dpkg...
  4. PVE8: breaks networking during upgrade leaving node unusable

    >https://pve.proxmox.com/wiki/Upgrade_from_7_to_8 said:
    >recommended to have access over a host independent channel like iKVM/IPMI or physical access.
    >If only SSH is available we recommend testing the upgrade on an identical, but non-production machine first.
    Far too mild. Should say: "If only...
  5. PVE8: breaks networking during upgrade leaving node unusable

    /etc/network/interfaces

    auto lo
    iface lo inet loopback

    auto ens1
    iface ens1 inet manual
        mtu 1500
        ovs_mtu 1500

    auto enp3s0
    iface enp3s0 inet manual
        mtu 1500
        ovs_mtu 1500

    auto bond0
    iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond...
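The snippet above cuts off at the OVSBond stanza. For context, a generic open-vswitch LACP bond in /etc/network/interfaces usually continues along these lines - a sketch only, where the bond options and the vmbr0 stanza are assumptions, not the poster's actual config:

```
auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds ens1 enp3s0
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
```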
  6. PVE8: breaks networking during upgrade leaving node unusable

    Upgraded my tiny lab cluster today - 3 headless miniPC nodes, (2) with C2930, and (1) with N3160. Identical drives and RAM, dual NIC, LACP, managed by open-vswitch, with multiple VLANs (including management) across the bundle. All had green pve7to8 reports. All dumped networking at the same...
  7. [SOLVED] how to troubleshoot dropped packets

    Did not fix the problem.
    ```
    CHART:  net_packets.vmbr0
    ALARM:  inbound packets dropped ratio = 0.17% (the ratio of inbound dropped packets vs the total number of received packets of the network interface, during the last 10 minutes)
    FAMILY: vmbr0
    ```
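The ratio netdata is alarming on above is just dropped packets divided by received packets over the window. A minimal sketch of the same arithmetic, using sample counter values (on a live host the two numbers would come from /sys/class/net/vmbr0/statistics/):

```shell
#!/bin/sh
# Inbound drop ratio = rx_dropped / rx_packets, as a percentage.
# Sample values stand in for the live counters under /sys/class/net/<iface>/statistics/.
rx_packets=1200000   # e.g. $(cat /sys/class/net/vmbr0/statistics/rx_packets)
rx_dropped=2040      # e.g. $(cat /sys/class/net/vmbr0/statistics/rx_dropped)
awk -v d="$rx_dropped" -v p="$rx_packets" 'BEGIN { printf "%.2f%%\n", 100 * d / p }'
# prints 0.17%
```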
  8. [SOLVED] how to troubleshoot dropped packets

    Having the same problem - testing a fix - will report back in 48h.
  9. Proxmox Remote Vzdump

    Great suggestion, thank you for sharing - just used this method to deal with a couple of servers in an old cluster I wanted to decommission.
  10. cephFS not mounting till all nodes are up (7.3.6)

    The reverse of that. I'm asking if having all (5) monitors listed in the mount statement is causing the problem when (1) is missing.
  11. cephFS not mounting till all nodes are up (7.3.6)

    Will look later today. Curious, if it's the actual mount statement that's the problem. For example, once it's mounted, it lists all (5) hosts. Could it be any single host missing prevents the mount? 198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/ 50026520576...
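For reference, the mount source can list every monitor, and the kernel client tries the listed addresses in turn, so one unreachable monitor should not by itself block the mount (it may only slow it down). An illustrative fstab-style entry using the addresses from the post, where the mount point and secret-file path are assumptions:

```
# /etc/fstab - illustrative CephFS entry; mountpoint and secretfile are assumed
198.18.53.101,198.18.53.102,198.18.53.103,198.18.53.104,198.18.53.105:/  /mnt/pve/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev  0  0
```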
  12. cephFS not mounting till all nodes are up (7.3.6)

    I'll be rebooting the cluster again today (it is a lab after all) but here's the "current" status with all (5) nodes up and everything happy.
    ~# ceph fs status
    cephfs - 7 clients
    ======
    RANK  STATE   MDS  ACTIVITY    DNS    INOS  DIRS  CAPS
     0    active  mx4  Reqs: 0 /s  83.2k  48.0k...
  13. qemu-kvm-extras / qemu-system-arm / raspberry pi under ProxMox2.x

    5 years later -- there's a massive list of "Types" that pmx/qemu/kvm can support - are we any closer to arm64 support in the GUI?
  14. cephFS not mounting till all nodes are up (7.3.6)

    Each node has both a MON and an MDS - so in the above example we are 4/5 MON and 4/5 MDS (with only 2 MDS needed) ... hence the puzzle ...
  15. cephFS not mounting till all nodes are up (7.3.6)

    5 node deployment in lab, noticed something odd. Cephfs fails to mount on any node until *ALL* nodes are up. IE 4 of 5 machines up, cephfs still fails. Given the pool config of cephfs_data and cephfs_metadata (both 3/2 replicated) I don't understand why this would be the case. In theory...
  16. New tool: pmmaint

    Nice work - seems like something that would be great integrated into the PMX GUI - have an "evacuate node" right-click-menu option. Spacing needs a little help for larger hosts, this is my lab:
             |  Memory (GB)       |
    hostname |  total  free  used |  CPU
    pmx1...
  17. Multiple cephfs ?!

    .... so i tried it again today, and **magic** -- it created the mount matching the name under /mnt/pve - and mounted on all clients. Thanks pmx team - well done.
  18. Enable MTU 9000 Jumbo Frames

    agreed. However, pre-up can be useful if you want to make sure the individual members of the bond are brought up before the bond is, for example.
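As an illustration of that point, a pre-up hook can force each bond member up (here with jumbo MTU) before the bond itself assembles - a sketch only, with assumed interface names:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000
    # bring the members up with MTU 9000 before the bond is raised
    pre-up ip link set eno1 mtu 9000 up
    pre-up ip link set eno2 mtu 9000 up
```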
  19. Proxmox scalability - max clusters in a datacenter

    Honestly, as much as I love/use proxmox, for the scale you're talking about, openstack might be a better fit - lots of multi-site tools available for that env today... just get your checkbook out...