Search results

  1. Sebastian Schubert

    Interface not picking up traffic without promiscuous mode

    Hi folks, I've got a very weird behaviour with a cluster I built recently. The NICs only pick up traffic after setting them manually to promiscuous mode. Anyone ever run into this problem? root@proxmox001:~# lspci -s 82:00.0 -v 82:00.0 Ethernet controller: Mellanox Technologies MT27710 Family...
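
    A minimal sketch of the workaround described in this thread, assuming a placeholder interface name (enp130s0f0); the flag is temporary and lost on reboot:

        # enable promiscuous mode on the NIC by hand
        ip link set dev enp130s0f0 promisc on
        # confirm the PROMISC flag now shows up
        ip link show dev enp130s0f0
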
  2. Sebastian Schubert

    LXC CentOS Guest Interface Configuration Broken for Manual IP Config after Update to Proxmox 7

    Hi there, after updating a host from 6.4 to 7.2 today with about 200 containers, none of the containers brought up the network. The network config is done INSIDE the LXC container and NOT via the Proxmox config (example: net0...
  3. Sebastian Schubert

    Qdevice and two corosync rings

    Hi there, I'm about to add a qdevice on a cluster with 2 corosync rings. Does the qdevice have to be reachable on both rings, or what's the best way to get as much redundancy as possible for the qdevice?
  4. Sebastian Schubert

    Usage of Pools

    Hi there, I was wondering if there are any thoughts about extending the functionality of "pools" (see https://pve.proxmox.com/pve-docs/chapter-pveum.html#pveum_pools) to include features like "maximum CPU/RAM/disk assigned to this pool" when a user tries to add a new VM.
  5. Sebastian Schubert

    LRM Hangs when updating while Migration is running

    Hi there, today I updated some of our clusters and started the "Upgrade" while still evacuating the VMs off the host... the dpkg configure for pve-ha-lrm tries to restart... but gets stuck Apr 26 13:02:37 an1-kvm01-bt01-b pve-ha-lrm[2936243]: Task...
  6. Sebastian Schubert

    Make write_cache for Ceph disks persistent

    Hi, we changed the write_cache setting for disk-based Ceph OSDs by changing /sys/block/sdXX/queue/write_cache manually. Is there a preferred way to set this in Proxmox, or could this even be an enhancement request to set the write cache for Ceph disks from PVE?
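
    One common way to make such a sysfs setting survive reboots, outside of PVE itself, is a udev rule; a hedged sketch where the file name, the device match and the "write through" value are assumptions to adapt to the actual OSD disks:

        # /etc/udev/rules.d/99-osd-write-cache.rules
        ACTION=="add|change", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", KERNEL=="sd*", ATTR{queue/write_cache}="write through"
        # apply without a reboot
        udevadm control --reload-rules && udevadm trigger --subsystem-match=block
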
  7. Sebastian Schubert

    Enterprise Repository slow download speed

    Hi there, trying to update my clusters today (and last week) is a bit frustrating, since every node takes quite a long time to download the packages from the enterprise repository - the Ceph repo from download.proxmox.com is way faster... (every server has 1 Gb/s internet connectivity, we are...
  8. Sebastian Schubert

    [SOLVED] Update from 6 to 7 gone wrong - ceph dependency

    Hi there, I tried to update from version 6 to 7 on a fresh install (direct install of 7 did not work -> https://forum.proxmox.com/threads/installation-aborted.96431/#post-425292 ) The packages, however, are now in a state that prevents me from updating to 7: Fetched 151 MB in 2s (71.0 MB/s) W...
  9. Sebastian Schubert

    [SOLVED] Change/remove hookscript from LXC Container

    Hi, is there another way of removing/changing a container hookscript than using vi on the container conf? I'd like to use pct for that, or the web UI (Proxmox 6.4).
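
    For reference, pct's generic --delete switch appears to cover this; a hedged one-liner with 101 as a placeholder container ID (check man pct for the installed version):

        # drop the hookscript option from the container config without editing the .conf by hand
        pct set 101 --delete hookscript
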
  10. Sebastian Schubert

    Naming of Proxmox 7 Repos

    Hi there, the new Proxmox 7 repos are named like this: ProxmoxVEEnterpriseDebianRepository:stable - in version 6.x the name was ProxmoxVE. This makes it laborious for our monitoring (Prometheus node exporter with the apt update systemd timer with default settings) to distinguish between Debian...
  11. Sebastian Schubert

    Shared CEPH Storage unknown to some Cluster members

    Hi there, I did a fresh install of Proxmox 6.4 and initialized a Ceph storage that is "unknown" to 2 of the 4 nodes in the cluster. Any clue why this is happening?
  12. Sebastian Schubert

    Proxmox does not come up properly with a dying disk

    Hi there, just ran into an issue with a failing device (an SSD decided to die). After rebooting the node, it won't bring up the interfaces, as the "ifupdown2-pre.service" won't succeed... it's basically a "/bin/udevadm settle" that waits until everything is okay. But due to the failing disk device...
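
    A hedged sketch of one possible mitigation (it only bounds the udev wait, it does not address the dying disk): a systemd drop-in that gives the settle call a timeout. The drop-in path and the 30-second value are assumptions:

        # /etc/systemd/system/ifupdown2-pre.service.d/override.conf
        [Service]
        ExecStart=
        ExecStart=/bin/udevadm settle --timeout=30
        # then: systemctl daemon-reload
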
  13. Sebastian Schubert

    [SOLVED] Bringing up vlan aware bridge takes ages

    Hi there, just set up a cluster with a weird behaviour: it takes 5+ minutes to bring up the bridge after clicking "apply configuration". /etc/network/interfaces: auto lo iface lo inet loopback iface enp65s0f0 inet manual iface enp65s0f1 inet manual iface enp65s0f2 inet manual iface enp65s0f3...
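
    A hedged way to narrow down where the time goes when the configuration is applied (standard ifupdown2 tooling, run on the affected node); whether VLAN programming on the vlan-aware bridge is the culprit is an assumption worth checking in the debug output:

        # time a full reload and capture debug output
        time ifreload -a
        ifreload -a -d 2>&1 | tee /tmp/ifreload-debug.log
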
  14. Sebastian Schubert

    lxcfs segfaulting after upgrade to 6.2 (lxc 4.0)

    Hi there, we just upgraded to the 6.2 release with LXC 4.0, and after running about 250 containers on each node we now get the following error (previously: https://forum.proxmox.com/threads/lxcfs-br0ke-cgroup-limit.69015/#post-309442) root@lxc-prox4:~# grep -A 5 -B 5 lxcfs /var/log/messages...
  15. Sebastian Schubert

    [SOLVED] PVESIG - what is it used for?

    Hi there, I was wondering what the PVESIG entries in the iptables rules are for. Is there any sort of "tampering" detection (and mitigation?), or what is it used for?
  16. Sebastian Schubert

    LXCFS br0ke / cgroup limit

    Hi there, we're currently running a four-node cluster with about 250 LXC containers on each node (evenly distributed). Primary storage for almost all containers (except 4) is on the integrated Ceph within Proxmox. Kernel version: Linux 5.3.13-1-pve #1 SMP PVE 5.3.13-1 (Thu, 05 Dec 2019...
  17. Sebastian Schubert

    [SOLVED] Parallel startup for LXC Containers possible?

    Hi there, with >200 LXC containers on a server it takes some time to bring them all up after a node reboot, especially when they are started serially. Is there a way to start containers in parallel to bring them up faster?
  18. Sebastian Schubert

    [SOLVED] Set primary slave for bond

    Hi, is there a best practice for setting a specific interface as primary for a bond? @proxmox: Could you add such a "feature" to the UI?
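
    A hedged /etc/network/interfaces sketch for an active-backup bond with a preferred slave; the interface names are placeholders:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode active-backup
            bond-miimon 100
            # bond-primary marks the slave that should carry traffic whenever it is up
            bond-primary eno1
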
  19. Sebastian Schubert

    [SOLVED] Disk quota exceeded - Failed to create kernel keyring

    Hi, we are running "more and more" LXC containers on our systems, and now, at more than 200, the following error suddenly appeared: lxc-execute: 994: utils.c: lxc_setup_keyring: 1898 Disk quota exceeded - Failed to create kernel keyring The error is easy to fix once you see that...
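
    The message points at the per-UID kernel keyring quota (kernel.keys.maxkeys defaults to 200 for non-root UIDs, which lines up with the container count here); a hedged sketch of raising it, with illustrative values rather than a recommendation:

        # /etc/sysctl.d/99-lxc-keyring.conf
        kernel.keys.maxkeys = 2000
        kernel.keys.maxbytes = 2000000
        # load the new values
        sysctl --system
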
  20. Sebastian Schubert

    [SOLVED] CEPH OSD Nearfull

    Hi, we run a 4-node cluster with Ceph as storage (all PVE managed). This morning one OSD jumped to nearfull, and apparently its pool did too. What I don't quite understand: 67% used of the raw storage, but 85% of the pool? Could that be due to the overhead from "size=3"...
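
    Rough, illustrative arithmetic that may explain the gap (the numbers are made up, not taken from this cluster): with size=3 every object is stored three times, and Ceph derives the pool percentage from the fullest OSD (MAX AVAIL), not from the raw average:

        40 TiB raw, 67% used       -> 26.8 TiB raw used -> ~8.9 TiB of data at size=3
        perfectly even OSDs        -> ~13.3 TiB usable  -> pool would also show ~67%
        fullest OSD caps MAX AVAIL -> ~10.5 TiB usable  -> 8.9 / 10.5 ≈ 85%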