Search results

  1.

    [SOLVED] Blacklist doesn't work

    Ah, so email = recipient and Address = blacklisted sender. So the UI is just not very helpful.
  2.

    [SOLVED] Blacklist doesn't work

    Hi, we added an email address to Administration->User Blacklist, but mail is still delivered: Sep 29 12:52:34 pmg-01 pmg-smtp-filter[1815039]: 401A7633578ECED94F: SA score=1/5 time=5.279 bayes=0.00 autolearn=no autolearn_force=no...
  3.

    connecting Ceph ProxmoxVE Nodes

    Hm, a 100 Gbit/s switch... please show me a switch with almost 1 Tbps ports.
  4.

    Proxmox update from 7.2-4 GRUB update failure

    FYI - this problem is not only on the PVE side; during last night's maintenance it hit (semi-randomly) about 5% of VMs.
  5.

    AMD Opteron 6276 - Slow performance compared to bare metal or ESXi

    The Opteron 6276 is very old and performs poorly. Use better hardware. Problem solved.
  6.

    Storage configuration setup : Nvme+HDD vs. Nvme

    A hypervisor with one disk is nonsense unless you have multiple such hypervisors with network storage. Solution: 2x SSD RAID (PVE + VM/CT) + HDD RAID as data/backup storage only, or 2x SSD RAID (PVE) + 2x SSD RAID (VM/CT), etc.
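
    A minimal sketch of the first layout's HDD pool, assuming ZFS mirrors; the device names (sdc/sdd) and storage ID (hdd-backup) are placeholders:

      # the SSD mirror for PVE + guests is normally created by the installer;
      # add the HDD mirror as a data/backup-only pool:
      zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd
      zfs create tank/backup
      pvesm add dir hdd-backup --path /tank/backup --content backup
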
  7.

    Zabbix template

    "ceph-mgr Zabbix module" via https://docs.ceph.com/en/latest/mgr/zabbix/
  8.

    New all flash Proxmox Ceph Installation

    Drop 10 Gbps RJ45 and use SFP+ only (10/25 Gbps variants, DAC or fibre). Power consumption and latency are better. You can use the extra 2x NVMe disks per node as a separate data pool.
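
    A rough sketch of putting those NVMe OSDs into their own pool via a CRUSH device class; the rule/pool names and PG counts are placeholders:

      # rule that only selects OSDs with device class "nvme"
      ceph osd crush rule create-replicated nvme-only default host nvme
      ceph osd pool create nvme-pool 128 128 replicated nvme-only
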
  9.

    Proxmox and Netdata

    Netdata doesn't care about the host's role. You need to tweak its alarm trigger levels if the defaults don't fit.
  10.

    [SOLVED] WARNING: CPU: 0 PID: ... [openvswitch]

    Same on an HP DL380p G8 with an Intel CPU. Probably solved by the kernel patch mentioned in https://github.com/openshift/okd/issues/1189.
  11.

    Slow restore backup to DRBD9 storage.

    Proxmox doesn't support DRBD. Ask Linbit.
  12.

    Add server as non-voting member of cluster?

    Only disk migration? Use shared/network storage, or something like ZFS replication.
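
    A minimal sketch of ZFS replication between two nodes; the dataset name and target host (node2) are placeholders, and PVE also ships pve-zsync to automate this:

      zfs snapshot rpool/data/vm-100-disk-0@sync1
      zfs send rpool/data/vm-100-disk-0@sync1 | ssh node2 zfs recv -F rpool/data/vm-100-disk-0
      # afterwards, send only the deltas:
      zfs snapshot rpool/data/vm-100-disk-0@sync2
      zfs send -i @sync1 rpool/data/vm-100-disk-0@sync2 | ssh node2 zfs recv rpool/data/vm-100-disk-0
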
  13.

    Proxmox HA cluster with ceph - need help with network topology and storage

    ad 1] a] one corosync link on the Ceph bond + one corosync link in the mesh; b] one corosync link on a 1 Gbps link on a dedicated switch + one corosync link in the mesh; c] one corosync link on 10 Gbps + one corosync link on another 10 Gbps, all without mesh; etc... tl;dr: split the corosync links so they aren't dependent on one logical...
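
    As a sketch, redundant corosync links can be declared when creating or joining the cluster; the addresses and priorities here are placeholders:

      pvecm create mycluster --link0 address=10.0.0.1,priority=20 --link1 address=10.1.0.1,priority=10
      # on a joining node:
      pvecm add 10.0.0.1 --link0 address=10.0.0.2 --link1 address=10.1.0.2
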
  14.

    3 site Ceph cluster

    Check https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network and other network-related things: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
  15.

    Replacing a node in a cluster - LVM to ZFS

    You can use the same name. Search the PVE docs for removing/adding a node. The non-UEFI boot way: https://aaronlauterer.com/blog/2021/move-grub-and-boot-to-other-disk/ (or search Google).
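
    The remove/re-add cycle is roughly this, per the PVE cluster docs; the node name and IP are placeholders:

      # on a remaining cluster node, with the old node powered off:
      pvecm delnode oldnode
      # on the freshly reinstalled (ZFS) node, reusing the same name:
      pvecm add 10.0.0.1
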
  16.

    Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    Do you really understand what you are benchmarking? You can't calculate Ceph IOPS from raw SSD IOPS. Read this: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
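
    To benchmark the cluster itself rather than a single SSD, a rough sketch with Ceph's built-in rados bench; the pool name is a placeholder:

      rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
      rados bench -p testpool 60 rand -t 16
      rados -p testpool cleanup    # remove the benchmark objects
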
  17.

    Cluster problems after removal and re-adding the same node

    Cluster communication is based on SSH, so fix SSH (keys, authorized_keys) on every node (the /root/.ssh and /etc/pve directories).
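
    As a sketch, PVE ships a helper that re-syncs the cluster-wide SSH known-hosts setup and certificates; run it on each node (the peer IP is a placeholder):

      pvecm updatecerts
      ssh 10.0.0.2 true    # should now succeed without a password prompt
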