Search results

  1. [SOLVED] PVE7.3 network doesn't start after boot

    Hi, the server has its ens1f* interfaces down after a PVE reboot. Both interfaces are bonded and used via openvswitch-switch. The first half shows ens1f* down and one VM starting (tap10110). The second half shows both links still down and the VM failing to start, so its tap iface is removed. All bond/OVS...
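
    A sketch of a matching OVS bond stanza for /etc/network/interfaces; only the ens1f0/ens1f1 names come from the post, while the bridge name and bond mode are assumptions:

      # assumed bridge carrying the bond and the VM tap interfaces
      auto vmbr0
      iface vmbr0 inet manual
          ovs_type OVSBridge
          ovs_ports bond0

      # bond over the two ens1f* ports (bond mode is an assumption)
      auto bond0
      iface bond0 inet manual
          ovs_type OVSBond
          ovs_bridge vmbr0
          ovs_bonds ens1f0 ens1f1
          ovs_options bond_mode=active-backup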
  2. Make ceph resilient to multi node failure

    osd_pool_default_min_size = 2, osd_pool_default_size = 3. Calculate: you have 4 nodes with OSDs and, for example, 1 pool. Host-level replication (I think it's the default). You have 3 copies and need 2 copies for the cluster to keep working. What will happen when you shut down the specific nodes holding those 2...
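
    As a worked example under those settings (size=3, min_size=2, host failure domain): each object has replicas on 3 of the 4 hosts, so shutting down 2 hosts that hold an object's replicas leaves only 1 copy, which is below min_size, and the pool blocks I/O until recovery. A minimal ceph.conf sketch with the values from the post:

      [global]
      # 3 replicas per object
      osd_pool_default_size = 3
      # pool stops serving I/O when fewer than 2 replicas are available
      osd_pool_default_min_size = 2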
  3. How to disable TLS 1.0 & TLS 1.1

    Isn't tls1/1.1 disabled anyway? Because PVE7.2 is based on Deb11, and its openssl.cnf has MinProtocol = TLSv1.2 set. Hmm, it looks like postfix doesn't honor the openssl configuration. SOMEHOST:~$ openssl s_client -connect PMG:25 -tls1 CONNECTED(00000003) 140408873498256:error:1408F10B:SSL...
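
    Since postfix takes its TLS protocol range from main.cf rather than from openssl.cnf, a sketch of disabling the legacy protocols in postfix directly (standard postfix parameters; apply with "postfix reload"):

      # /etc/postfix/main.cf - exclude legacy protocols for inbound TLS
      smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
      smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1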
  4. NFS Share make me crazy

    What? Any link? Doesn't pvesm depend on v3?
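
    For reference, an NFS storage can be pinned to a specific protocol version in /etc/pve/storage.cfg; the storage id, server, and export below are made-up placeholders:

      # hypothetical NFS storage entry forcing NFSv4.2
      nfs: nfs-example
          server 192.168.0.10
          export /srv/nfs/pve
          path /mnt/pve/nfs-example
          content images,backup
          options vers=4.2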
  5. [SOLVED] Blacklist doesn't work

    Ah, so email = recipient, Address = blacklisted sender. So the UI is just not very helpful.
  6. [SOLVED] Blacklist doesn't work

    Hi, we added an email to Administration->User Blacklist, but email is still delivered: Sep 29 12:52:34 pmg-01 pmg-smtp-filter[1815039]: 401A7633578ECED94F: SA score=1/5 time=5.279 bayes=0.00 autolearn=no autolearn_force=no...
  7. connecting Ceph ProxmoxVE Nodes

    Hm, a 100Gbit/s switch... please show me a switch with almost 1Tbps ports.
  8. Proxmox update from 7.2-4 GRUB update failure

    FYI - this problem is not only on the PVE side; during last night's maintenance it hit (semi-randomly) about 5% of VMs.
  9. AMD Opteron 6276 - Slow performance compared to bare metal or ESXi

    The Opteron 6276 is very old and slow. Use better HW. Problem solved.
  10. Storage configuration setup : Nvme+HDD vs. Nvme

    A hypervisor with 1 disk is nonsense unless you have multiple such hypervisors with network storage. Solution: 2x SSD RAID (PVE + VM/CT) + HDD RAID as data/backup storage only, or 2x SSD RAID (PVE) + 2x SSD RAID (VM/CT), etc.
  11. Zabbix template

    "ceph-mgr Zabbix module" via https://docs.ceph.com/en/latest/mgr/zabbix/
  12. New all flash Proxmox Ceph Installation

    Drop 10Gbps RJ45 and use SFP+ only (10/25Gbps variants, DAC or fibre). Power consumption and latency are better. You can use the extra 2x NVMe disks per node as a separate data pool.
  13. Proxmox and Netdata

    Netdata doesn't care about the host's role. You need to tweak its trigger levels if the defaults are off the mark.
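
    The trigger levels live in netdata's health.d files; a sketch of adjusting one, assuming a stock install (the file name and thresholds are illustrative):

      # open a stock alarm definition through netdata's edit-config helper
      /etc/netdata/edit-config health.d/ram.conf
      # then raise the warn/crit lines in the relevant alarm body, e.g.:
      #     warn: $this > 90
      #     crit: $this > 98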
  14. [SOLVED] WARNING: CPU: 0 PID: ... [openvswitch]

    Same on an HP DL380p G8 with an Intel CPU. Probably solved by the kernel patch mentioned in https://github.com/openshift/okd/issues/1189.
  15. Slow restore backup to DRBD9 storage.

    Proxmox doesn't support DRBD. Ask Linbit.
  16. Add server as non-voting member of cluster?

    Only disk migration? Use shared/network storage or something like ZFS sync.
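
    A minimal sketch of the ZFS-sync idea with plain zfs send/recv (pool, dataset, and target host are placeholders):

      # snapshot the guest disk dataset
      zfs snapshot rpool/data/vm-100-disk-0@sync1
      # replicate the full snapshot to the other node
      zfs send rpool/data/vm-100-disk-0@sync1 | ssh other-node zfs recv -F rpool/data/vm-100-disk-0
      # later rounds send only the delta since the previous snapshot
      zfs snapshot rpool/data/vm-100-disk-0@sync2
      zfs send -i @sync1 rpool/data/vm-100-disk-0@sync2 | ssh other-node zfs recv -F rpool/data/vm-100-disk-0

    Proxmox also ships pve-zsync, which automates this snapshot-and-send pattern.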
  17. Proxmox HA cluster with ceph - need help with network topology and storage

    ad 1]
    a] 1 corosync link on the ceph bond + 1 corosync link in the mesh
    b] 1 corosync link on a 1Gbps link on a dedicated switch + 1 corosync link in the mesh
    c] 1 corosync link on 10Gbps + 1 corosync link on 10Gbps - all without mesh
    etc...
    tldr: split the corosync links so they aren't dependent on one logical...
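
    A sketch of what two independent corosync links look like in corosync.conf (node name and addresses are placeholders; two knet links per node is the standard mechanism):

      nodelist {
        node {
          # placeholder node name and addresses
          name: pve1
          nodeid: 1
          # link 0, e.g. a dedicated 1Gbps switch
          ring0_addr: 10.0.0.1
          # link 1, e.g. the mesh network
          ring1_addr: 10.0.1.1
        }
        # ...one entry per node...
      }

      totem {
        version: 2
        # knet transport fails over between the configured links
        transport: knet
      }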