Recent content by czechsys

  1.

    Non cluster migration lvm disk slowdown data transfer to zero

    It looks like https://forum.proxmox.com/threads/test-migration-stuck.168776/
  2.

    How to change to 10Gbps NIC Card option for better migration performance

    Really, an HP G6 (super old server) with SSDs? Did you even test disk performance on the host?
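    For a quick host-side baseline, something like this (a minimal sketch; the test file path is a placeholder, and it writes 1 GB of test data):

        # random-write benchmark against host storage
        fio --name=hosttest --filename=/root/fio.test --size=1G \
            --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
            --iodepth=32 --numjobs=1 --runtime=60 --group_reporting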
  3.

    Can I install molly-guard on a pve node?

    Define "accidental reboots". User via SSH? Install it. User via GUI? Don't know.
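    For the SSH case the package is in the Debian repos (assuming a stock PVE/Debian node):

        apt install molly-guard
        # wraps reboot/shutdown/halt and asks you to type the hostname first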
  4.

    VLANs working, but not the way they should?

    It's all about where the VLAN is tagged, and whether it's the native VLAN vs. a VLAN on the port/bridge. Where an ESXi vSwitch required a VLAN ID for every vNIC, Linux standard/vlan-aware/Open vSwitch bridges allow all variants.
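    As an illustration of the vlan-aware variant, a sketch for /etc/network/interfaces (interface names are examples):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094
        # tagging then happens per vNIC, e.g. net0: virtio=...,bridge=vmbr0,tag=20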
  5.

    Help with cluster network optimization

    PVE/SSH listen on all interfaces by default, so it's possible to connect via the mgmt, corosync, storage, etc. IPs. Unless you need very high performance (100 Gbps networks), I would use MTU 9000 only in a limited scope, for example a dedicated vmbr/VLAN for storage.
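    For example, a dedicated storage bridge with jumbo frames (a sketch; the address and NIC name are placeholders, and the switch ports must allow MTU 9000 too):

        auto vmbr1
        iface vmbr1 inet static
            address 10.0.10.11/24
            bridge-ports eno2
            bridge-stp off
            bridge-fd 0
            mtu 9000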
  6.

    3 servers, 3 cables, 1 ceph network?

    Depends on NIC utilization, but we use the switch way for our 3-node cluster: 1x LACP (2 ports) with VLANs for management (= Ceph public), for VMs, for corosync, etc.; 2x LACP (2 ports) with VLANs for corosync and for Ceph storage. Mesh is for a small number of nodes or when the switch is too costly...
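    A sketch of one such LACP bond in /etc/network/interfaces (NIC names and hash policy are examples; the switch side needs a matching LACP port-channel):

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4
            bond-miimon 100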
  7.

    Second vmbr and NIC defeats the first in terms of WebUI, SSH

    I am not using such a config variant, so I can only theorize that it looks OK. But I use VLANs everywhere and never assign an IP to the bridge itself, instead using a subinterface every time. Anyway, PVE can access multiple networks without a fw/router. For NFS access you don't even need a bridge - if the physical card...
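    The "IP on a subinterface, not on the bridge" pattern looks roughly like this (a sketch; VLAN 100 and the address are placeholders):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports bond0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        auto vmbr0.100
        iface vmbr0.100 inet static
            address 192.168.100.11/24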
  8.

    bond health check

    Options: script it, use MLAG switches, multipath, etc. etc.
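    For the scripting option, a minimal sketch polling the kernel's bonding status (bond0 and the mail destination are assumptions):

        #!/bin/sh
        # alert if any slave of bond0 lost link
        if grep -q 'MII Status: down' /proc/net/bonding/bond0; then
            echo "bond0: slave down on $(hostname)" | mail -s "bond alert" admin@example.com
        fi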
  9.

    Outgoing mail from internal servers rejected with “Relay access denied” in PMG

    PMG is mainly for mailserver-to-mailserver communication. If you are trying to send mail from non-mailservers, send those mails to Exchange first. Or see https://www.postfix.org/SMTPD_ACCESS_README.html#relay and test.
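    On plain Postfix that means listing the internal senders in mynetworks (a sketch; the subnet is an example, and on PMG the Postfix config is template-managed, so apply changes through the templates):

        # main.cf
        mynetworks = 127.0.0.0/8 10.0.0.0/24
        smtpd_relay_restrictions = permit_mynetworks, reject_unauth_destination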
  10.

    [SOLVED] Hardening SSH

    PVE requires root via SSH for cluster functions.
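    A common compromise is to allow root keys only from the cluster subnet (a sketch for sshd_config; the subnet is a placeholder):

        PermitRootLogin no
        # cluster traffic still needs root, so re-allow it for the cluster network
        Match Address 10.0.0.0/24
            PermitRootLogin prohibit-password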
  11.

    Hardware advice (or "questionable Proxmox performance on nice box")

    Disabled power-saving states on the Dell? Firmware updated? Are the SSDs enterprise or desktop versions? I have a feeling you have a disk problem, missing virtio drivers in the Windows VMs, etc., but nothing concrete - no VM config/PVE versions were posted. Maybe install netdata and check the problematic time window.
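    A few things worth collecting before guessing further (a sketch; the device and VMID are placeholders):

        smartctl -a /dev/sda | grep -i -e model -e wear   # desktop vs. enterprise SSD, wear level
        qm config 100 | grep -e scsi -e net -e cpu        # does the VM actually use virtio?
        pveversion -v                                     # PVE versions for the report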
  12.

    Test migration stuck

    MTU is 1500 for all servers; the switches have 9216. Remotes are in the same location, on the same switch and VLAN. Because of the internal CA, fingerprints (FP): configured for REMOTE1:8006 from the pveproxy-ssl.pem FP - BUT - it automatically detects the intermediate certificate FP and not the client certificate FP. Not...
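    To check which FP the file actually carries (a sketch; the path is the standard PVE location for the node certificate):

        openssl x509 -in /etc/pve/local/pveproxy-ssl.pem -noout -fingerprint -sha256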
  13.

    Test migration stuck

    Nothing spotted yet, because the remotes are in production and there is nothing non-standard even in netdata (I can miss something, of course). Today: 4x test moving a real VM with 10+200 GB disks; migration fails for the 2nd disk around the 17-19/16-17 GB position. Moving via ssh/dd succeeded on the 1st try. Only...
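    The ssh/dd workaround was along these lines (a sketch; LV paths and the target host are placeholders):

        dd if=/dev/pve/vm-100-disk-1 bs=4M status=progress \
          | ssh root@remote1 'dd of=/dev/pve/vm-100-disk-1 bs=4M'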
  14.

    Ceph Cluster Broken

    Fix your time synchronization as the first step:

        clock skew detected on mon.proxmox03, mon.proxmox01
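    To verify on each node (a sketch; assumes chrony, the PVE default):

        timedatectl status    # NTP service active?
        chronyc sources -v    # which sources, what offset
        chronyc tracking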
  15.

    Test migration stuck

    Tests with a new 10 GB VM (not recreated between rounds of this test, the same VMID):
    1. slowed down
        8598585344 bytes (8.6 GB, 8.0 GiB) copied, 39 s, 220 MB/s
        8803647488 bytes (8.8 GB, 8.2 GiB) copied, 40 s, 220 MB/s
        8878555136 bytes (8.9 GB, 8.3 GiB) copied, 41 s, 214 MB/s
        8944156672 bytes (8.9 GB, 8.3...