Search results

  1. Installation on R510

    Fix /etc/network/interfaces for your needs.
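
    A minimal sketch of that file, assuming a single NIC called eno1 bridged into the usual vmbr0 (interface names and all addresses here are assumptions, adjust to your hardware):

        # /etc/network/interfaces - assumed single-NIC bridge setup
        auto lo
        iface lo inet loopback

        auto eno1
        iface eno1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0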
  2. pveperf fsync performance slower with raid10 than raid1?

    2x 3.84T in zfs r1 = 1TB? what? HD SIZE: 1026.72 GB (raid1-ssd-pool)
    4x 3.84T in zfs r10 = 820GB? wtf? HD SIZE: 820.30 GB (raid10-ssd-pool)
  3. [SOLVED] PVE 7.1.8 - notes formatting

    The Notes tab has broken formatting. I restored a VM from PVE 6.4 to 7.1 with notes like this: In the edit panel those lines appear line by line. root IP vg0 - root 8G, swap 2G v20210914 In the view panel those lines all appear on one line. root IP vg0 - root 8G, swap 2G v20210914 Clearing notes to empty->save->reenter...
  4. 6.4 to 7.0 didn't work

    Uncomment:
    # deb http://ftp.us.debian.org/debian bullseye main contrib
    # deb http://ftp.us.debian.org/debian bullseye-updates main contrib
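
    With the leading '#' removed, those lines should read (assuming they live in /etc/apt/sources.list):

        deb http://ftp.us.debian.org/debian bullseye main contrib
        deb http://ftp.us.debian.org/debian bullseye-updates main contrib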
  5. Monitors won't start after upgrading.

    So you upgraded one node to PVE 7 and upgraded Ceph to Octopus too. There's the problem. Before the PVE team replies, my possible theoretical solutions: 1] downgrade Ceph on the PVE 7 node, or 2] stop VMs, back up VMs, upgrade the rest of the cluster. No warranty from me for any point written above.
  6. Cluster migration NFS

    Easy way: just disable the NFS storage on the old cluster.
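
    A one-liner sketch of that disable step, assuming the storage entry is called nfs-old (the storage ID is an assumption):

        # mark the storage inactive so no node on the old cluster uses it
        pvesm set nfs-old --disable 1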
  7. HA or migration of VMs that are turned off on a node that is shut down or rebooted

    https://pve.proxmox.com/wiki/High_Availability#ha_manager_start_failure_policy -> Shutdown policy
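
    For reference, that policy can also be set in /etc/pve/datacenter.cfg; a sketch, assuming you want running HA guests migrated away on shutdown:

        # /etc/pve/datacenter.cfg
        ha: shutdown_policy=migrate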
  8. Proxmox with Ceph - Disk crashed rate is too high

    P440ar? It's not a real HBA controller; maybe the problem is there...
  9. proxmox management interface matters?

    Create a Datacenter -> Storage entry using that 10G subnet and select the backup content option.
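
    A sketch of what the resulting entry in /etc/pve/storage.cfg might look like, assuming an NFS target reached via its 10G address (storage ID, server address and export path are all assumptions):

        nfs: backup-10g
            server 10.10.10.100
            export /export/backups
            path /mnt/pve/backup-10g
            content backup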
  10. Integrating PMG and Setting up Certificates & DNS Records

    DMARC etc. need to point to the server that is sending the mail, i.e. the mail servers. There is no cost to having PMG in those records too anyway. For certificates, the right way is one that works in the long run.
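
    As an illustration of one of those records, an SPF TXT entry that also authorizes a PMG host could look like this (domain and hostname are assumptions):

        example.com.  IN TXT  "v=spf1 mx a:pmg.example.com -all"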
  11. Extremely SLOW Ceph Storage from over 60% usage ???

    You can't avoid swap being used when swap is present. You can set the swappiness parameter, remove swap, add RAM, or debug the problem further.
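
    A sketch of the swappiness tweak, assuming you want the kernel to prefer reclaiming cache over swapping (the value 10 is an assumption, tune as needed):

        # apply immediately
        sysctl vm.swappiness=10
        # persist across reboots
        echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf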
  12. MTU-size, CEPH and public network

    You can run tests. From my point of view, mixing 1500/9k MTU on the same interface is asking for problems. I tried something like this before Ceph was even in PVE, and it was a mess. Network latency will have a higher performance impact than 9k MTU.
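
    A quick test sketch for the 9k path, assuming Linux ping and a payload of 8972 bytes (9000 minus 28 bytes of IP+ICMP headers; the target address is an assumption):

        # don't-fragment probe that only succeeds end to end at 9000 MTU
        ping -M do -s 8972 10.10.10.2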
  13. [SOLVED] One by one upgrade from v6.4 to v7.

    Yes, it's the other way to upgrade (i.e., more complicated).
  14. Building a separate ceph storage cluster

    For OP: 4. Rook, or any configuration management solution. Anyway, we are evaluating external Ceph storage for our PVE too. PVE staff, are there any requirements for external clusters? For example, version differences etc.? The PVE documentation mainly covers hyperconverged setups/updates.
  15. Do cluster with many VMs

    Read the documentation. Search the forum. Analyze your requirements. Test your setup. Etc., etc. Do your job or pay a skilled engineer.
  16. Is it possible for HA to simply monitor a network link?

    Not from the PVE side. You'd need to implement something like STONITH (locally or remotely), for example shutting down the affected node.
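
    A minimal sketch of such a link watchdog, assuming pinging a gateway is an acceptable health check and powering off is an acceptable fencing action (address, interval and threshold are all assumptions; illustrative only):

        #!/bin/sh
        TARGET=192.168.1.1    # assumed gateway on the monitored link
        FAILS=0
        while true; do
            if ping -c1 -W2 "$TARGET" >/dev/null 2>&1; then
                FAILS=0
            else
                FAILS=$((FAILS+1))
            fi
            # after five consecutive failures, fence this node
            [ "$FAILS" -ge 5 ] && poweroff
            sleep 10
        done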
  17. [SOLVED] Recommendation small Ceph setup

    If you aren't skilled with Ceph, the better way is 1 OSD per SSD.
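
    A sketch of that layout with PVE's tooling, assuming two whole SSDs at /dev/sdb and /dev/sdc (device names are assumptions):

        # one whole SSD per OSD, no splitting
        pveceph osd create /dev/sdb
        pveceph osd create /dev/sdc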