Search results

  1. Is the Ceph replicate or duplicate files on 3 nodes

    You can run a simple PVE/Ceph cluster on 2.5 Gbit networking, I have built a few, but yes, 10G is the minimum if you are working with a >5 node cluster.
  2. Linux Mint 21.3 on Proxmox 8.2.2

    Maybe change the video display on the VM?
  3. Proxmox on aarch64 (arm64)

    That's great. I went to the first reseller, and the machine is missing from their page entirely. So 0 points for Ampere in this case.
  4. Proxmox on aarch64 (arm64)

    Ampere is great, but if you cannot buy those servers in all markets, the story is pretty much useless for us, and for now we cannot buy them in Europe at a meaningful price.
  5. Problems with GPU Passthrough since 8.2

    I also had the same error with 8700GE GPU passthrough and didn't find a solution.
  6. What is the impact of PVE hard disk wearout 100%?

    Kingston or Crucial drives are really meant for gaming and office use, not for enterprise.
  7. What is the impact of PVE hard disk wearout 100%?

    Wearout is whatever the manufacturer sets it to be, so it is usually just a warning that your drive could die any minute, not that it will.
  8. Cluster replication

    There is also an option with Ceph replication, but it is not an easy one.
  9. Ceph Apply/Commit latency too high

    Or, if you have already bought these Samsungs, get smaller enterprise drives and store the DB/WAL on them.
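A minimal sketch of what that reply suggests, using Proxmox's `pveceph` wrapper; `/dev/sdX` and `/dev/sdY` are placeholder device names for this illustration:

```shell
# Placeholder devices: /dev/sdX = large consumer SSD (OSD data),
# /dev/sdY = small enterprise SSD with PLP (RocksDB/WAL).
# --db_dev puts the OSD's RocksDB on the second device; the WAL
# follows the DB there unless --wal_dev is given separately.
pveceph osd create /dev/sdX --db_dev /dev/sdY
```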
  10. Most official/best-practice way to reduce the amount logging?

    There is this topic: https://forum.proxmox.com/threads/slim-down-promxmox-disable-corosync-pve-ha-services.55938/
  11. Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    I've updated more than 20 nodes in the last few days, with no issues except for two ThinkServer SR530 machines and systemd's idiotic interface name change (eno1 to eno1np2???).
  12. Ceph goes read-only when only 1 of 3 nodes goes down

    This is expected with your configuration:
    osd_pool_default_min_size = 2
    osd_pool_default_size = 2
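The reasoning behind that reply can be sketched as a toy model (my own illustration, not Ceph code): with `size = 2` each placement group has two replicas, and with `min_size = 2` a PG only accepts I/O while both replicas are up, so losing one of three nodes drops some PGs below `min_size` and the pool blocks writes:

```python
def pg_active(replicas_up: int, min_size: int) -> bool:
    """A PG serves I/O only while at least min_size replicas are up."""
    return replicas_up >= min_size

# size=2: each PG keeps one replica on each of two of the three nodes.
# If one node dies, every PG that had a replica there drops to 1 copy.
size, min_size = 2, 2
print(pg_active(replicas_up=size, min_size=min_size))      # True: all replicas up
print(pg_active(replicas_up=size - 1, min_size=min_size))  # False: I/O blocked
# With min_size=1 the surviving copy would keep serving I/O (at some risk):
print(pg_active(replicas_up=1, min_size=1))                # True
```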
  13. unable to create vm on ZFS over iSCSI storage

    No, if you create the ZFS pool with the defaults, it is created with /poolname as its mountpoint.
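For context, a ZFS over iSCSI storage definition in `/etc/pve/storage.cfg` looks roughly like this (pool name, portal address, and target are placeholders for this sketch); the `pool` property refers to the ZFS pool or dataset on the target, which by default is mounted at `/poolname`:

```
zfs: tank-iscsi
        pool tank
        portal 192.168.1.100
        target iqn.2003-01.org.example:tank
        iscsiprovider LIO
        content images
        sparse 1
```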
  14. [SOLVED] Issues with Ceph on new installs

    Yeah, the disks are okay then. The only thing I can think of is that there is some problem with the Ceph network and bonding, or something like that.
  15. Why my CEPH is so slow?

    Yes, there are some NAS SSDs with PLP, if I'm not mistaken the old Seagate ones, but in general look for drives with PLP.
  16. Why my CEPH is so slow?

    Enterprise SSDs, so Intel, Micron, Samsung, Kioxia (formerly Toshiba), the brands you know are suitable for enterprise use and, in short, won't wear out fast.