Search results

  1. Proper Linux / Windows client segregation for licensing

    With licensing it's hard. Either have every node fully licensed, or split into smaller clusters, even one-node clusters, set up replication (depending on your RTO/RPO), and work with that.
  2. How to execute custom shell commands on Proxmox host via API?

    I guess this could pose a security issue if allowed?
  3. Ceph does not recover on second node failure after 10 minutes

    Also, is this a problem with too many PGs per OSD?
  4. Ceph does not recover on second node failure after 10 minutes

    [WRN] MDS_INSUFFICIENT_STANDBY: insufficient standby MDS daemons available (have 0; want 1 more). Why is this if the MDS is 1/1?
  5. [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    The beauty of Ceph is that it can run on anything. You need an extra network card (even 2.5G works in a small environment) and one disk per node, and off you go! Yes, the performance isn't the maximum you can get, but it works, and you can test it, failover, HA and everything. I also have one customer with a 3-node 2.5Gb...
  6. How to create Ceph S3 in Proxmox GUI?

    What do your VMs have that needs object storage instead of, let's say, file storage?
  7. Ceph - Which is faster/preferred?

    I have these disks (SSDPE2NV153T8) in one of the clusters I'm maintaining. They are okay. But I would always go with more disks rather than bigger disks.
  8. Low disc performance with CEPH pool storage

    Are you using the host CPU type or x86-64-v3?
  9. Mellanox ConnectX-6 (100GbE) Performance Issue – Only Reaching ~19Gbps Between Nodes

    That's okay for single-threaded iperf3; around 20-25 Gbps is typical.
  10. Low disc performance with CEPH pool storage

    Which disk model? LACP is usually 2x25G; you will hardly fill the full doubled throughput.
  11. RAID5 with LVM

    That is why you have an RTO/RPO: if the boss says you have a few hours, then you are okay with almost anything.
  12. Ceph and failed drives

    Usually you can get away with losing up to a whole machine.
  13. How to obtain old Ubuntu container templates?

    For compatibility reasons it is always best to use VMs.
  14. Fiberchannel Shared Storage Support

    But HW RAID is faster, as always.
  15. 10G write speed in VM | Ceph?

    Yes, if you use NVMe disks, something like 4-6 per node.
  16. Tuxis launches free Proxmox Backup Server BETA service

    Why does the source PBS need to be Internet-accessible in this case?