Search results

  1. Proxmox Backup Server 4.0 BETA released!

    s3-endpoint: Wasabi access-key endpoint proxmox.ca-central-1.wasabisys.com secret-key
  2. Proxmox Backup Server 4.0 BETA released!

    I'm trying to add Wasabi to test it, but for now I get error 400. Maybe you should add a test button to the S3 endpoints?
  3. CEPH: small cluster with multiple OSDs per one NVMe drive

    Depends on the network, in colo once a week :)
  4. [TUTORIAL] Request for community help - S3 on Ceph cluster

    S3 scales really well in Ceph, a lot better than MinIO, but it is really hard to master.
  5. Proxmox Logs to ElasticSearch ... any advice?

    I'm forwarding logs directly to Eventlog instead of Elastic, Filebeat or Logstash. But if you want to enrich the data or split it into some more fields, then I don't have anything. What is the primary goal, what kind of messages do you want/need?
  6. Proxmox Logs to ElasticSearch ... any advice?

    Install Filebeat and read from /var/log, same as on any Debian. But you didn't say what else you have, like Ceph, etc. Maybe you need more collectors inside the .yml file (see the Filebeat sketch after this list).
  7. CEPH: small cluster with multiple OSDs per one NVMe drive

    This is dependent on the NVMe, but usually with NVMes up to 14 TB I didn't see a performance difference.
  8. Hardware question: Hardware Raid for OS (Proxmox ve)?

    I usually use SAS drives for the Proxmox OS, yes, two of them in hardware RAID. Never had problems. With newer, let's say Supermicro, servers I go with a 256 GB NVMe/M.2/whatever and ZFS for the OS.
  9. Feature request : Expiration dates for snapshot

    The snapshots are not just for that; maybe they keep some specific version of software you come back to often. It is a rather specific thing, but nothing prevents you from adding a Bugzilla entry.
  10. Proper Linux / Windows client segregation for licensing

    With licensing it's hard. Either have every node fully licensed, or split into smaller clusters, even one-node clusters, set up replication (depending on RTO/RPO) and work with that.
  11. How to execute custom shell commands on Proxmox host via API?

    I guess this could pose a security issue if allowed?
  12. Ceph does not recover on second node failure after 10 minutes

    Also, is this a problem with too many PGs per OSD? (See the Ceph command sketch after this list.)
  13. Ceph does not recover on second node failure after 10 minutes

    [WRN] MDS_INSUFFICIENT_STANDBY: insufficient standby MDS daemons available: have 0; want 1 more
    Why is this if MDS is 1/1?
  14. [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    The beauty of Ceph is that it can run on anything. You need an extra network card (even 2.5G works in a small environment) and one disk per node, and off you go! Yes, performance isn't the maximum you can get, but it works, and you can test it, failover, HA and everything. I also have one customer with a 3-node 2.5Gb...
  15. How to create Ceph S3 in Proxmox GUI?

    What do your VMs have that needs object storage instead of, let's say, file storage?
  16. Ceph - Which is faster/preferred?

    I have these disks (SSDPE2NV153T8) in one of the clusters I'm maintaining. They are okay. But I would always prefer more disks over bigger disks.
  17. Low disc performance with CEPH pool storage

    Are you using the host CPU type or x86_64_v3? (See the qm sketch after this list.)
  18. Mellanox ConnectX-6 (100GbE) Performance Issue – Only Reaching ~19Gbps Between Nodes

    That's okay for single-threaded iperf3, around 20-25 Gbit/s (see the iperf3 sketch after this list).
  19. Low disc performance with CEPH pool storage

    Disk model? LACP is usually 2x25; you will hardly fill the full double throughput.
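
Regarding result 6: a minimal sketch of shipping host logs from /var/log to Elasticsearch with Filebeat, assuming a recent Filebeat package from Elastic's repository is available; the Elasticsearch host name and the input id are placeholders, not taken from the thread.

```
# Install Filebeat (assumes Elastic's APT repository is already configured)
apt-get install filebeat

# Minimal /etc/filebeat/filebeat.yml: read the host's logs from /var/log
# and ship them to Elasticsearch ("elastic.example.com" is a placeholder).
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: filestream
    id: pve-host-logs
    paths:
      - /var/log/*.log

output.elasticsearch:
  hosts: ["http://elastic.example.com:9200"]
EOF

systemctl enable --now filebeat
```

Extra collectors, for example Ceph logs under /var/log/ceph/, would go in as further entries under filebeat.inputs.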
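Regarding result 12: two stock Ceph commands that show how many PGs each OSD carries and what the autoscaler recommends; this is just one way to check the "too many PGs per OSD" suspicion, not something prescribed in the thread.

```
# Per-OSD utilisation; the PGS column shows how many placement groups each OSD holds
ceph osd df tree

# Per-pool PG counts and the PG autoscaler's recommendations
# (the autoscaler module is enabled by default on recent releases)
ceph osd pool autoscale-status
```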
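Regarding result 17: a sketch of checking and switching a guest's CPU type with qm; VMID 100 is a placeholder, and x86-64-v3 is the generic model name offered by recent Proxmox VE releases.

```
# Show the CPU type currently configured for VM 100 (placeholder VMID)
qm config 100 | grep '^cpu:'

# Pass the host CPU through ...
qm set 100 --cpu host
# ... or use the generic x86-64-v3 model (stop and start the VM to apply)
qm set 100 --cpu x86-64-v3
```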
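Regarding results 18 and 19: a single iperf3 stream often tops out well below the line rate of a 100GbE link, so a quick multi-stream run gives a better picture of the available bandwidth. The address 10.0.0.2 is a placeholder.

```
# On the receiving node
iperf3 -s

# On the sending node: 8 parallel streams for 30 seconds
iperf3 -c 10.0.0.2 -P 8 -t 30
```

Note that with LACP a single TCP flow is hashed onto one member link, so per-flow throughput stays at the speed of one port regardless of how many links are bonded.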