Recent content by Teapot

  1. How to assign an H200 MIG instance to an LXC container?

    Hi, Has anyone successfully assigned an NVIDIA H200 MIG device to an LXC container on Proxmox? If so, how did you do it?
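Not an answer from the thread, but the usual approach is to create the MIG instance on the host and bind-mount the device nodes into the container. A hedged sketch, assuming the NVIDIA driver is installed on the Proxmox host; the GPU index, MIG profile name, container ID (101), and device major numbers are placeholders/examples — check `ls -l /dev/nvidia*` on your host:

```shell
# On the Proxmox host: enable MIG mode and create an instance
# (1g.18gb is an example H200 profile; -C also creates the compute instance).
nvidia-smi -i 0 -mig 1
nvidia-smi mig -i 0 -cgi 1g.18gb -C

# In /etc/pve/lxc/101.conf (101 is a placeholder CTID), allow the
# nvidia character devices and bind-mount them into the container:
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=dir
```

Inside the container you then need a matching user-space driver version for `nvidia-smi` to see the MIG device.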
  2. Prune Jobs Not Working

    I can try something with cron. Is it safe to delete from /mnt/datastore/<DISKNAME>/ns/<NSNAME>/vm/<VMID>?
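For context: deleting those snapshot directories by hand does not reclaim space on its own, because the actual data lives as deduplicated chunks shared under the datastore's .chunks directory. A hedged sketch of doing the same with PBS's own tooling instead (the datastore name and backup group are placeholders):

```shell
# Prune one backup group down to the snapshots you want to keep
# (vm/100 and <DATASTORE> are placeholders).
proxmox-backup-client prune vm/100 \
    --repository root@pam@localhost:<DATASTORE> --keep-daily 15

# Pruning only removes snapshot metadata; the space held by
# now-unreferenced chunks is freed by a garbage-collection run
# (which honors a roughly 24-hour grace period on chunks).
proxmox-backup-manager garbage-collection start <DATASTORE>
```

So if old backups seem to linger after pruning, the usual culprit is that garbage collection has not run (or the grace period has not elapsed yet).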
  3. Prune Jobs Not Working

    Thank you, that's different from what I thought. But how can I clean up the backups of VMs that have been unused for more than 15 days?
  4. Prune Jobs Not Working

    Hello, I created prune jobs and ran them manually, but they are not removing old backups.

    root@pbs:~# proxmox-backup-manager versions
    proxmox-backup-server 2.4.2-2 running version: 2.4.2

    I set the prune to 15 days, but backups older than 15 days remain;
  5. Ceph Slow Performance On All Flash NVME

    This is great! I will try it on my test cluster. Thanks.
  6. Ceph Slow Performance On All Flash NVME

    Solved. All the NVMe disks in the server are the same model, but some of them are slower (it wasn't like that on the first day; they slowed down afterwards). I replaced those disks and the problem is solved. But another problem: when I try to restart an OSD from the Proxmox Ceph GUI, the OSD does not start. What could this be due to?
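Not an answer from the thread, but when the GUI fails to restart an OSD, the underlying systemd unit log usually says why. A sketch, assuming the failing OSD has id 0:

```shell
# Ceph OSDs on Proxmox run as systemd units; try restarting directly:
systemctl restart ceph-osd@0.service

# Then check the unit state and the last log lines for the failure reason:
systemctl status ceph-osd@0.service
journalctl -u ceph-osd@0.service -n 50
```

The journal output typically distinguishes a crashed daemon from a device-level problem, which matters given that the disks here were already degrading.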
  7. Ceph Slow Performance On All Flash NVME

    Yes, I tested with iperf. With multiple threads I can get 100-110 Gbps. When I restart an OSD from the Proxmox Ceph GUI, the OSD won't start again after stopping. I have to completely delete the disk and re-add it.
  8. CEPH Reduced Data Availability

    So in total, can I lose 6 SSD drives, each on a different node, without any data corruption? Is that OK? But I still don't understand why I'm having problems after losing just 2 disks. Thanks.
  9. CEPH Reduced Data Availability

    Hello, We have a 6-node cluster with 6 SSD disks per node. When I deleted one OSD each from node1 and node3, it said "reduced data availability" and write operations stopped. When I deleted the disks and re-added them, the problem was fixed. Is that normal? How many disks can be lost at the same time? Cluster size: 3...
  10. Ceph Slow Performance On All Flash NVME

    Ceph.conf

    [global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 10.254.254.10/24
    debug_asok = 0/0
    debug_auth = 0/0
    debug_bdev = 0/0
    debug_bluefs = 0/0...
  11. Ceph Slow Performance On All Flash NVME

    Hello, We have 6 servers and 36 x 3.84TB NVMe OSDs (all enterprise Gen4 PCIe NVMe). Sometimes when I import a VM disk it imports at 60 MB/s, but sometimes it imports at 1 GB/s. VM speed test: https://prnt.sc/miiSD_s_F7xC I checked the CPU and RAM status on all nodes; they are OK. NVMe...
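One way to check whether individual NVMe drives have degraded (which, later in the thread, turned out to be the cause) is a direct fio benchmark per drive. A hedged sketch; the device path is a placeholder, and note that a write test destroys data, so run writes only against a spare disk:

```shell
# 4k random read, direct I/O, 30s, against one drive (placeholder path).
# Switch --rw to randwrite ONLY on a spare disk -- it destroys OSD data.
fio --name=nvmetest --filename=/dev/nvme0n1 --rw=randread \
    --bs=4k --iodepth=32 --numjobs=4 --direct=1 \
    --runtime=30 --time_based --group_reporting --ioengine=libaio
```

Running the same test on each of the 36 OSD drives and comparing IOPS/latency makes a single slow outlier easy to spot.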
  12. Too many logs of root login -> Is it me?

    Thanks for the reply. Within 30 seconds there are over 20 entries. Here is a screenshot covering only 20-30 seconds of logs: https://prnt.sc/Vozp9Lz3pY11
  13. Too many logs of root login -> Is it me?

    Same problem here. How did you solve it?
  14. 12 Node CEPH: How Many Nodes Can Fail?

    Hello, I have a 12-node Ceph cluster. Every node has 2 x SATA boot disks. For Ceph, every node contains 3 x 3.84TB NVMe (3 OSDs). I'm using 3 replicas and min_size=2. How many nodes can fail? And a second question: when I simulate a node failure (by shutting down a node), in the Proxmox Ceph Max...
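For a replicated pool the question reduces to simple arithmetic. A hedged sketch (the function name is mine, and it assumes the CRUSH failure domain is "host", which is the Proxmox default):

```python
# Replicated-pool failure math, assuming CRUSH failure domain = host.
def tolerated_failures(size: int, min_size: int) -> dict:
    """How many simultaneous host losses a replicated Ceph pool tolerates."""
    return {
        # Hosts you can lose while every PG still has >= min_size replicas,
        # so I/O continues without pausing.
        "without_io_pause": size - min_size,
        # Hosts you can lose before the last replica of some PG may be gone.
        "without_data_loss": size - 1,
    }

# size=3, min_size=2 as in the post:
print(tolerated_failures(3, 2))
# → {'without_io_pause': 1, 'without_data_loss': 2}
```

These counts are for simultaneous failures; once recovery has re-replicated the affected PGs onto the remaining hosts, further failures can again be tolerated.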
  15. Maximum Usable Memory

    Hi,

    free -m
                  total        used        free      shared  buff/cache   available
    Mem:         185348      129559       53395         781        2394       53706
    Swap:             0           0           0

    vmstat
    procs -----------memory---------- ---swap-- -----io---- -system--...