Recent content by adriano_da_silva

  1.

    Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    "Detail: I plan to use 6 physical ports on each switch for a LAG that will carry the data communication between the switches and form the 'vPC' stack." -- By this I mean that I will use 6 ports for stacking (vPC). I don't know exactly how to calculate how many ports would be needed for... (a rough sizing sketch follows after this list)
  2.

    Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    I am setting up a new cluster with Ceph and plan to use two Cisco Nexus 3132q-x switches in a configuration similar to switch stacking, which on the Cisco Nexus is called "vPC". Each switch has 32 physical 40 Gbps QSFP ports that can be configured for breakout, allowing each port to...
  3.

    Ceph: Ports oversubscribed or cut-through?

    I am setting up a new cluster with Ceph and plan to use two Cisco Nexus 3132q-x switches in a configuration similar to switch stacking, which on the Cisco Nexus is called "vPC". Each switch has 32 physical 40 Gbps QSFP ports that can be configured for breakout, allowing each port to...
  4.

    Bcache in NVMe for 4K and fsync. Are IOPS limited?

    I've been running it like this for more than a year and it's been fine so far: safe, and it performs much better than using only spinning disks.
  5.

    Install new Ceph Manager in the Production Cluster

    Guys, I created two new managers using the GUI and it worked. I'll see how it goes when the first node is back up and running. But for now, there have been no problems. Thanks!
  6.

    Install new Ceph Manager in the Production Cluster

    Thank you for your help! My questions now are: A) Can I install a new Manager and make it active, even while the first node is down, without confusing the cluster when it comes back? Is there a mandatory sequence of commands to pay attention to? B) I use 2 pools... (a CLI sketch for question A follows after this list)
  7.

    Install new Ceph Manager in the Production Cluster

    Hello everyone, I've set up a highly available hyper-converged Proxmox 7.4-3 cluster with Ceph Quincy (17.2.5), featuring ten nodes, with the first three as monitors, and only the first node acting as a Ceph Manager. Each node has two OSDs. There are two pools in Ceph, each linked to one OSD on...
  8.

    Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    The Proxmox host with kernel 6.2.9-1 has been up for 13 days without any problems!
  9.

    Btrfs vs ZFS on RAID1 root partition

    For 20-30 euros I would want one too. Can you find one that ships to Brazil? If you could point me to a link... I have paid that price (30 euros) for a consumer SSD here. I can't find any used datacenter SSDs here; when I do find one, it is new and very expensive.
  10.

    Btrfs vs ZFS on RAID1 root partition

    But I don't have the budget for enterprise-class SSDs to use for booting. My RAID controllers also don't have battery backup. So I want to trust that RAID1 with Btrfs or ZFS can give me some protection. Can they? Which would be better? I hear that ZFS is bad in that it consumes a lot of...
  11.

    Btrfs vs ZFS on RAID1 root partition

    Thanks for the comment. Buddy, I haven't had any problems booting from ZFS even when one of the disks is broken. That is, even with the ZFS mirror degraded, I can boot Proxmox; I have tested this a few times. It is true that I did have this problem with Btrfs. Once the RAID1 Btrfs array is...
  12.

    Btrfs vs ZFS on RAID1 root partition

    There are many differing opinions, and there is no perfect file system. But now that Btrfs is also built into the Proxmox installer, has anything improved? For OS-only use, with most of the load on Ceph (VMs and CTs), I want to prioritize performance, but mainly data security, high...
  13.

    ZFS - expanding pool after replacing with larger disks

    Thanks! Can I expand the partition on the fly in Proxmox? Does any tool enable me to do this? (a zpool expansion sketch follows after this list)
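
A rough way to answer the "how many ports for the vPC peer link" question in items 1-3 is simple back-of-the-envelope arithmetic. The sketch below is only an illustration: the ten-node count and the 40 Gbps port speed come from the posts above, but the per-node uplink layout and the cross-switch traffic fraction are assumptions, not measured values.

```python
# Back-of-the-envelope sizing of the inter-switch (vPC peer-link) LAG.
import math

NODES = 10                   # Proxmox/Ceph nodes (from the cluster description)
UPLINKS_PER_NODE = 2         # assumption: one 40 Gbps link to each Nexus switch
PORT_GBPS = 40               # QSFP port speed on the Nexus 3132q-x
CROSS_SWITCH_FRACTION = 0.5  # assumption: worst case, half of all traffic
                             # has to cross the peer link

total_edge_gbps = NODES * UPLINKS_PER_NODE * PORT_GBPS
peer_link_gbps = total_edge_gbps * CROSS_SWITCH_FRACTION
ports_needed = math.ceil(peer_link_gbps / PORT_GBPS)

print(f"Aggregate edge bandwidth: {total_edge_gbps} Gbps")
print(f"Worst-case peer-link load: {peer_link_gbps:.0f} Gbps")
print(f"40 Gbps ports needed for the peer-link LAG: {ports_needed}")
```

The cross-switch fraction is the number worth arguing about: with every node dual-homed to both switches, the traffic that actually has to cross the peer link is usually far below this worst case, which is why a smaller LAG (such as the 6 ports mentioned in item 1) can still be reasonable.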
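
For the manager questions in items 5-7: a standby ceph-mgr can be created on any healthy node, and a standby takes over automatically when the active manager disappears, which is essentially what the poster did through the GUI. The sketch below is a rough CLI equivalent, assuming it is run as root on a surviving Proxmox node; the wrapped commands (`pveceph mgr create`, `ceph mgr stat`) are the stock tools, and the Python wrapper exists only for illustration.

```python
#!/usr/bin/env python3
"""Sketch: create an extra Ceph manager on this node and show which one is active."""
import subprocess

def run(cmd):
    # Print and execute a command, raising if it exits non-zero.
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Create a standby manager on the node this script runs on
# (with no ID given, pveceph uses the local node name).
print(run(["pveceph", "mgr", "create"]))

# Show the currently active manager and the number of standbys.
print(run(["ceph", "mgr", "stat"]))
```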
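
For item 13 (growing a ZFS pool after swapping in larger disks): ZFS can pick up the extra space online, without a reboot, once every disk in the vdev has been replaced. A minimal sketch, assuming whole-disk vdevs and using a placeholder pool name ("tank") and placeholder device names; if the vdev sits on a partition (as on a typical Proxmox root pool), the partition itself has to be grown with a partitioning tool first.

```python
#!/usr/bin/env python3
"""Sketch: expand a ZFS pool after all disks in a vdev were replaced with larger ones."""
import subprocess

POOL = "tank"              # placeholder pool name
DEVICES = ["sda", "sdb"]   # placeholder names of the replaced, now-larger disks

def run(cmd):
    # Print and execute a command, raising if it exits non-zero.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let the pool grow automatically when its vdevs report more space.
run(["zpool", "set", "autoexpand=on", POOL])

# Ask ZFS to pick up the new size of each replaced device while the pool stays online.
for dev in DEVICES:
    run(["zpool", "online", "-e", POOL, dev])

# Verify: SIZE should have grown and EXPANDSZ should be back to "-".
run(["zpool", "list", POOL])
```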