Search results

  1. Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    "Detail: I plan to use 6 physical ports of each switch for a LAG that will provide data communication between the switches to form the 'vPC' stack." -- By this I mean that I will use 6 ports for stacking (vPC). I don't know exactly how to calculate how many ports would be needed for... (see the rough bandwidth sketch after this list)
  2. Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    I am setting up a new cluster with Ceph and plan to use two Cisco Nexus 3132q-x switches in a configuration similar to switch stacking, but in the case of Cisco Nexus, it is called "vPC". Each switch has 32 physical QSFP ports of 40Gbps that can be configured as Breakout, allowing each port to...
  3. Ceph: Ports oversubscribed or cut-through?

    I am setting up a new cluster with Ceph and plan to use two Cisco Nexus 3132q-x switches in a configuration similar to switch stacking, but in the case of Cisco Nexus, it is called "vPC". Each switch has 32 physical QSFP ports of 40Gbps that can be configured as Breakout, allowing each port to...
  4. Bcache in NVMe for 4K and fsync. Are IOPS limited?

    I've been running like this for more than a year. It's okay for now. Safe and performs much better than using only spinning disks.
  5. Install new Ceph Manager in the Production Cluster

    Guys, I created two new managers using the GUI and it worked. I'll see how it goes when the first node is back up and running. But for now, there have been no problems. Thanks!
  6. Install new Ceph Manager in the Production Cluster

    Thank you for your help! My questions now are: A) Can I then install a new Manager and make it active, even if the first node is inactive, without this causing cluster confusion when the first node comes back? Is there any sequence of mandatory commands to pay attention to? B) I use 2 pools...
  7. Install new Ceph Manager in the Production Cluster

    Hello everyone, I've set up a highly available hyper-converged Proxmox 7.4-3 cluster with Ceph Quincy (17.2.5), featuring ten nodes, with the first three as monitors, and only the first node acting as a Ceph Manager. Each node has two OSDs. There are two pools in Ceph, each linked to one OSD on...
  8. Opt-in Linux 6.2 Kernel for Proxmox VE 7.x available

    Proxmox host with kernel 6.2.9-1 has been running (uptime) for 13 days without any problems!
  9. Btrfs vs ZFS on RAID1 root partition

    For 20-30 euros I'd want one too. Can you get it shipped to Brazil? If you could share a link... I have paid that price (30 euros) for consumer drives here. I can't find any used data-center SSDs here; when I do find one, it's new and very expensive.
  10. Btrfs vs ZFS on RAID1 root partition

    But I don't have the budget for enterprise-class SSDs to use for booting, and my RAID controllers don't have battery backup either. So I want to trust that RAID1 with Btrfs or ZFS can give me some protection. Can they? Which would be better? I hear that ZFS is bad in that it consumes a lot of...
  11. Btrfs vs ZFS on RAID1 root partition

    Thanks for the comment. Buddy, I haven't had any problems booting with ZFS even when one of the disks is broken. That is, even though the ZFS mirror is degraded, I can boot Proxmox. I did this test a few times. It is true that with Btrfs I had this problem. Once the RAID1 Btrfs array is...
  12. Btrfs vs ZFS on RAID1 root partition

    There are many differing opinions, and there is no perfect file system. But now that Btrfs is also built into the Proxmox installer, could anything have improved? For OS-only use, with most of the load on Ceph (VMs and CTs), wanting to prioritize performance, but mainly data security, high...
  13. ZFS - expanding pool after replacing with larger disks

    Thanks! Can I expand the partition on-the-fly in Proxmox? Does any tool enable me to do this? (see the zpool sketch after this list)
  14. Proxmox won't boot with degraded btrfs RAID1

    I tried to mount it from the Proxmox installation pen drive, but I couldn't, because it wouldn't mount the degraded Btrfs. In theory, I believe I would have to add the flags that allow mounting this volume to the kernel line of the live Linux disk. Would that work? How would you do that? Very... (see the degraded-mount sketch after this list)
  15. ZFS - expanding pool after replacing with larger disks

    Hello. I have the same question. I installed a standard Proxmox 7.3 installation with ZFS RAID1 (on the boot disks, with the ZFS rpool as the system root) on two 250GB disks, and now I've swapped them both for 512GB disks. These are the system's boot disks. Two of equal size, but larger than the previous ones...
  16. Proxmox won't boot with degraded btrfs RAID1

    Thanks for the answer. I had already found the post at that link, but unfortunately it does not solve my case, because I no longer have the second disk working to boot from and set the flags it suggests. So the suggestion in that post would be to put "rootflags=degraded" on the kernel...
  17. Proxmox won't boot with degraded btrfs RAID1

    In a standard installation of Proxmox version 7.3, the server was installed on a RAID1 Btrfs array of two mirrored disks. That was the boot disk. After running for a few days, there was a problem with the hardware (probably) and the system crashed. Upon restarting, I noticed that one of the disks...
  18. LAN network speeds are fine, but Proxmox and VM's have slow internet speeds.

    Proxmox 7.3. I'm having a very similar (if not the same) problem. On the host, using the physical interface, iperf connected to another external node delivers 1 Gbps (942 Mbps), as expected. In the virtual machine, using the VirtIO interface connected to vmbr0, iperf delivers 400~500 Mbps...
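
One way to reason about result 1's open question (how many ports the inter-switch LAG needs) is to compare the LAG's aggregate capacity with the worst-case traffic that could have to cross between the two switches. The sketch below is a rough back-of-the-envelope calculation, not something from the thread: the 40 Gbps port speed and the 6-port LAG come from the snippet, while the node count, per-node uplink speed, and the assumed share of traffic crossing switches are hypothetical values chosen only to show the arithmetic.

```python
# Back-of-the-envelope sizing for the inter-switch LAG described in result 1.
# Only PORT_SPEED_GBPS and LAG_PORTS come from the snippet; everything else
# is an assumed, illustrative value.

PORT_SPEED_GBPS = 40        # Nexus 3132q-x QSFP port speed (from the snippet)
LAG_PORTS = 6               # ports per switch reserved for the inter-switch LAG
NODES = 10                  # hypothetical number of cluster nodes
NODE_UPLINK_GBPS = 40       # hypothetical uplink speed per node
CROSS_SWITCH_SHARE = 0.5    # assume up to half of node traffic crosses switches

lag_capacity = LAG_PORTS * PORT_SPEED_GBPS
aggregate_node_bw = NODES * NODE_UPLINK_GBPS
cross_switch_demand = aggregate_node_bw * CROSS_SWITCH_SHARE
oversubscription = cross_switch_demand / lag_capacity

print(f"Inter-switch LAG capacity : {lag_capacity} Gbps")
print(f"Aggregate node bandwidth  : {aggregate_node_bw} Gbps")
print(f"Worst-case cross traffic  : {cross_switch_demand:.0f} Gbps")
print(f"Oversubscription ratio    : {oversubscription:.2f}:1")
```

With these assumed numbers, the 6 x 40 Gbps LAG (240 Gbps) covers a worst-case cross-switch demand of 200 Gbps; with more nodes or faster uplinks the same arithmetic would point to more LAG members.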

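Results 13 and 15 both ask how to make the ZFS rpool use the extra space after the boot disks were replaced with larger ones. Assuming the ZFS partition on each new disk has already been grown with a partitioning tool (and the Proxmox boot partitions kept intact, which this sketch does not cover), ZFS itself only needs autoexpand and/or an explicit "zpool online -e". The snippet below is a minimal, untested sketch: the pool name "rpool" comes from result 15, while the partition paths are hypothetical placeholders.

```python
# Minimal sketch (not a tested procedure): after each replacement disk's ZFS
# partition has been grown, ask ZFS to use the new space. The pool name
# "rpool" comes from the snippet; the partition paths below are hypothetical
# placeholders -- adjust them to the actual layout.
import subprocess

POOL = "rpool"
ZFS_PARTITIONS = ["/dev/sda3", "/dev/sdb3"]  # hypothetical example devices

def run(cmd):
    """Echo a command, run it, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let the pool grow automatically when a vdev reports more space.
run(["zpool", "set", "autoexpand=on", POOL])

# Explicitly trigger expansion on each member of the mirror.
for dev in ZFS_PARTITIONS:
    run(["zpool", "online", "-e", POOL, dev])

# Verify the new pool size.
run(["zpool", "list", POOL])
```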
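Results 14, 16, and 17 revolve around reaching a degraded Btrfs RAID1: mounting it from a live environment and booting the installed system with "rootflags=degraded". The sketch below only illustrates the live-environment side, under stated assumptions: the "degraded" mount option is standard Btrfs, while the device path and mountpoint are hypothetical placeholders for the surviving mirror member.

```python
# Minimal sketch: from a live Linux environment, mount the surviving member
# of a Btrfs RAID1 with the "degraded" option so its data can be reached.
# SURVIVING_DEVICE and MOUNTPOINT are hypothetical placeholders.
import subprocess

SURVIVING_DEVICE = "/dev/sda3"   # hypothetical surviving mirror member
MOUNTPOINT = "/mnt/rescue"

subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)
subprocess.run(["mount", "-o", "degraded", SURVIVING_DEVICE, MOUNTPOINT],
               check=True)

# Inspect the filesystem to confirm which device is missing.
subprocess.run(["btrfs", "filesystem", "show", MOUNTPOINT], check=True)
```

For booting the installed system itself, the suggestion quoted in result 16 is to append "rootflags=degraded" to the kernel command line, which passes the same mount option to the root filesystem until the mirror can be repaired.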