"Detail: I plan to use 6 physical ports of each switch for an LAG that will provide data communication between the switches to form the 'vPC' stack." -- At this point, I mean that I will use 6 ports for stacking (vPC). I don't know exactly how to calculate how many ports would be needed for...
I am setting up a new cluster with Ceph and plan to use two Cisco Nexus 3132q-x switches in a configuration similar to switch stacking, which in the case of Cisco Nexus is called "vPC".
Each switch has 32 physical 40Gbps QSFP ports that can be configured in breakout mode, allowing each port to...
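For the peer-link itself, I have something like this in mind on the NX-OS side (just a sketch: the domain ID, port-channel number, member-port range and keepalive addresses are placeholders, and a mirrored configuration would go on the second switch):

feature lacp
feature vpc

vpc domain 10
  ! keepalive between the switches (placeholder addresses, default management VRF)
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! the 6 member ports bundled into the peer-link LAG
interface Ethernet1/1-6
  switchport mode trunk
  channel-group 100 mode active

interface port-channel100
  switchport mode trunk
  vpc peer-link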
Guys, I created two new managers using the GUI and it worked. I'll see how it goes when the first node is back up and running. But for now, there have been no problems.
Thanks!
Thank you for your help! My questions now are:
A) Can I then install a new Manager and make it active, even while the first node is down, without confusing the cluster when the first node comes back? Is there any mandatory sequence of commands I should pay attention to?
B) I use 2 pools...
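A sketch of what that could look like from the CLI, in case it helps (the manager name below is a placeholder):

# on a surviving node, create an additional (standby) manager
pveceph mgr create

# check which manager is active and how many standbys exist
ceph mgr stat
ceph -s

# if needed, force a failover away from a stuck active manager
ceph mgr fail <active-mgr-name>

As far as I understand it, if the first node's manager is already down, a standby simply takes over, and when the node comes back its manager rejoins as a standby, so there is no special command sequence beyond creating the new manager.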
Hello everyone,
I've set up a highly available hyper-converged Proxmox 7.4-3 cluster with Ceph Quincy (17.2.5), featuring ten nodes, with the first three as monitors, and only the first node acting as a Ceph Manager. Each node has two OSDs. There are two pools in Ceph, each linked to one OSD on...
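For context, that layout can be checked from any monitor node with something like this (generic commands, nothing specific to this cluster):

ceph -s                   # overall health, mon/mgr/OSD counts
ceph osd tree             # the ten hosts and their two OSDs each
ceph osd pool ls detail   # the two pools and the CRUSH rules they use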
For 20-30 euros I'd want one too. Do they ship to Brazil? If you can point me to a link... I have paid that price (30 euros) for a consumer-grade drive here.
Here I can't find any used data-center SSDs. When I do find one, it's new and very expensive.
But I don't have the budget for enterprise-class SSDs to use as boot disks. My RAID controllers also don't have battery backup. So I want to trust that RAID1 with Btrfs or ZFS can give me some protection. Can they? Which would be better? I hear the downside of ZFS is that it consumes a lot of...
Thanks for the comment.
Buddy, I haven't had any problems booting with ZFS even when one of the disks is broken. That is, even with the ZFS mirror degraded, I can still boot Proxmox. I ran this test a few times.
It is true that with Btrfs I did have this problem. Once the Btrfs RAID1 array is...
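Roughly, that test can be reproduced like this (a sketch only: the device name is a placeholder, and offlining a mirror member on a machine you care about is at your own risk):

zpool status rpool                     # confirm the mirror is healthy first
zpool offline rpool ata-DISK2-part3    # simulate losing one member (placeholder device)
reboot                                 # Proxmox should still boot from the remaining disk
zpool online rpool ata-DISK2-part3     # bring it back and let it resilver
zpool status rpool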
There are many differing opinions. There is no perfect file system. But now that Btrfs is also built into the Proxmox installer, could anything have improved?
For OS-only use, with most of the load on Ceph (VMs and CTs), and wanting to prioritize performance but above all data security and high...
I tried to mount it from the Proxmox installation USB stick, but I couldn't, because it wouldn't mount the degraded Btrfs volume. In theory, I believe I would have to add the flags that allow mounting this volume to the kernel line of the live Linux disk. Is that it? How would you do that? Very...
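Would it be something like this? From a generic live Linux shell (device name is a placeholder), Btrfs refuses the normal mount when a RAID1 member is missing but accepts the degraded option:

mkdir -p /mnt/broken
mount -o degraded,ro /dev/sda3 /mnt/broken   # read-only first, to copy data off
# or, to boot the installed system itself, append to its kernel line:
#   rootflags=degraded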
Hello.
I have the same question.
I did a standard Proxmox 7.3 installation with ZFS RAID1 (boot disks with the ZFS rpool as the system root) on two 250GB disks, and now I've swapped them both for 512GB disks. These are the system's boot disks: two of equal size, but larger than the previous ones...
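The usual approach, as I understand it, is something like the following, one disk at a time (a sketch with placeholder device names; partition numbers assume a default Proxmox layout, so double-check against your own install before running anything):

# sdA = old disk still in place, sdB = the new disk replacing its partner
sgdisk /dev/sdA -R /dev/sdB            # copy the partition table to the new disk
sgdisk -G /dev/sdB                     # randomize GUIDs on the new disk

# replace the missing mirror member (partition 3 holds rpool on a default install);
# the first argument is the old device name exactly as shown by 'zpool status'
zpool replace -f rpool <old-device-part3> /dev/sdB3

# make the new disk bootable again (partition 2 is the ESP)
proxmox-boot-tool format /dev/sdB2
proxmox-boot-tool init /dev/sdB2

# after repeating the same steps for the second disk, let rpool grow
zpool set autoexpand=on rpool
zpool online -e rpool /dev/sdB3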
Thanks for the answer.
I had already found the post at that link, but unfortunately it doesn't solve my case, because I no longer have the second disk working so that I could boot and set the flags it suggests. So the suggestion in that post would be to put "rootflags=degraded" on the kernel...
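For what it's worth, that edit doesn't need a second working disk; it can be done from the boot menu of the remaining disk (a sketch, assuming the system boots via GRUB):

# at the GRUB menu, press 'e' on the Proxmox entry,
# find the line that starts with 'linux' and append:
#   rootflags=degraded
# then press Ctrl-X (or F10) to boot once with that option

# to make it permanent once booted, add rootflags=degraded to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and refresh:
update-grub        # or proxmox-boot-tool refresh, if the ESPs are managed by it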
In a standard installation of Proxmox 7.3, the server was installed on a Btrfs RAID1 array of two mirrored disks; that was the boot volume. After a few days of operation there was a problem, probably with the hardware, and the system crashed. Upon restarting, I noticed that one of the disks...
Proxmox 7.3
I'm having a very similar (if not the same) problem.
On the host, using the physical interface, iperf connected to another external node delivers 1Gbps (942Mbps), as expected.
In the virtual machine, using a VirtIO interface connected to vmbr0, iperf delivers only 400-500Mbps...
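For comparison, this is roughly the test, plus one setting that is often suggested for VirtIO throughput (VM ID, MAC address and queue count are placeholders, and I'm not claiming it is the fix):

# on the external node
iperf3 -s

# inside the VM (several parallel streams, to rule out a single-stream limit)
iperf3 -c <external-node> -t 30 -P 4

# on the Proxmox host: enable multiqueue on the VirtIO NIC,
# keeping the VM's existing MAC address (placeholders below)
qm set <vmid> --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4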