Hello everyone,
I've set up a highly available hyper-converged Proxmox 7.4-3 cluster with Ceph Quincy (17.2.5), featuring ten nodes, with the first three as monitors, and only the first node acting as a Ceph Manager. Each node has two OSDs. There are two pools in Ceph, each linked to one OSD on...
There are many differing opinions, and there is no perfect file system. But now that Btrfs is also built into the Proxmox installer, has anything improved?
For OS-only use, with most of the load on Ceph (VMs and CTs), I want to prioritize performance, but mainly data security and high...
In a standard installation of Proxmox version 7.3, the server was installed on a RAID1 Btrfs array of two mirrored disks, which was the boot disk. After a few days of operation there was a (probably hardware-related) problem and the system crashed. Upon restarting, I noticed that one of the disks...
Hi,
Is there a bug in Proxmox that prevents it from correctly seeing bcache devices as regular storage devices? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve.
bcache is a Linux kernel feature that lets you use a small, fast disk (flash, SSD, NVMe, Optane, etc.) as a "cache" for a...
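For context, a minimal bcache setup usually looks like the sketch below. The device paths are hypothetical placeholders, the commands destroy data on the named devices, and they need the bcache-tools package plus real hardware, so treat this as a sketch rather than a recipe:

```
# Format the slow backing device and the fast caching device
make-bcache -B /dev/sdb        # backing: spinning disk (placeholder path)
make-bcache -C /dev/nvme0n1    # cache: fast NVMe (placeholder path)
# Attach the cache set to the backing device
# (<cset-uuid> comes from `bcache-super-show /dev/nvme0n1`)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# The combined device then appears as /dev/bcache0
```

Whether Proxmox then lists /dev/bcache0 like any other block device is exactly the question at issue here.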
Hi,
I'm facing a network or firewall issue with my cluster that I don't even know where to start solving.
I have a Windows Server 2008 R2 VM that has Bitdefender antivirus and the Google Chrome browser.
Users access this server and make use of remote desktop (terminal service) on it.
So...
Hello guys.
I'm trying to set up a very fast enterprise NVMe (a 960 GB datacenter device with tantalum capacitors) as a cache for two or three separate spinning disks (I will use 2 TB disks, but in these tests I used a 1 TB one) that I have on a Proxmox node.
The goal, depending on the results I...
Proxmox proposes a hyperconverged server that, together with Ceph, makes it possible to run virtualized storage and compute on the same hardware.
But to get good performance results from Ceph storage, you must increase bandwidth and reduce disk and network...
What is the best way to run network latency tests for use with Ceph on Proxmox?
The objective would be to use such tests to determine the best models of network cards, cables, transceivers and switches for the Ceph cluster networks where the nodes containing the OSDs are located...
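A simple starting point is plain `ping` between the Ceph nodes and comparing the avg/mdev columns across NICs, cables and switches. The block below parses a sample summary line (hypothetical values) so the extraction itself can be checked anywhere; on a real cluster you would pipe actual `ping` output instead:

```shell
# On a real cluster you would measure between Ceph nodes, e.g.:
#   ping -c 100 -i 0.2 <peer-ip>
# Here we parse a sample summary line with hypothetical values:
summary="rtt min/avg/max/mdev = 0.101/0.123/0.190/0.021 ms"
# Split on '/': field 5 is the average round-trip time in ms
avg=$(echo "$summary" | awk -F'/' '{print $5}')
echo "avg latency: $avg ms"
```

For bandwidth-style comparisons, `iperf3` between node pairs is the usual complement to latency numbers.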
I am very grateful for all the tips I can get here. I have one more specific question.
With few hardware resources available, I set up a small cluster of seven nodes with only one HDD (spinning disk) on each node, used with Ceph as an OSD. On each node I use...
I have read on Ceph's official website that their proposal is a distributed storage system built on commodity hardware. It goes further, recommending SSDs and networks of at least 10 Gbps, but saying it can work normally with HDDs and Gigabit networks when the load is small.
Well...
Hey guys.
I bought an adapter card to plug my NVMe drive into a PCIe slot. This card has a supercapacitor capable of keeping the drive powered for a few seconds in case of power failure, enough time for it to write out any pending data.
It works like a UPS plugged directly into the NVMe drive...
Let's see what an interesting product...
I use Ceph (BlueStore) with several nodes, but each of them has only one HDD. I want to add more hard drives to each node, but I see that the performance is not satisfactory for virtual machine disks on HDD OSDs. So I'm looking to add NVMes for WALs...
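For reference, BlueStore can place the WAL/DB on a faster device at OSD creation time. A hedged sketch with hypothetical device paths (flag names as I recall them from the pveceph CLI; check `pveceph osd create --help` on your version before running anything):

```
# Create an OSD on the spinning disk, with its DB and WAL on the NVMe
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --wal_dev /dev/nvme0n1
```

With several HDD OSDs per node sharing one NVMe, the NVMe typically gets partitioned so each OSD has its own DB/WAL slice.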
I am creating a cluster with some nodes. Initially, the nodes were installed separately on their own 172.16.0.0/16 test network. There was no cluster at that point; the nodes worked standalone.
Then I changed the configuration of the nodes to three definitive networks, 172.17.0.0/16...
Would anyone know how much bandwidth Corosync uses?
I have some old 3COM 10/100 Mb/s switches with 24 ports each. They are old but in great condition, and I am thinking of using them in a small cluster of about 17 servers.
The idea would be to place two switches just to connect...
I'm creating a bash script to join nodes to a cluster automatically.
When I try to join a node to the cluster with the pvecm command, it asks whether I want to accept the fingerprint and also asks for the root password of the master node. In theory I should be able to pass the...
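One common way to script interactive prompts like these is the `expect` tool. A rough sketch under assumptions: `<master-ip>` and `<root-password>` are placeholders, the exact prompt wording may differ between pvecm versions, and keeping a root password in a script is a real security trade-off worth weighing first:

```
#!/bin/sh
# Sketch only: replace the placeholders before use.
expect <<'EOF'
spawn pvecm add <master-ip>
# Accept the fingerprint prompt, then answer the password prompt
# (matching "assword" covers both "Password" and "password").
expect "fingerprint" { send "yes\r" }
expect "assword"     { send "<root-password>\r" }
expect eof
EOF
```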
Hello.
I modified the IP of a node before joining it to the cluster, and everything was working normally.
I modified it only in the web administration screen and in the '/etc/hosts' file.
It turns out that after creating the cluster, I realized that I wanted to modify the IP of the...
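For what it's worth, a node's IP usually lives in more places than the web UI touches. A hedged checklist of the usual locations on a standard PVE install (verify against your version's documentation):

```
/etc/hosts               # hostname-to-IP mapping
/etc/network/interfaces  # the interface address itself
/etc/pve/corosync.conf   # the node's ring0_addr once a cluster exists;
                         # edit a copy and bump config_version before replacing
```

Changing the address only in the GUI and /etc/hosts after clustering leaves corosync pointing at the old IP.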
Please,
Can I get the storage status (enabled/disabled) via pvesh?
I can change this setting with:
pvesh put /storage/{storage}/disable <boolean>
And I can read some information with:
pvesh get /storage/{storage}
But can I find out whether the storage is disabled or not?
Would it be...
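One approach worth trying is asking pvesh for JSON and reading the flag from that. The block below parses a hypothetical JSON string shaped like what `pvesh get /storage/local --output-format json` might return on a real node (field names assumed from the storage API); note the "disable" key may be absent entirely when the storage is enabled, so default to 0:

```shell
# Hypothetical sample; on a real node you would use:
#   pvesh get /storage/<storage> --output-format json
json='{"storage":"local","type":"dir","disable":1,"content":"iso,vztmpl"}'
# Extract the "disable" flag with python3, defaulting to 0 when absent
disabled=$(echo "$json" | python3 -c 'import sys,json; print(json.load(sys.stdin).get("disable",0))')
echo "$disabled"
```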
Hello!
I'm new to Proxmox and I'm testing environment settings for possible adoption of the tool in production. I have some servers with four 1 Gbps network cards and tried to configure all of them in an LACP bond with layer 2+3 hashing. The objective is fault tolerance and better performance in all...