Yes, I currently have one OSD per host, in this case one HDD per host. There are 7 hosts in total. I plan to increase this in the future, with at least three disks per host. I am also thinking of adding one flash drive per host, just for the DB and WAL of the OSDs.
Thanks!
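For reference, once the flash drives are in place, creating an OSD with its DB/WAL on the separate device should look roughly like this on PVE 6 (a sketch only: /dev/sdb and /dev/nvme0n1 are placeholder device names, and the option names are worth confirming with "pveceph help osd create" on your version):

root@pve-11:~# pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --wal_dev /dev/nvme0n1

If only --db_dev is given, the WAL normally ends up on the same device as the DB, so one flash partition per OSD can cover both.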
In my configuration I have 512 PGs.
Let's see what is returned when I type the suggested command:
root@pve-11:~# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.90959 1.00000 931 GiB 64 GiB 63 GiB 148 KiB 1024...
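As a rough cross-check, the commonly cited rule of thumb from the Ceph documentation is: total PGs ≈ (number of OSDs × 100) / replica size, rounded to a power of two. With 7 OSDs and the default size of 3 that gives 7 × 100 / 3 ≈ 233, so 256; with three disks per host (21 OSDs) it becomes ≈ 700, so 512 or 1024. In other words, 512 PGs is on the high side today but fits the planned expansion (assuming replicated pools with size 3).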
I have read on Ceph's official website that their proposal is a distributed storage system on commodity hardware. It goes further, saying that SSDs and networks of 10 Gbps and up are recommended, but that it can work normally with HDDs and Gigabit networks when the load is small.
Well...
Hey guys.
I bought an adapter card to plug my NVMe drive into a PCIe slot. This board has a supercapacitor, capable of keeping the drive powered for a few seconds in case of power failure. Enough time for it to write any data that is still pending.
It works like a UPS plugged directly into the NVMe drive...
To be clearer, the adapter I mentioned above is quite cheap and allows a consumer NVMe drive to be used in a PCIe x4 slot, in addition to containing a supercapacitor which, according to the tests in the link above, works perfectly as a UPS sitting right in front of the NVMe, keeping it running for several seconds...
Have a look, what an interesting product...
I use Ceph (Bluestore) with several nodes, but each of them has only 1 HDD. I want to add more hard drives to each node, but I see that the performance is not satisfactory for virtual machine disks on HDD OSDs. So I'm looking to add NVMe drives for the WAL...
I am creating a cluster with some nodes. Initially, the nodes were installed separately on their own 172.16.0.0/16 test installation network. There was no cluster until then; the nodes worked on their own.
So, I changed the configuration of the nodes to three definitive networks, 172.17.0.0/16...
Would anyone know how much bandwidth Corosync uses?
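For what it's worth, corosync traffic in a small cluster is usually only a few dozen to a few hundred kbit/s; what it really needs is low and stable latency rather than bandwidth. If you want to measure it on your own nodes, one way (assuming the default corosync UDP ports 5404/5405 and a placeholder interface name eno1) is:

root@pve-11:~# tcpdump -i eno1 -nn udp portrange 5404-5405

or watching the rate live with: iftop -i eno1 -f "udp portrange 5404-5405"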
I have some old 10/100 Mbps 3COM switches with 24 ports each. They are old, but they are in great condition. I am thinking of using them in a small cluster of about 17 servers.
The idea would be to place two switches just to connect...
I am also trying to install Proxmox 6.2-4 and I am having the same problem. I have already updated the BIOS of my HP Z400 to the latest available version 3.61 Rev.A, but the problem remains.
The GRUB screen appears and after selecting the installation option, it starts loading drivers, waits...
Great!
And now, how can I pass the superuser password as a parameter to the "pvecm add" command? I would like to have as little user intervention as possible.
Thanks
I'm creating a bash script to try to join nodes in a cluster automatically.
When I try to join the node to the cluster with the pvecm command, it asks if I want to accept the fingerprint and also asks for the superuser password of the master node. Theoretically I would be able to pass the...
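As far as I know pvecm add has no password option, so a script has to work around the prompts. One approach is to drive them with expect; the sketch below is only an illustration (the prompt patterns are guesses and need to be matched against what pvecm actually prints on 6.2):

#!/usr/bin/expect -f
# usage: ./join.exp <master-ip> <root-password>   (hypothetical helper script)
set master_ip [lindex $argv 0]
set root_pw   [lindex $argv 1]
set timeout 120
spawn pvecm add $master_ip
expect {
    -re {\(yes/no\)} { send "yes\r";      exp_continue }
    -re {assword}    { send "$root_pw\r"; exp_continue }
    eof
}

Another route worth checking is exchanging SSH keys between the nodes first and then calling pvecm add with the --use_ssh switch, which should avoid the API password prompt entirely.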
Hello.
I had modified the IP of a node before it joined the cluster, and everything was working normally.
I modified it only in the web administration screen and in the '/etc/hosts' file.
It turns out that after creating the cluster, I realized that I wanted to modify the IP of the...
Dear all,
I need to modify the management IP of my Cluster nodes. I want to put them on another subnet.
The Cluster is already formed and Ceph is working on another subnet.
There is a physical network interface connected to the vmbr0 bridge in a subnet (192.168.0.0/24), where the virtual machines...
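In case it helps to picture it, the management address normally lives on vmbr0 in /etc/network/interfaces, with a matching entry in /etc/hosts. A sketch with placeholder values (192.168.10.0/24 as the new subnet, eno1 as the bridged NIC):

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# /etc/hosts
192.168.10.11 pve-11.localdomain pve-11

Keep in mind that if corosync also runs over this address, /etc/pve/corosync.conf (the ring0_addr entries and config_version) has to be updated as well, which is the delicate part.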
Dear Moayad,
Thank you for your reply.
What version of Ceph and Proxmox did you test this command for?
Because here I'm using Ceph Nautilus and Proxmox 6.2-4, and it didn't work. See the output of the suggested command below:
root@pve-11:~# pvesh get storage --output-format yaml
---
-
content...
Please,
Can I get the storage status (enabled/disabled) via pvesh?
I can set this information with:
pvesh put /storage/{storage}/disable <boolean>
I can get some information with:
pvesh get /storage/{storage}
But can I get the information about whether the storage is disabled or not?
Would it be...
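One way that should work, assuming the disable key behaves like other optional config options (i.e. it may simply be absent while the storage is enabled), is to read the storage config as JSON and default the missing key, for example with jq. "local" below is just a placeholder storage ID:

root@pve-11:~# pvesh get /storage/local --output-format json
root@pve-11:~# pvesh get /storage/local --output-format json | jq '.disable // 0'

The second command prints 1 when the storage is disabled and 0 otherwise (jq has to be installed separately, e.g. apt install jq).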
I managed to solve the problem. I was using passive LACP on the switch ports that communicate with the servers. However, I read in the switch manual that I would need to use a static LACP group to be able to carry several VLANs. I did the setup and it solved the problem.
Thank you!
Hello!
I'm new to Proxmox and I'm testing environment settings for possible adoption of the tool in production. I have some servers with 4 network cards (1 Gbps) and I tried to configure all of them in an LACP bond with layer 2+3 hashing. The objective is to have fault tolerance and better performance in all...
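For comparison, the bond I would expect in /etc/network/interfaces for 802.3ad with layer2+3 hashing looks roughly like the sketch below (NIC names and addresses are placeholders; as noted above, the switch side must be configured as an LACP/LAG group for this to come up):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Note that layer2+3 hashing only balances per flow, so a single stream still tops out at 1 Gbps; the gain is in aggregate throughput and fault tolerance.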