Important topic; I hadn't read about any issues with Intel X520 devices yet. I always read rave reviews about them (82599 controllers). Does the problem also occur with load-balancing bonds (LACP, round-robin, etc.)?
I've read that old HP devices had a lot of overheating and other problems. I don't know...
Hi all,
Regarding 10Gbps cards, are the Intel X520 cards still the only ones recommended for Ceph or for VMs? (10G only)
Are the HP NC522SFP and NC523SFP still as bad as ever, with no fix in sight?
Are there other, cheaper brands that might also work well?
How has the experience been in recent...
Maybe. But thinking about it, it looks like a BUG to me, because Proxmox should recognize this device normally, since bcache is part of the mainline Linux kernel itself.
Hello.
Thank you for your help. Your tip was spot on!
root@pve-20:~# cat /var/lib/ceph/osd/ceph-0/fsid
2f6b54af-aec8-414e-a231-3cce47249463
root@pve-20:~# ceph-volume lvm activate --bluestore 0 2f6b54af-aec8-414e-a231-3cce47249463
Running command: /usr/bin/chown -R ceph:ceph...
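In case it helps someone else who lands here, two related commands I found useful while checking this (I'm still learning ceph-volume, so treat this as a sketch):

# List every OSD that ceph-volume knows about, with its fsid and underlying devices
ceph-volume lvm list

# Or re-activate everything ceph-volume can find in one go
ceph-volume lvm activate --all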
Hi,
Is there a bug in Proxmox that prevents it from correctly seeing bcache devices as regular storage devices? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve.
bcache is a Linux kernel feature that allows you to use a small fast disk (flash, SSD, NVMe, Optane, etc.) as a "cache" for a...
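For context, the basic setup I'm talking about looks roughly like this (device names /dev/nvme0n1 and /dev/sdb are just placeholders for my hardware; a sketch, not a recipe):

apt install bcache-tools

# Create the cache device (NVMe) and the backing device (spinning disk) in one step;
# this also attaches them and exposes the combined device as /dev/bcache0
make-bcache -C /dev/nvme0n1 -B /dev/sdb

# Switch to writeback so writes are acknowledged by the NVMe, not the HDD
echo writeback > /sys/block/bcache0/bcache/cache_mode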
Hello,
I know this thread is old, but I need to solve a similar problem. I'm trying to create an OSD on a bcache device. If it works, I intend to use bcache on all OSDs here. But when I try to create it, the bcache devices are not available in the GUI, and from the CLI I get the following error...
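To be clear about what I'm running from the CLI, the commands are along these lines (/dev/bcache0 is my bcache device; the pveceph form is just the Proxmox wrapper around ceph-volume):

# Proxmox wrapper
pveceph osd create /dev/bcache0

# Plain ceph-volume, bluestore
ceph-volume lvm create --bluestore --data /dev/bcache0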
Hi,
I'm facing a network or firewall issue in my cluster, and I don't even know where to start troubleshooting it.
I have a Windows Server 2008 R2 VM with Bitdefender antivirus and the Google Chrome browser.
Users access this server over Remote Desktop (Terminal Services).
So...
I think it must be a matter of fine-tuning.
One curious thing I noticed is that writes always land on the flash, never on the spinning disk. That is expected, and it should give the same fast response as the flash device itself. However, that is not what happens when going through bcache.
But when...
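These are the bcache knobs I've been looking at for that fine-tuning, in case anyone wants to compare notes (the cset UUID below is a placeholder; the real one is under /sys/fs/bcache/):

# Current cache mode (writethrough / writeback / writearound / none)
cat /sys/block/bcache0/bcache/cache_mode

# Stop large sequential writes from bypassing the cache
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

# Keep bcache from falling back to the spinning disk when it thinks the cache is congested
echo 0 > /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
echo 0 > /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us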
Hello guys.
I'm trying to set up a very fast enterprise NVMe (a 960GB datacenter device with tantalum capacitors) as a cache for two or three separate spinning disks on a Proxmox node (I will use 2TB disks, but in these tests I used a 1TB one).
The goal, depending on the results I...
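A sketch of the kind of fio comparison I mean: single-threaded 4k random writes with direct I/O, first against the raw NVMe and then against the bcache device in front of the HDD (careful, writing to raw devices is destructive; these are empty test disks):

# Baseline: raw NVMe latency
fio --name=nvme-raw --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --direct=1 --runtime=30 --time_based

# Same workload through bcache (NVMe cache in front of the spinning disk)
fio --name=bcache --filename=/dev/bcache0 --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --direct=1 --runtime=30 --time_based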
Proxmox proposes a hyperconverged server which, together with Ceph, makes it possible to run virtualized storage and compute on the same hardware.
But to get good performance results from Ceph storage, you must increase bandwidth and reduce disk and network...
I also see other approaches, such as using SSDs to host the OSDs' database.
Certainly, 0.500-0.750 per I/O is a great result. In fact, I don't think I need to lower the final latency of my access to Ceph that much. Of course, the lower, the better. But as the budget here is low, I have to work with old...
How do I go about performing better network latency tests for use with Ceph on Proxmox?
The objective would be to use such tests to determine the best network cards, cables, transceivers and switches for the Ceph cluster networks where the nodes containing the OSDs are located...
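For reference, the simplest tests I know of are along these lines (the address is a placeholder for another Ceph node on the cluster network):

# Round-trip latency with a payload close to a jumbo frame;
# -M do forbids fragmentation, so this also verifies that MTU 9000 works end to end
ping -c 1000 -i 0.01 -s 8972 -M do 10.10.10.22

# Raw throughput on the same path
iperf3 -c 10.10.10.22 -t 30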
I am very grateful for all the tips I can get here. I have one more specific question.
With few hardware resources available, I set up a small cluster of seven nodes, each with a single HDD (spinning disk) used as a Ceph OSD. On each node I use...
Guys, thanks so much for all the tips.
I made a change to my system yesterday and it looks like I've gotten acceptable results so far!
Let's see:
In the VM configuration, in the section where the SCSI controller is selected, I was using VirtIO SCSI; I had already installed the drivers in...
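As an illustration of the kind of setting I mean, the SCSI controller and the per-disk options can also be changed from the CLI like this (VM id 100, the storage name and the exact options are only examples, not a recommendation; whether they help depends on the workload):

# Select the VirtIO SCSI single controller for the VM
qm set 100 --scsihw virtio-scsi-single

# Re-attach the disk with an iothread and writeback cache
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,cache=writeback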
Thank you for your reply!
I find it hard to believe that Ceph has control over this. I believe the decision of "where" to send or receive data lies with the operating system, outside Ceph's control. I'm not sure, but that's what I think. I did tests with simultaneous iperf from one node to several other...
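Roughly how I run that kind of simultaneous test (node addresses are placeholders; I use iperf3 here, but plain iperf works the same way):

# On each of the other nodes
iperf3 -s -D

# On the sending node: clients to several nodes in parallel
for ip in 10.10.10.22 10.10.10.23 10.10.10.24; do
    iperf3 -c "$ip" -t 30 -P 4 &
done
wait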
Guys, thanks so much for all the tips. This debate is very valuable.
I'm testing the performance of two consumer NVMes (two very cheap Chinese brands) with "fio". One is a Netac, the other an Xray. They are 256GB; the goal would be to buy seven of them, one per node, to use as DB/WAL disks. The...
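The test that matters most for DB/WAL use, as far as I understand, is synchronous 4k writes at queue depth 1, since that is where consumer NVMes without power-loss protection usually collapse. Roughly (destructive on the target device, so only on the empty test NVMe):

fio --name=wal-test --filename=/dev/nvme0n1 --rw=write --bs=4k \
    --iodepth=1 --numjobs=1 --direct=1 --sync=1 \
    --runtime=60 --time_based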
Thank you very much for your reply.
The attention we are giving to this subject has been very important to me.
But I believe that if I really need a 10Gbps network on the backend, I'll need to go to another storage solution. And I will explain why.
A while ago I used two computers mirroring...
I do LACP with four gigabit ports and jumbo frames, dedicated to Ceph. In theory, that is four 1-gigabit paths. With seven OSD nodes in an identical setup sharing the load, I can get something close to 3900 Mbit/s (full duplex). I did the test with iperf and got roughly that in simultaneous traffic across...
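A sketch of the bond definition I'm using in /etc/network/interfaces (interface names and the address are placeholders; the layer3+4 hash policy is what lets the different Ceph connections spread across the four links):

auto bond0
iface bond0 inet static
    address 10.10.10.21/24
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    mtu 9000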
Thanks for the reply!
The only thing the Ceph website officially makes clear is that with large loads you should not use 1Gbps networking. And by "large" loads I mean many VMs running in a cluster, many clients accessing it, or large volumes of data (many terabytes), which is definitely not my case...