Search results

  1. 10 Gb card : Broadcom 57810 vs Intel X520

    About the Mellanox MCX342 or MCX312, does anyone have experience with them for virtual machines in Proxmox?
  2. 10 Gb card : Broadcom 57810 vs Intel X520

    Important topic; I hadn't read about any issues with Intel X520 devices yet. I always read raves about them (82599 controllers). Does the problem also occur with load-balancing bonds (LACP, RR, etc.)? I've been reading that old HP devices had a lot of heating and other problems. I don't know...
  3. 10 Gb card : Broadcom 57810 vs Intel X520

    Hi all. Regarding 10Gbps cards, are the Intel X520 cards still the only ones recommended for Ceph or for VMs (10G only)? Are the HP NC522SFP and NC523SFP still as bad and unresolved as before? Are there other, cheaper brands that might also serve well? How has the experience been in recent...
  4. PVE GUI doesn't recognize kernel bcache device?

    Maybe. But, thinking about it, it seems like a BUG to me, because Proxmox should recognize this device normally, since bcache is in the mainline Linux kernel itself.
  5. PVE GUI doesn't recognize kernel bcache device?

    Hello. Thank you for your help. Your tip was spot on!

        root@pve-20:~# cat /var/lib/ceph/osd/ceph-0/fsid
        2f6b54af-aec8-414e-a231-3cce47249463
        root@pve-20:~# ceph-volume lvm activate --bluestore 0 2f6b54af-aec8-414e-a231-3cce47249463
        Running command: /usr/bin/chown -R ceph:ceph...
  6. PVE GUI doesn't recognize kernel bcache device?

    Hi, Is there a bug in Proxmox that prevents it from correctly seeing bcache devices as regular storage devices? I'm using Proxmox PVE 6.4-14, Linux 5.4.174-2-pve. bcache is a Linux kernel feature that allows you to use a small, fast disk (flash, SSD, NVMe, Optane, etc.) as a "cache" for a...
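
    For context, a minimal sketch of how such a bcache device is usually assembled (the device names /dev/sdb and /dev/nvme0n1 and the cset UUID are placeholders, not taken from the thread):

        make-bcache -B /dev/sdb            # format the slow backing disk
        make-bcache -C /dev/nvme0n1        # format the fast caching disk
        bcache-super-show /dev/nvme0n1     # note the cset.uuid of the cache set
        # attach the cache set to the backing device (udev normally creates /dev/bcache0)
        echo <cset-uuid> > /sys/block/bcache0/bcache/attach
        # optional: switch from the default writethrough to writeback caching
        echo writeback > /sys/block/bcache0/bcache/cache_mode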
  7. PVE Ceph- should I use bcache?

    Hello, I know this message is old, but please, I need to solve a similar problem. I'm trying to create an OSD on a bcache drive. If it works, I intend to use bcache on all OSDs here. But when I try to build it, the bcache drives are not available in the GUI. And from the CLI, the following error...
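
    For reference, the CLI routes usually tried for this (a sketch; /dev/bcache0 is an assumed device name, and the thread does not confirm that either command succeeds on bcache):

        # Proxmox wrapper; may reject devices it does not recognize as disks
        pveceph osd create /dev/bcache0
        # falling back to ceph-volume directly
        ceph-volume lvm create --bluestore --data /dev/bcache0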
  8. VM does not access websites depending on the node it is running

    Hi, I'm facing a network or firewall issue with my cluster, and I don't even know where to start solving it. I have a Windows Server 2008 R2 VM running Bitdefender antivirus and the Google Chrome browser. Users access this server and use remote desktop (terminal services) on it. So...
  9. Bcache in NVMe for 4K and fsync. Are IOPS limited?

    I think it must be a matter of fine-tuning. One curious thing I noticed is that writes always land on the flash, never on the spinning disk. This is expected, and it should give the same fast response as the flash device alone. However, that is not what happens when going through bcache. But when...
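
    A few bcache sysfs knobs that commonly influence whether small writes hit the cache or the backing disk (a sketch; bcache0 and the cset UUID are placeholders):

        # don't bypass the cache for writes detected as sequential
        echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
        # disable congestion thresholds that can divert I/O to the backing disk
        echo 0 > /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
        echo 0 > /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us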
  10. Bcache in NVMe for 4K and fsync. Are IOPS limited?

    Hello guys. I'm trying to set up a very fast enterprise NVMe (a 960GB datacenter device with tantalum capacitors) as a cache for two or three isolated spinning disks on a Proxmox node (I will use 2TB disks, but in these tests I used a 1TB one). The goal, depending on the results I...
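
    A sketch of the kind of 4K fsync test the thread title refers to (the device path is a placeholder, and a raw-device write test like this destroys existing data):

        fio --name=4k-fsync --filename=/dev/bcache0 --rw=randwrite --bs=4k \
            --iodepth=1 --numjobs=1 --fsync=1 --direct=1 --runtime=60 --time_based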
  11. How to maintain high processing clock frequency for Ceph?

    Proxmox proposes a hyperconverged server which, together with Ceph, offers the possibility of running virtualized storage and compute on the same hardware. But to get good performance out of Ceph storage, you must increase bandwidth and reduce disk and network...
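
    One common way to keep clock frequencies high is pinning the CPU governor to performance; a sketch, assuming the Debian linux-cpupower package is available:

        apt install linux-cpupower
        cpupower frequency-set -g performance
        # verify the active governor on every core
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor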
  12. Ultra-low network latency test for Ceph on Proxmox

    I also look at other metrics, like the SSDs hosting the OSDs' database. Certainly 0.500-0.750 is a great per-IOPS result. In fact, I don't think I need to lower the final latency of my access to Ceph that much. Of course, the lower, the better. But as the budget here is low, I have to work with old...
  13. Ultra-low network latency test for Ceph on Proxmox

    How do I go about performing better network latency tests for use with Ceph on Proxmox? The objective would be to use tests to determine the best models of network cards, cables, transceivers, and switches for the Ceph cluster networks where the nodes containing the OSDs are located...
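
    Two simple latency probes often used for this kind of comparison (the address 10.10.10.2 is a placeholder):

        # round-trip latency under load; flood ping requires root
        ping -f -c 100000 -q 10.10.10.2
        # TCP/UDP latency with qperf: run `qperf` with no arguments on the server node, then
        qperf 10.10.10.2 tcp_lat udp_lat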
  14. Ceph with writeback cache is secure?

    I am very grateful for all the tips I can get here. I have one more specific question. I set up a system with few available hardware resources: a small cluster of seven nodes with only one HDD (spinning disk) on each node for use with Ceph as an OSD. Use on each node...
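
    For context, writeback caching in Proxmox is typically enabled per VM disk; a sketch, where the VM ID, storage, and volume names are placeholders:

        # enable the writeback cache mode on an existing Ceph-backed disk
        qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback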
  15. Ceph performance with simple hardware. Slow writing.

    Guys, thanks so much for all the tips. I made a change to my system yesterday, and it looks like I've gotten acceptable results so far! Let's see: in the VM configuration, where the SCSI controller is selected, I was using VirtIO SCSI; I had already installed the drivers in...
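
    Switching the SCSI controller of a Proxmox VM is usually done like this (a sketch; the VM ID, storage name, and the virtio-scsi-single/iothread combination are illustrative, not necessarily the poster's exact change):

        # one dedicated I/O thread per disk requires the single-controller variant
        qm set 100 --scsihw virtio-scsi-single
        qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1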
  16. Ceph performance with simple hardware. Slow writing.

    Thank you for your reply! I find it hard to believe that Ceph has control over this. I believe that the decision of "where" to send or receive data lies with the operating system, outside Ceph's control. I'm not sure, but I believe so. I did tests with simultaneous iperf from one node to several other...
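
    A sketch of that kind of simultaneous iperf test (the node address is a placeholder):

        iperf -s                          # on each receiving node
        iperf -c 10.10.10.2 -P 4 -t 30    # from the sender, four parallel streams for 30s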
  17. Ceph performance with simple hardware. Slow writing.

    Guys, thanks so much for all the tips. This debate is very valuable. I'm testing two consumer NVMes (two very cheap Chinese brands) for performance with "fio". One is Netac, the other Xray. At 256GB each, the goal would be to buy seven, one per node, to use as DB/WAL disks. The...
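
    The usual fio check for DB/WAL suitability measures synchronous, single-threaded write performance; a sketch, with a placeholder device path (the test overwrites data on it):

        fio --name=dbwal-test --filename=/dev/nvme0n1 --direct=1 --sync=1 \
            --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based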
  18. Ceph performance with simple hardware. Slow writing.

    Thank you very much for your reply. The attention being given to this subject has been very important to me. But I believe that if I really need a 10Gbps network on the backend, I'll have to move to another storage solution, and I will explain why. A while ago I used two computers mirroring...
  19. Ceph performance with simple hardware. Slow writing.

    I do LACP with four gigabit ports and jumbo frames, exclusive to Ceph. In theory, four 1-gigabit paths. Having seven OSD nodes with identical setups sharing the load, I can get something close to 3,900 Mbps (full duplex). I did the test with iperf and got roughly that in simultaneous traffic across...
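
    A sketch of that kind of bond in Proxmox's /etc/network/interfaces (interface names and the address are placeholders; each slave also needs its own manual stanza):

        # /etc/network/interfaces (excerpt)
        auto bond0
        iface bond0 inet static
            address 10.10.10.1/24
            bond-slaves eno1 eno2 eno3 eno4
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4
            mtu 9000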
  20. Ceph performance with simple hardware. Slow writing.

    Thanks for the reply! The only thing that is officially stated on the Ceph website is that with large loads you should not use 1Gbps networking. And by "large" loads I mean many VMs running in a cluster, many clients accessing it, or large volumes of data (many terabytes), which is definitely not my case...
