Search results

  1. Pick up only 3 copies from Remote to Local

    Thanks for the answer. Can you give an example of what the filter should look like if, on the main PBS, I only need to pull the last 3 copies of the VMs with IDs 6000-6007? (See the sync-job sketch after this list.)
  2. Pick up only 3 copies from Remote to Local

    Hi all. I have the following question. There are two PBS instances: one with a large amount of storage, where backups from different Proxmox VE nodes are made, and a second with a small amount of storage. Is it possible to pull only 3 backups of certain virtual machines from the PBS with the large...
  3. RBD persistent cache support

    For example? How can I use a standard NVMe or SSD for the persistent cache?
  4. RBD persistent cache support

    Answering my own question: the NVMe device I wanted to use for the cache doesn't support DAX.
  5. RBD persistent cache support

    I would like to try using an SSD as the device for the Ceph RBD persistent cache, but it didn't work for me because I can't specify the DAX mount option. Is this functionality missing in PVE, or am I doing something wrong? (See the cache configuration sketch after this list.)
  6. Complete disabling of SDN in Proxmox VE 8

    I understand, thanks for the tip. I just don't understand the usage scenario itself yet. I watched the video and read the help, but the essence is still not entirely clear to me.
  7. Complete disabling of SDN in Proxmox VE 8

    Greetings. Can I completely disable SDN in a Proxmox VE 8 cluster? I don't plan to use it. If there is no way to remove it, how can I hide the SDN settings in the interface?
  8. When adding a new osd to ceph, the osd_mclock_max_capacity_iops_[hdd, ssd] values do not appear in the configuration database

    ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable). I tested cluster performance with the WAL/DB hosted on NVMe and wiped and re-added the OSDs several times. The last time I added them, the values did not appear. I also rolled back to configuration version "0" with the ceph config...
  9. When adding a new osd to ceph, the osd_mclock_max_capacity_iops_[hdd, ssd] values do not appear in the configuration database

    I have been configuring and testing Ceph. When OSDs were first added, the performance values of the added OSDs were automatically written to the configuration database. However, at some point this stopped working. Should it still work? Maybe some update arrived that turned off this functionality? Running... (See the ceph config sketch after this list.)
  10. Network optimization for ceph.

    Can you clarify how the process of writing to and reading from an OSD works? From the official documentation it looks like exactly one OSD is used, not a group of OSD disks. That is, it actually writes to a specific OSD, which then replicates to an OSD on node 2 and also replicates to an OSD on node 3... (See the ceph osd map example after this list.)
  11. Network optimization for ceph.

    No. I forced the nvme device class.
  12. Incomprehensible situation with MTU OVS

    The command ip link set bond0 mtu 9000 works around the problem by raising the MTU, but as far as I know this is not a proper solution. :eek:
  13. First go to https://drive.google.com/drive/folders/1DA0I-X3qsn_qZbNJRoJ991B5QsNHFUoz and...

    First go to https://drive.google.com/drive/folders/1DA0I-X3qsn_qZbNJRoJ991B5QsNHFUoz and download tgt-1.0.80 and install it on OviOS. Then, in https://drive.google.com/drive/folders/1Ov2rjgIFMWH3hR5FchwiouOsOQYXdHMC
  14. Network optimization for ceph.

    The scenario is quite simple: running our internal services with fault tolerance. Still, I don't understand one thing. When I used NVMe to back a data pool, the network transfer rate still did not exceed 1.5 Gbit/s, just as with conventional HDDs. Why?
  15. Incomprehensible situation with MTU OVS

    A situation has arisen: I set MTU 9000 via the web interface, using OVS. After restarting the server, the MTU on some interfaces (bond and ovs-system) drops back to 1500. How can I fix this? proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve) pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)... (See the /etc/network/interfaces sketch after this list.)
  16. Network optimization for ceph.

    My question, however, was whether the results I had provided were adequate. Below are the full test results from inside the virtual machine with various combinations: Test results of Proxmox VM configurations on CEPH HDD
  17. Network optimization for ceph.

    I changed the virtual machine settings. Note also that in the first and second tests the io-thread option is enabled, which is offered by default:

    agent: 1
    boot: order=sata0;virtio0
    cores: 4
    cpu: host
    machine: pc-q35-7.2
    memory: 8192
    meta: creation-qemu=7.2.0,ctime=1685091547
    name: WINSRV2022
    net0...
  18. Network optimization for ceph.

    Yes, my drives are HDDs. Below is information about one of them. I understand that performance on HDDs will be an order of magnitude lower than on NVMe or SSD, but this is the equipment I have for now. I want to understand what the best results are that I can realistically get on this type of...
  19. Network optimization for ceph.

    Yes, the disks really are HDDs, but the WAL (log) and the OSD database are moved to NVMe.
  20. Network optimization for ceph.

    I ran some tests and can't figure out whether the results are normal or not. Can anyone explain? During the test I used --io-size 4096; according to the documentation, data is transferred between nodes at this size. Below is a link to a Google spreadsheet with the results: Rbd Bench Results. (See the rbd bench example after this list.)
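
For items 1-2, a minimal sketch of a pull sync job that only transfers the newest snapshots of a fixed VMID range. It assumes a recent PBS that supports the group-filter and transfer-last sync-job options; the job name, datastore names, remote name, and regex are placeholders, not taken from the thread:

    # Pull only the 3 most recent snapshots of the groups vm/6000 .. vm/6007
    # from the remote "big-pbs" (datastore "main-store") into "local-store".
    proxmox-backup-manager sync-job create pull-vm-6000-6007 \
        --store local-store \
        --remote big-pbs \
        --remote-store main-store \
        --group-filter 'regex:^vm/600[0-7]$' \
        --transfer-last 3 \
        --schedule daily

Recent PBS versions expose the same group filter and transfer-last fields in the sync job dialog of the web UI, so the existing job can also be adjusted there.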
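
For items 3-5, a sketch of the client-side Ceph options for the RBD persistent write-back cache. It assumes Ceph Quincy and the "ssd" cache mode, which writes to a regular file on an SSD/NVMe-backed filesystem and, unlike the "rwl" (pmem) mode, does not require a DAX mount; the cache path is a hypothetical mount point:

    [client]
        rbd_plugins = pwl_cache
        # "ssd" mode: plain file on a fast filesystem, no DAX needed.
        # "rwl" mode: requires persistent memory exposed via DAX.
        rbd_persistent_cache_mode = ssd
        rbd_persistent_cache_path = /mnt/pwl-cache
        rbd_persistent_cache_size = 10G

This is only the upstream Ceph side of the configuration; whether the librbd client used by a given PVE version picks it up for a VM may still depend on the PVE and librbd versions in use.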
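
For items 8-9, a sketch of how the mclock capacity values can be inspected and, if the automatic benchmark did not store them, set by hand with the standard ceph config commands. The OSD id and the IOPS figure are placeholders:

    # List whatever mclock capacity values are currently in the config database
    ceph config dump | grep osd_mclock_max_capacity_iops

    # Show the value a specific OSD is actually using
    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd

    # Store a value manually (1500 is a placeholder; derive it from your own
    # measurement, e.g. the result of "ceph tell osd.0 bench")
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 1500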
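
For item 10: in a replicated pool the client does write to a single OSD first, the primary OSD of the object's placement group, which then replicates the write to the secondary OSDs of that PG before acknowledging. One way to see this mapping for a concrete object (pool and object names below are hypothetical):

    # Prints the PG and its up/acting OSD set; the first OSD in the acting set
    # is the primary that receives the client write.
    ceph osd map rbd-pool rbd_data.someobject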
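
For items 12 and 15, a sketch of how the MTU is usually pinned in /etc/network/interfaces on a PVE host using Open vSwitch, so that it survives a reboot instead of falling back to 1500. Interface names, the bond mode, and the address are illustrative, not taken from the thread:

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds enp1s0 enp2s0
        ovs_options bond_mode=balance-slb
        ovs_mtu 9000

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        ovs_type OVSBridge
        ovs_ports bond0
        ovs_mtu 9000

Depending on the setup, the physical NIC stanzas may also need a matching MTU; whether this covers the internal ovs-system device as well can depend on the openvswitch version.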
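
For item 20, a sketch of the kind of rbd bench invocation those spreadsheet numbers typically come from; the pool and image names are hypothetical:

    # 4 KiB writes, 16 threads, 1 GiB total, sequential pattern
    rbd bench --io-type write --io-size 4096 --io-threads 16 \
        --io-total 1G --io-pattern seq rbd-pool/bench-img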
