Search results

  1. Network optimization for ceph.

    No. I forced the nvme device class.
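
    A minimal sketch of forcing the device class on an OSD, assuming a hypothetical OSD ID:

      ceph osd crush rm-device-class osd.0
      ceph osd crush set-device-class nvme osd.0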
  2. Incomprehensible situation with MTU OVS

    The command ip link set bond0 mtu 9000 solves the problem by increasing the MTU, but as far as I know this is not a proper solution. :eek:
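
    A sketch of making the MTU persistent in /etc/network/interfaces, assuming hypothetical physical port and bridge names (enp1s0f0, enp1s0f1, vmbr1):

      auto bond0
      iface bond0 inet manual
          ovs_bonds enp1s0f0 enp1s0f1
          ovs_type OVSBond
          ovs_bridge vmbr1
          ovs_mtu 9000
          # with ifupdown2, a plain "mtu 9000" line on the bond and on each
          # physical port may be what is needed instead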
  3. First go to https://drive.google.com/drive/folders/1DA0I-X3qsn_qZbNJRoJ991B5QsNHFUoz and...

    First go to https://drive.google.com/drive/folders/1DA0I-X3qsn_qZbNJRoJ991B5QsNHFUoz and download tgt-1.0.80 and install it on OviOS. Then in https://drive.google.com/drive/folders/1Ov2rjgIFMWH3hR5FchwiouOsOQYXdHMC
  4. Network optimization for ceph.

    The scenario is quite simple: using our internal services with fault tolerance. Still, I don't understand one thing. When I ran the data pool on NVMe, the network transfer rate still did not exceed 1.5 Gbit/s, just as with conventional HDDs. Why?
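
    One way to check whether the cluster itself or the client path is the bottleneck is a short rados bench run; the pool name below is hypothetical:

      rados bench -p testpool 30 write --no-cleanup
      rados bench -p testpool 30 seq
      rados -p testpool cleanup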
  5. Incomprehensible situation with MTU OVS

    A situation has arisen: the MTU was set to 9000 via the web interface, with OVS in use. After restarting the server, the MTU on some interfaces (bond and ovs-system) drops to 1500. How can I fix this? proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve) pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)...
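
    A quick way to inspect the effective MTU and the OVS layout after a reboot:

      ip -br link | grep -E 'bond0|ovs-system'
      ovs-vsctl show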
  6. Network optimization for ceph.

    My question, however, was whether the results I had provided were adequate. In any case, below are the full test results from inside the virtual machine with various combinations. Test results of Proxmox VM configurations on CEPH HDD
  7. Network optimization for ceph.

    I changed the virtual machine settings. Note that in the first and second tests the io-thread option is enabled, which is offered by default. agent: 1 boot: order=sata0;virtio0 cores: 4 cpu: host machine: pc-q35-7.2 memory: 8192 meta: creation-qemu=7.2.0,ctime=1685091547 name: WINSRV2022 net0...
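
    For reference, a hedged sketch of toggling iothread on a disk; the VM ID and volume name are assumptions, and the change only takes effect after a full stop/start:

      qm set 100 --virtio0 ceph-hdd:vm-100-disk-0,iothread=1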
  8. Network optimization for ceph.

    Yes, my drives are HDDs. Below is information about one of them. I understand that performance on HDDs will be an order of magnitude lower than on NVMe or SSD, but this is the equipment I have right now. I want to understand what optimal results I can still get on this type of...
  9. Network optimization for ceph.

    Yes, the disks really are HDDs, but the log (WAL) and the OSD database have been moved to NVMe.
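
    A minimal sketch of creating an OSD with its DB/WAL placed on a separate NVMe; the device paths and DB size are assumptions:

      pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 250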
  10. Network optimization for ceph.

    I did some tests and I can't figure out whether they are normal or not. Can anyone explain? During the test I used --io-size 4096; according to the documentation, data is transferred between nodes in chunks of this size. Below is a link to a Google spreadsheet with the results. Rbd Bench Results
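
    For context, an rbd bench invocation along these lines; the pool and image names are hypothetical:

      rbd bench --io-type write --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand testpool/testimage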
  11. Network optimization for ceph.

    iperf3 results root@nd01:~# iperf3 -c nd02 Connecting to host nd02, port 5201 [ 5] local 10.50.253.1 port 40590 connected to 10.50.253.2 port 5201 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5] 0.00-1.00 sec 971 MBytes 8.14 Gbits/sec 328 1.07 MBytes [ 5]...
  12. Network optimization for ceph.

    For several weeks now, I've been struggling to improve the performance of Ceph on 3 nodes. Each node has 4 disks of 6 TB plus one 1 TB NVMe where RocksDB/WAL are placed. I can't seem to get Ceph to run fast enough. Below are my config files and test results: pveversion -v proxmox-ve: 7.4-1...
  13. Ceph tier cache question

    And what about dm-cache, which, as I understand it, came to replace the tier cache? Does it make sense to use it? Suppose I have 4 HDDs of 4 TB named /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd, one 2 TB SSD named /dev/sde and another 240 GB SSD with...
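
    A hedged sketch of an LVM dm-cache setup with those device names; the volume group name and cache size are assumptions:

      pvcreate /dev/sda /dev/sde
      vgcreate vgdata /dev/sda /dev/sde
      lvcreate -n data -l 100%PVS vgdata /dev/sda
      lvcreate --type cache-pool -L 500G -n cpool vgdata /dev/sde
      lvconvert --type cache --cachepool vgdata/cpool vgdata/data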
  14. Ceph tier cache question

    Thanks for the answer. Then some questions arise. 1. Can I partition an existing NVMe into an equal number of partitions, for example 4 partitions of 250 GB each on every node, and specify these partitions when creating OSDs to store RocksDB and the WAL? 2. Is it possible to specify one NVMe...
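
    One possible way to carve up the NVMe, as a sketch only; the device path is an assumption, and whether a raw partition is accepted as db_dev may depend on the PVE version:

      sgdisk -n 1:0:+250G -n 2:0:+250G -n 3:0:+250G -n 4:0:+250G /dev/nvme0n1
      pveceph osd create /dev/sda --db_dev /dev/nvme0n1p1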
  15. Ceph tier cache question

    Hi all. I have the following configuration: 3 nodes, each with 4 6 TB disks and one 1 TB NVMe disk, for a total of 5 OSDs per node. I decided to enable the Ceph caching functionality by following these steps: Will this have any effect? Or are all these steps useless? Possibly a misconfiguration?
  16. VM migration problem

    Finally. The problem was the MTU on the switch: it was set to 9000. After setting it to 12000, everything worked correctly. Something like that :)
  17. VM migration problem

    I think the problem is with the network. The bridge was built as a Linux Bridge, and I see some kind of erratic behavior. For example, iperf3 shows a result of 0. [ 5] local 10.8.6.3 port 36416 connected to 10.8.6.2 port 5201 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5]...
  18. VM migration problem

    proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve) pve-manager: 6.4-15 (running version: 6.4-15/af7986e6) pve-kernel-5.4: 6.4-20 pve-kernel-helper: 6.4-20 pve-kernel-5.4.203-1-pve: 5.4.203-1 ceph-fuse: 12.2.11+dfsg1-2.1+b1 corosync: 3.1.5-pve2~bpo10+1 criu: 3.11-3 glusterfs-client: 5.5-3...
  19. VM migration problem

    Colleagues, I ran into a problem migrating a virtual machine from one cluster node to another when using lvmthin. The essence of the problem is that the process seems to be running, but the result stays at 0%. 2023-05-16 22:12:49 starting migration of VM 137 to node 'node02'...
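
    For local lvmthin volumes, a migration is typically started along these lines; the flags shown are a sketch, with the VM ID and node name taken from the log excerpt:

      qm migrate 137 node02 --online --with-local-disks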
  20. V5.1 Reboot Error - Volume group "pve" not found

    End of story. I managed to clone the 500 GB disk to a 1 TB disk via dd. The copy took a long time, more than 8 hours. After that, using the gparted utility, I grew the partition from 465 GB to 550 GB. I rebooted the machine and vgchange -ay ssd completed successfully. The lvm partition was...
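
    A hedged sketch of the clone and activation steps described above; the source and target device names are placeholders:

      dd if=/dev/sdX of=/dev/sdY bs=4M status=progress
      vgchange -ay ssd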