Thanks for the answer. Can you give an example of what the filter should look like if, on the main PBS, I need to pull only the last 3 copies of the VMs with IDs 6000-6007?
Hi all. I have the following question. There are two PBS instances: one with a large amount of storage, where backups from different Proxmox VE nodes are made, and a second with a small amount of storage. Is it possible to pull only three backups of certain virtual machines from the PBS with the large...
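For illustration, assuming a PBS version that already supports the transfer-last option on sync jobs, a pull sync job on the small PBS could be restricted roughly like this (the job ID is a placeholder, and this is only a sketch, not a confirmed recipe):

proxmox-backup-manager sync-job update pull-from-main \
    --group-filter 'regex:^vm/600[0-7]$' \
    --transfer-last 3

The group filter keeps only the backup groups vm/6000 through vm/6007, and transfer-last 3 should limit each group to its three most recent snapshots.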
I would like to try using an SSD as an RBD persistent cache device for Ceph, but it didn't work for me because I can't specify the DAX mount option. Is this functionality missing in PVE, or am I doing something wrong?
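For reference, a rough sketch of the client-side options from the Ceph persistent write-back cache documentation, assuming the pwl_cache plugin is available in the librbd that PVE ships; the cache path is a placeholder. As I understand the docs, only the pmem-backed rwl mode needs a DAX mount, while ssd mode targets an ordinary filesystem on an SSD:

[client]
    rbd_plugins = pwl_cache
    rbd_persistent_cache_mode = ssd        # 'rwl' is the pmem/DAX mode, 'ssd' is for a regular SSD
    rbd_persistent_cache_path = /mnt/pwl-cache
    rbd_persistent_cache_size = 1G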
I understand, thanks for the tip. I just don't understand the usage scenario itself yet. I watched the video and read the help, but the essence is still not entirely clear to me.
Greetings. Can I completely disable SDN in a Proxmox VE 8 cluster? I don't plan to use it. If there is no way to remove it, how can I hide the SDN settings from the interface?
ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable)
I tested cluster performance with the WAL/DB hosted on NVMe and wiped and re-created the OSDs several times. The last time I added them, the values did not appear. I also rolled back to configuration version "0" with the ceph config...
I have been configuring and testing Ceph. When OSDs were first added, the performance values of the added OSDs were automatically added to the configuration database. However, at some point this stopped working. Should it still work? Maybe some update disabled this functionality?
Running...
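For context, those automatically measured values end up in the config database as the mClock scheduler's osd_mclock_max_capacity_iops_hdd / _ssd entries. A hedged way to check them and, if needed, re-trigger the startup benchmark on a Quincy cluster (the OSD ID is a placeholder):

ceph config dump | grep osd_mclock_max_capacity_iops
ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
# force the on-init bench to run again for that OSD, then restart it
ceph config set osd.0 osd_mclock_force_run_benchmark_on_init true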
Can you clarify how the process of writing to and reading from the OSDs works? From the official documentation it looks like exactly one OSD is used rather than a group of OSD disks. That is, in effect, it writes to a specific OSD, which then replicates to an OSD on node 2 and also replicates to an OSD on node 3...
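That matches how replicated pools behave: each object maps to a placement group whose acting set has one primary OSD, the client talks only to that primary, and the primary fans the write out to the other replicas. One way to see this for a given object (pool and object names are placeholders):

ceph osd map mypool some-object
# illustrative output: ... -> pg 2.3f -> up ([4,1,7], p4) acting ([4,1,7], p4)
# 'p4' marks the primary OSD the client writes to; OSDs 1 and 7 receive the replicas from it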
First, go to https://drive.google.com/drive/folders/1DA0I-X3qsn_qZbNJRoJ991B5QsNHFUoz
and download tgt-1.0.80, then install it on OviOS.
Then, in https://drive.google.com/drive/folders/1Ov2rjgIFMWH3hR5FchwiouOsOQYXdHMC
The scenario is quite simple: running your internal services with fault tolerance.
Still, I don't understand one thing: when I ran the data pool on NVMe, the network transfer rate still did not exceed 1.5 Gbit/s, just as with conventional HDDs. Why?
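One way to narrow down whether that ~1.5 Gbit/s ceiling comes from the network or from the OSDs is to test them separately; a rough sketch, with the host name and pool name as placeholders:

# raw network throughput between two cluster nodes
iperf3 -s                      # on node A
iperf3 -c nodeA -t 30          # on node B

# Ceph-level throughput straight against the pool, bypassing the VM layer
rados bench -p mypool 30 write --no-cleanup
rados bench -p mypool 30 rand
rados -p mypool cleanup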
I have run into a situation: I set MTU 9000 via the web interface, and the setup uses OVS. After restarting the server, the MTU on some interfaces (the bond and ovs-system) drops back to 1500. How can I fix this?
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)...
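For comparison, the OVS examples in the Proxmox wiki set the MTU with ovs_mtu on every OVS interface in /etc/network/interfaces; a sketch with placeholder interface names, assuming an LACP bond under an OVS bridge:

auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds enp1s0f0 enp1s0f1
    ovs_options bond_mode=balance-tcp lacp=active
    ovs_mtu 9000

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
    ovs_mtu 9000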
My question, however, was whether the results I had provided were adequate. In any case, below are the full test results from inside the virtual machine with various combinations.
Test results of Proxmox VM configurations on CEPH HDD
I changed the virtual machine settings. Note also that in the first and second tests the iothread option is enabled, which is offered by default.
agent: 1
boot: order=sata0;virtio0
cores: 4
cpu: host
machine: pc-q35-7.2
memory: 8192
meta: creation-qemu=7.2.0,ctime=1685091547
name: WINSRV2022
net0...
Yes, my drives are HDDs. Below is information about one of them. I understand that performance on HDDs will be an order of magnitude lower than on NVMe or SSD, but this is the hardware I have for now. I want to understand what optimal results I can realistically get on this type of...
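For repeatable in-guest numbers on the Windows VM, one way to run a 4k random-write test is with fio (assuming fio for Windows is installed; all parameters here are only illustrative, not the exact ones behind the spreadsheet):

fio --name=randwrite-test --ioengine=windowsaio --direct=1 --rw=randwrite --bs=4k --iodepth=32 --size=4G --runtime=60 --time_based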
I did some tests and I can't figure out whether the results are normal or not. Can anyone explain? During the test I used --io-size 4096; according to the documentation, data is transferred between nodes in chunks of this size.
Below is a link to a Google spreadsheet with the results.
Rbd Bench Results
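For reference, a typical rbd bench invocation with that I/O size looks roughly like this (pool and image names are placeholders; the exact parameters used for the spreadsheet may differ):

rbd bench --io-type write --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand mypool/test-image
rbd bench --io-type read --io-size 4096 --io-threads 16 --io-total 1G --io-pattern rand mypool/test-image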