Search results

  1. Network optimization for ceph.

    iperf3 results:
    root@nd01:~# iperf3 -c nd02
    Connecting to host nd02, port 5201
    [  5] local 10.50.253.1 port 40590 connected to 10.50.253.2 port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec   971 MBytes  8.14 Gbits/sec  328   1.07 MBytes
    [  5]...
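
    For reference, a minimal sketch of how such a measurement is taken on the ceph network; the hostname and stream count below are only illustrative:

        # On the receiving node, start an iperf3 server
        iperf3 -s

        # On the sending node, run several parallel streams for 30 seconds
        # to better saturate a bonded or 10/25 GbE link
        iperf3 -c nd02 -P 4 -t 30

    A high retransmit count (the Retr column) next to an otherwise decent bitrate is usually worth checking against MTU and offload settings.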
  2. Network optimization for ceph.

    For several weeks now I've been struggling to improve ceph performance on 3 nodes. Each node has 4 x 6 TB disks plus one 1 TB NVMe onto which RocksDB/WAL are offloaded. I can't get ceph to run fast enough. Below are my config files and test results: pveversion -v proxmox-ve: 7.4-1...
  3. Ceph tier cache question

    And what about dm-cache, which, as I understand it, replaced cache tiering? Does it make sense to use it? Suppose I have 4 HDDs of 4 TB named /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd, one 2 TB SSD named /dev/sde and another 240 GB SSD with...
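
    For reference, a minimal lvmcache (dm-cache) sketch for the layout described above, assuming /dev/sda is one of the HDDs and /dev/sde is the 2 TB SSD; names and the cache size are only illustrative:

        # Put the HDD and the SSD into the same volume group
        pvcreate /dev/sda /dev/sde
        vgcreate vg_slow /dev/sda /dev/sde

        # Data LV on the HDD, cache pool on the SSD
        lvcreate -n data -l 100%PVS vg_slow /dev/sda
        lvcreate --type cache-pool -L 400G -n cache vg_slow /dev/sde

        # Attach the cache pool to the data LV (writethrough by default)
        lvconvert --type cache --cachepool vg_slow/cache vg_slow/data

    Whether this helps under ceph is a separate question, since BlueStore already keeps its metadata on the DB/WAL device.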
  4. Ceph tier cache question

    Thanks for the answer. A few questions arise. 1. Can I partition the existing NVMe into an equal number of partitions, for example four 250 GB partitions per node, and specify these partitions when creating the OSDs to store RocksDB and WAL? 2. Is it possible to specify one nvme...
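
    A rough sketch of point 1 with ceph-volume, assuming a hypothetical ~250 GB partition /dev/nvme0n1p1 set aside for the DB (the WAL lives on the DB device unless a separate WAL device is given):

        # Data on the HDD, RocksDB/WAL on the NVMe partition
        ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p1

    On Proxmox VE the same separation can be done from the GUI or with pveceph osd create, which also accepts a dedicated DB device.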
  5. Ceph tier cache question

    Hi all. I have the following configuration: 3 nodes, each with 4 x 6 TB disks and one 1 TB NVMe disk, for a total of 5 OSDs per node. I decided to enable ceph's caching functionality by following these steps: Will this have any effect? Or are all these steps useless? Possibly a misconfiguration?
  6. VM migration problem

    Finally solved. The problem was the MTU on the switch: it was set to 9000. After raising it to 12000, everything worked correctly. Something like this )
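
    The extra headroom matters because switches often count Ethernet/VLAN headers in their MTU setting, so the switch may need a larger value than the hosts send. A quick way to verify, with placeholder interface and address:

        # Set jumbo frames on the host interface (must fit within the switch MTU)
        ip link set dev bond0 mtu 9000

        # Verify end to end: 8972 = 9000 - 20 (IP) - 8 (ICMP); -M do forbids fragmentation
        ping -M do -s 8972 10.8.6.2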
  7. VM migration problem

    I think the problem is with the network. The bridge is a Linux Bridge, and it behaves erratically. For example, iperf3 reports a bitrate of 0.
    [  5] local 10.8.6.3 port 36416 connected to 10.8.6.2 port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]...
  8. VM migration problem

    proxmox-ve: 6.4-1 (running kernel: 5.4.203-1-pve)
    pve-manager: 6.4-15 (running version: 6.4-15/af7986e6)
    pve-kernel-5.4: 6.4-20
    pve-kernel-helper: 6.4-20
    pve-kernel-5.4.203-1-pve: 5.4.203-1
    ceph-fuse: 12.2.11+dfsg1-2.1+b1
    corosync: 3.1.5-pve2~bpo10+1
    criu: 3.11-3
    glusterfs-client: 5.5-3...
  9. VM migration problem

    Colleagues, I had a problem migrating a virtual machine from one cluster node to another when using lvmthin. The migration process appears to run, but progress stays at 0%.
    2023-05-16 22:12:49 starting migration of VM 137 to node 'node02'...
  10. V5.1 Reboot Error - Volume group "pve" not found

    End of story. I managed to clone the 500 GB disk to a 1 TB disk with dd. The copy took a long time, more than 8 hours. After that I used gparted to grow the partition from 465 GB to 550 GB, rebooted the machine, and vgchange -ay ssd ran successfully. The lvm partition was...
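
    A hedged sketch of that clone-and-grow procedure; the device names are placeholders, and the source disk should not be mounted while copying:

        # Raw block copy from the old 500 GB disk to the larger disk
        dd if=/dev/sdX of=/dev/sdY bs=4M status=progress

        # After growing the partition (e.g. with gparted), reactivate the volume group
        vgchange -ay ssd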
  11. V5.1 Reboot Error - Volume group "pve" not found

    Unfortunately, I cannot find software that can recover the data. I need to restore the entire LVM partitions and export them to raw. Right now I am copying the data from the problem disk to a larger disk to test the theory that the PV will stop complaining about its size.
  12. V5.1 Reboot Error - Volume group "pve" not found

    Unfortunately, there is no way to return to the old controller. Perhaps it was a firmware issue; the firmware was older than on the other servers. But now the question is how to back up LVM if vgchange -ay does not work. I am already considering directly correcting the LVM configuration through...
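
    Correcting the LVM configuration by hand usually goes through the text metadata backups; a minimal sketch, assuming the VG is called pve and the file path is illustrative:

        # Dump the current VG metadata to a text file (LVM also keeps automatic
        # copies under /etc/lvm/backup and /etc/lvm/archive)
        vgcfgbackup -f /root/pve-meta.txt pve

        # After editing the file, write the metadata back to the PV(s)
        vgcfgrestore -f /root/pve-meta.txt pve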
  13. V5.1 Reboot Error - Volume group "pve" not found

    UPD: I made a backup of the GPT partition table on another node, since the disk layout and the disks used are identical, using the command:
    # sgdisk --backup={/path/to/file} {/dev/device/here}
    and restored it on the problem node using the command:
    # sgdisk --load-backup={/path/to/file}...
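
    Filled in with illustrative names, the same pair of commands looks like this; the backup is taken on a healthy node with an identical disk layout and replayed on the problem node:

        # On the healthy node: save the GPT of /dev/sda to a file
        sgdisk --backup=/root/sda-gpt.backup /dev/sda

        # On the problem node: write that partition table back to its /dev/sda
        sgdisk --load-backup=/root/sda-gpt.backup /dev/sda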
  14. V5.1 Reboot Error - Volume group "pve" not found

    Hello. I have a similar problem. In my case, however, the RAID controller failed. When I connected a new controller, the disk configuration could not be imported, so I had to recreate the configuration manually in the controller utility without initializing the disks. What is the way out of this...
  15. Ceph Nodes & PG's

    Greetings. I have a ceph cluster deployed on 3 nodes with 4 OSDs per node, 12 OSDs in total. Initially 32 PGs were created, and PG autoscale is off. Recently ceph started warning that the pool is not using the recommended number of PGs. I increased the number of PGs through the interface to the...
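
    As a worked example of the common rule of thumb (about 100 PGs per OSD, divided by the replica count and rounded to a power of two): 12 OSDs x 100 / 3 replicas = 400, rounded up to 512. The pool name below is a placeholder:

        # Raise the PG count on the pool (and keep pgp_num in step with it)
        ceph osd pool set mypool pg_num 512
        ceph osd pool set mypool pgp_num 512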
  16. Save/Export VM/CT backups from PBS to USB disk for offsite/offline backups

    A function to export a virtual machine in zstd format directly from PBS would be in great demand. For example, I need to transfer a copy of a virtual machine to another host that does not have PBS access. The PBS repository is hosted on iSCSI and sits outside the perimeter. So I have to make...
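
    One possible workaround, assuming a machine that can still reach the PBS datastore: pull the disk archive out as a raw image with proxmox-backup-client and compress it for transfer. The snapshot path, archive name, and repository below are purely illustrative:

        # Restore a single disk archive from PBS to a local raw image
        proxmox-backup-client restore \
            vm/137/2023-05-16T22:00:00Z drive-scsi0.img.fidx vm-137-disk-0.raw \
            --repository backup@pbs@pbs-host:datastore

        # Compress the raw image for the host without PBS access
        zstd vm-137-disk-0.raw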
  17. iscsi over zfs + ovios linux

    Great news. We will test.
  18. Adding NFS Share as Datastore in Proxmox Backup Server

    I'm currently using OviOS Linux as my main NFS storage and run PBS as a virtual machine. I'd like to continue using network storage as my backup target. How would I add my NFS share to the Proxmox Backup Server as storage?
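
    PBS itself manages local directory datastores, so the usual approach is to mount the NFS export on the PBS host first and then register that path; a minimal sketch with placeholder names:

        # Mount the OviOS NFS export on the PBS host (add it to /etc/fstab to persist)
        mkdir -p /mnt/nfs-backup
        mount -t nfs ovios-host:/export/backup /mnt/nfs-backup

        # Register the mounted path as a datastore
        proxmox-backup-manager datastore create nfs-store /mnt/nfs-backup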
  19. iscsi over zfs + ovios linux

    I understand. But in my situation I would like integration with a ready-made distribution; the issue is the limited staff available to maintain such a setup, plus I already make full use of NFS and SMB on OviOS. So I must wait. Unfortunately, I am not a programmer, so there is little I can do to help...
  20. Hide unused nodes for the user who has been given VM usage rights.

    I have a test lab deployed on three nodes for testing software on virtual machines. Quite often we have to give third parties access to virtual machines to test their software in our environment. We do not want to disclose how many physical nodes we have, but they see the infrastructure...
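
    The standard permission system restricts what a user can manage rather than hiding nodes from the resource tree, but scoping a user to individual guests at least keeps everything else off limits; a hedged sketch with placeholder user and VMID:

        # Create a user in the pve realm and set a password
        pveum user add tester@pve
        pveum passwd tester@pve

        # Grant only the VM-user role on one specific guest
        pveum acl modify /vms/137 --users tester@pve --roles PVEVMUser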
