Recent content by Vasilisc

  1. Erasure Code and failure-domain=datacenter

    The official documentation doesn't give me enough knowledge and examples. I don't understand how to give the Erasure Code pool sufficient redundancy to survive the failure of one data center out of three. On the test bench I achieved an even distribution of servers across the datacenters. Five...
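    A minimal sketch of the kind of CRUSH rule that makes this work, assuming three datacenter buckets under the default root and k=6,m=3 (rule name and id are illustrative): it picks 3 datacenters and 3 OSDs on distinct hosts in each, so losing one datacenter costs exactly m=3 shards and the data stays readable.

        rule ec_by_datacenter {
            id 2
            type erasure
            step set_chooseleaf_tries 5
            step set_choose_tries 100
            step take default
            step choose indep 3 type datacenter
            step chooseleaf indep 3 type host
            step emit
        }

    Note that the pool's min_size then needs attention: the default of k+1=7 would pause I/O once a datacenter failure leaves only 6 shards.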
  2. Erasure Code and failure-domain=datacenter

    I need to implement fault tolerance at the datacenter level in a Proxmox VE hyper-converged cluster (pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.13-1-pve)) with Ceph Reef 18.2.1. To test future changes, I created a virtual test bench in VirtualBox that closely mimics my cluster in...
  3. Erasure Code and failure-domain=datacenter

    Please help me with some advice. In my test setup with three data centers, I need to create an Erasure Code pool for cold data. I followed the documentation at https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_ec_pools and chose k=6,m=3 (I also tried k=4,m=2 from the documentation) in...
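    For reference, the probable cause of the mapping trouble: with crush-failure-domain=datacenter, CRUSH tries to put each of the k+m shards into a distinct datacenter, so k=6,m=3 would need nine datacenters, not three. A sketch of the two pool-creation variants (pool and profile names are illustrative):

        # plain Ceph: define a profile, then create the pool from it
        ceph osd erasure-code-profile set ec_cold k=6 m=3 crush-failure-domain=datacenter
        ceph osd pool create cold erasure ec_cold
        # Proxmox wrapper from the linked docs
        pveceph pool create cold --erasure-coding k=6,m=3,failure-domain=datacenter

    Spreading nine shards across only three datacenters instead requires a custom CRUSH rule like the one sketched under item 1 above.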
  4. [SOLVED] Proxmox VE + Ceph cluster and two datacenters. Too many objects are misplaced; try again later.

    Thank you very much! Everything worked out: I added a third datacenter and moved some of the servers to it, and the Ceph cluster rebalanced the data.
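    A hedged sketch of the steps described here (bucket and host names are illustrative):

        ceph osd crush add-bucket dc3 datacenter    # create the third datacenter bucket
        ceph osd crush move dc3 root=default        # attach it under the root
        ceph osd crush move pve5 datacenter=dc3     # relocate hosts into it
        ceph osd crush move pve6 datacenter=dc3

    After the move, Ceph backfills the misplaced PGs on its own.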
  5. [SOLVED] Proxmox VE + Ceph cluster and two datacenters. Too many objects are misplaced; try again later.

    Please help me with your advice. I need to implement fault tolerance at the datacenter level in a Proxmox VE hyper-converged cluster (pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.13-1-pve)) with Ceph Reef 18.2.1. To test future changes, I created a virtual test bench in VirtualBox...
  6. ceph warning post upgrade to v8

    I updated my cluster and the problem was fixed. Thanks for the great job. pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve)
  7. ceph warning post upgrade to v8

    Upgrade PVE 8.0 -> 8.1
    # pveversion
    pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.5.11-4-pve)
    # ceph -s
    health: HEALTH_WARN
    Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
    # systemctl status ceph-mgr@pve1
    Nov 25 12:58:16...
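    Until fixed packages arrive (the follow-up in item 6 above confirms an update resolved it), a commonly used workaround is to restart the manager or keep the dashboard module disabled; a sketch, assuming the active mgr runs on pve1:

        systemctl restart ceph-mgr@pve1     # clears the failed-dependency state for now
        ceph mgr module disable dashboard   # or keep the dashboard off until the fix lands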
  8. Proxmox VE 8.0 (beta) released!

    Where can I get the pve7to8 utility?
    # pveversion
    pve-manager/7.4-3/9002ab8a (running kernel: 5.15.107-1-pve)
    # pve7to8
    -bash: pve7to8: command not found
    # apt show pve7to8
    N: Unable to locate package pve7to8
    N: Unable to locate package pve7to8
    E: No packages found
    # dpkg -S pve7to8...
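    pve7to8 ships inside pve-manager rather than as a separate package, so the usual answer is to bring the node up to the latest 7.4 packages first; a sketch:

        apt update
        apt dist-upgrade    # pulls a pve-manager build that includes pve7to8
        pve7to8 --full      # run the full set of upgrade checks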
  9. [SOLVED] Garbage collector too slow (again).

    I'm very sorry, but I just waited a little longer and the reading speed increased. Perhaps the earlier changes to the ZFS ARC settings helped.
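    The ARC change mentioned here is typically made via a module parameter; a sketch, with the 32 GiB value purely illustrative:

        # /etc/modprobe.d/zfs.conf
        options zfs zfs_arc_max=34359738368
        # apply at runtime without a reboot:
        echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max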
  10. [SOLVED] Garbage collector too slow (again).

    RAM:
    # free -h --si
                  total        used        free      shared  buff/cache   available
    Mem:            64G        1.5G         62G        3.0M        269M         62G
    Swap:          8.2G          0B        8.2G
    CPU:
    # lscpu
    # fio --filename=test --direct=1 --sync=1 --rw=randread...
  11. [SOLVED] Garbage collector too slow (again).

    I have read all the forum posts related to slow GC, but I have to bring this topic up again and ask for help. I use PBS 2.3-2. The disk subsystem itself gives no cause for complaint. I ran tests: 1) copying a single huge file; 2) unpacking documents from an archive to...
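    PBS garbage collection is dominated by small random reads and atime updates in the chunk store, so a 4k random-read test reflects it better than copying one huge file. A sketch extending the fio invocation quoted in item 10 above; every parameter beyond those quoted is illustrative:

        fio --name=randread --filename=test --size=4G \
            --direct=1 --sync=1 --rw=randread --bs=4k \
            --iodepth=1 --numjobs=1 --runtime=60 --time_based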
  12. [SOLVED] The problem when using resources in parallel

    I decided to use ZFS's ability to change the mount point:
    # zfs set mountpoint=/mnt/zabbix SafePlace/zabbix
    # zfs get mountpoint SafePlace/zabbix
    NAME              PROPERTY    VALUE        SOURCE
    SafePlace/zabbix  mountpoint  /mnt/zabbix  local
  13. [SOLVED] The problem when using resources in parallel

    The hardware server for Proxmox Backup Server is very powerful. Due to budget constraints, I was forced to use part of the server's capacity for another task: MySQL for Zabbix. I had to allocate space for huge Zabbix MySQL tables in the same SafePlace pool that Proxmox Backup Server uses to...
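    A sketch of carving out such a dataset, matching the names shown in item 12 above; the 16k recordsize is a common InnoDB tuning, not something stated in the thread:

        zfs create -o mountpoint=/mnt/zabbix SafePlace/zabbix
        zfs set recordsize=16k SafePlace/zabbix   # match InnoDB's 16 KiB page size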
  14. PVE 6.3 the storage size was displayed incorrectly.

    After adding Proxmox Backup Server 1.0-6 to a Proxmox VE 6.3-3 cluster, the storage size was displayed incorrectly. The reported total is inflated because the same Proxmox Backup Server datastore is "connected" to every node in the cluster and counted once per node.
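    For example (numbers illustrative): one 10 TB PBS datastore attached to a five-node cluster is listed once per node, so the summed storage view reports 50 TB rather than 10 TB.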
