I don't have enough knowledge, and the official documentation doesn't give enough examples. I don't understand how to give an Erasure Code pool sufficient redundancy to survive the failure of one data center out of three.
On the test bench I achieved an even distribution of servers across the datacenters. Five...
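The way I understand it (please correct me if I'm wrong): to survive the loss of one datacenter out of three, no more than m chunks may land in any single datacenter, and CRUSH has to be told to pick the datacenters first and then spread chunks inside each one. The rule below is only a sketch of that idea; the rule name and id are my own assumptions:

```shell
# Sketch only: a CRUSH rule that places an EC pool's k+m=6 chunks
# as 2 chunks in each of 3 datacenters. With k=4, m=2, losing one
# whole DC removes exactly 2 chunks, and the remaining 4 >= k,
# so the data stays readable.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Add a rule like this to crushmap.txt (name and id 2 are assumptions):
# rule ec_3dc {
#     id 2
#     type erasure
#     step take default
#     step choose indep 3 type datacenter   # pick all 3 DCs
#     step chooseleaf indep 2 type host     # 2 chunks per DC, on different hosts
#     step emit
# }

crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```

This assumes the CRUSH hierarchy already contains datacenter buckets with the hosts moved under them.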
I need to implement fault tolerance at the datacenter level in a hyperconverged Proxmox VE cluster (pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.13-1-pve)) with Ceph Reef 18.2.1.
To test future changes, I created a virtual test bench in VirtualBox that closely mimics my cluster in...
Please help me with some advice. In my test setup with three data centers, I need to create an Erasure Code pool for cold data.
I used the documentation:
https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_ec_pools
I chose k=6, m=3 (I also tried k=4, m=2 from the documentation) in...
Please help me with your advice.
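With only three datacenters, a profile straight from the docs cannot use crush-failure-domain=datacenter for k=6, m=3 or k=4, m=2, because CRUSH would then need k+m distinct datacenters. A sketch of what I believe is the intended approach; the profile, pool, and rule names are my own assumptions:

```shell
# Sketch: k=4, m=2 so the 6 chunks split as 2 per datacenter.
# crush-failure-domain=host here only shapes the default rule;
# the pool must be switched to a custom 3-DC rule afterwards.
ceph osd erasure-code-profile set ec-3dc-profile \
    k=4 m=2 \
    crush-failure-domain=host

# Create the pool and assign the custom rule (names are assumptions):
ceph osd pool create coldpool erasure ec-3dc-profile
ceph osd pool set coldpool crush_rule ec_3dc

# min_size: with a whole DC down, only k=4 chunks remain, so
# min_size must be <= 4 for the pool to keep serving I/O then.
ceph osd pool set coldpool min_size 4
```

Note that running at exactly k surviving chunks means no remaining redundancy until the DC recovers, so this is a trade-off, not a free lunch.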
Upgrade PVE 8.0 -> 8.1
# pveversion
pve-manager/8.1.3/b46aac3b42da5d15 (running kernel: 6.5.11-4-pve)
# ceph -s
health: HEALTH_WARN
Module 'dashboard' has failed dependency: PyO3 modules may only be initialized once per interpreter process
# systemctl status ceph-mgr@pve1
Nov 25 12:58:16...
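For what it's worth, the PyO3 complaint suggests the dashboard module cannot re-initialise inside an already-running mgr process. A restart gives the mgr a fresh interpreter, so the following sketch is worth testing (node name pve1 taken from the output above):

```shell
# Workaround sketch: disable the failed module, restart the mgr
# so PyO3 gets a fresh interpreter process, then re-enable it.
ceph mgr module disable dashboard
systemctl restart ceph-mgr@pve1      # node name from the log above
ceph mgr module enable dashboard
ceph -s                              # check whether HEALTH_WARN cleared
```

I can't promise this fixes it permanently; if the warning returns on the next module reload, it is likely the packaging issue rather than local misconfiguration.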
And where can I get the pve7to8 utility?
# pveversion
pve-manager/7.4-3/9002ab8a (running kernel: 5.15.107-1-pve)
# pve7to8
-bash: pve7to8: command not found
# apt show pve7to8
N: Unable to locate package pve7to8
N: Unable to locate package pve7to8
E: No packages found
# dpkg -S pve7to8...
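As far as I know, pve7to8 ships inside the pve-manager package itself rather than as a separate package, but only in the later 7.4 builds; 7.4-3 may simply predate the tool. A sketch of what I would try first:

```shell
# Bring the node up to the latest PVE 7.4 packages; to my knowledge
# newer pve-manager 7.4.x builds install the pve7to8 checklist tool.
apt update
apt dist-upgrade       # updates pve-manager within the 7.x branch
pve7to8 --full         # run the pre-upgrade checklist
```

If it is still missing afterwards, the configured apt repositories (enterprise vs. no-subscription) would be the next thing I'd check.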
I have read all the forum posts related to slow GC, but I have to bring this topic up again and ask for help.
I use PBS 2.3-2. The disk subsystem's performance gives me no cause for complaint.
I carried out the following tests:
1) copying a single huge file;
2) unpacking documents from the archive to...
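One caveat about those tests: PBS garbage collection walks the .chunks directory and touches the atime of a very large number of small files, so sequential tests like a big-file copy or an archive unpack don't really reflect its I/O pattern. A closer approximation could be a small-block random-read run with fio (the directory, sizes, and job count below are assumptions, to be adjusted to the datastore disk):

```shell
# Sketch: approximate GC's metadata-heavy, small-random-I/O pattern.
# Run against a scratch directory on the same disk as the datastore.
fio --name=gc-like --directory=/mnt/datastore-test \
    --rw=randread --bs=4k --size=1G --numjobs=4 \
    --ioengine=libaio --direct=1 --group_reporting
```

If random 4k IOPS are low here while sequential throughput is fine, that would explain slow GC despite the disks "not causing complaints".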
I decided to use ZFS's ability to change the mount point:
# zfs set mountpoint=/mnt/zabbix SafePlace/zabbix
# zfs get mountpoint SafePlace/zabbix
NAME              PROPERTY    VALUE        SOURCE
SafePlace/zabbix  mountpoint  /mnt/zabbix  local
The hardware server for Proxmox Backup Server is very powerful. Due to budget constraints, I was forced to use part of its capacity for another task: MySQL for Zabbix.
I had to allocate space for the huge Zabbix MySQL tables in the same SafePlace pool that Proxmox Backup Server uses to...
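Since the MySQL data now lives in the same pool as the PBS datastore, it may be worth fencing it off so a growing Zabbix database cannot fill SafePlace and break backups. A sketch using ZFS quotas and reservations (the 2T and 500G figures are arbitrary examples):

```shell
# Cap the Zabbix dataset and guarantee it a minimum, so the database
# and the PBS datastore cannot starve each other (example values):
zfs set quota=2T SafePlace/zabbix
zfs set reservation=500G SafePlace/zabbix
zfs get quota,reservation SafePlace/zabbix
```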
After adding Proxmox Backup Server 1.0-6 to a Proxmox VE 6.3-3 cluster, the storage size is displayed incorrectly.
The total size is inflated in proportion to the number of servers in the cluster, each of which has the same Proxmox Backup Server "connected" as storage.