Now I am planning to buy a new server dedicated to PBS, to back up around 50-100 VMs.
We keep growing, and I would like to add storage on demand.
Can I easily add storage to the main backup pool?
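If the datastore sits on a ZFS pool, growing it is usually just a matter of adding another vdev; the datastore picks up the extra space with no reconfiguration. A minimal sketch, assuming a pool called "backup" and two new disks (all names hypothetical):

```shell
# Check the current pool layout and free space
zpool list backup
zpool status backup

# Grow the pool by adding a new mirror vdev (hypothetical disks)
zpool add backup mirror /dev/sdc /dev/sdd

# The datastore needs no change; verify it from the PBS side
proxmox-backup-manager datastore list
```

Note that vdevs added this way cannot be removed later in all pool layouts, so it is worth planning the vdev type (mirror vs. RAIDZ) before the first expansion.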
I have had some weird behaviour.
A couple of times I have gotten exclamation marks on the node and VMs of the Proxmox node my PBS was running from. I think this had to do with exceeding the allocated hard drive space, resulting in an I/O error.
I freed up some space on that particular...
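A full backing filesystem does surface as I/O errors on the node and its guests, so checking usage is the first step. A short sketch for diagnosing and reclaiming space (mount point and datastore name are hypothetical):

```shell
# See which filesystem is actually full
df -h /var/lib/vz

# Show all configured Proxmox storages and their usage
pvesm status

# On the PBS side, after pruning old snapshots, run garbage
# collection to actually release chunk space ("store1" is a
# hypothetical datastore name)
proxmox-backup-manager garbage-collection start store1
```

Pruned backups do not free disk space until garbage collection has run, which is a common reason a "full" datastore stays full.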
Hi everyone,
I am trying to figure out how to configure two separate clusters which I need to run. Each cluster will contain 5 nodes - 4 Dell R720s and a Supermicro 36-bay server for storage, making 8 Dells and 2 Supermicros in total. The Supermicros will house 20 x 14TB SATA drives per...
Hello everyone,
I have a 3-node cluster with 3 OSDs in each of these nodes.
My Ceph version is: 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable).
The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup stored on an NFS share, the...
Hello,
in my cluster consisting of 4 OSD nodes there is an HDD failure.
This currently affects 31 disks.
Each node has 48 HDDs of 2 TB each connected.
This results in this crushmap:
root hdd_strgbox {
        id -17              # do not change unnecessarily
        id -19 class hdd    # do not change...
I just recently upgraded the drives in my ZFS pool, and while in the shell zpool shows the correct amount of available space (about 11 TB; 4 x 4 TB in RAIDZ2), Proxmox is only showing just under 7 TB of available space in the pool. Is there a way to refresh or reset this? Because now my virtual...
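Two things are worth checking here, sketched below with a hypothetical pool name "tank". First, zpool list reports raw capacity including parity, while zfs list and the Proxmox GUI show usable space; in RAIDZ2, two drives' worth goes to parity, so 4 x 4 TB comes out to roughly 7 TB usable, which may simply be the correct number. Second, after swapping in larger disks, ZFS only grows the pool automatically if autoexpand is enabled:

```shell
# Check whether the pool is allowed to grow onto larger disks
zpool get autoexpand tank

# Enable it, then tell ZFS to claim the new space on each disk
# (device names hypothetical)
zpool set autoexpand=on tank
zpool online -e tank /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Compare raw pool size (zpool) with usable space (zfs)
zpool list tank
zfs list tank
```

If zfs list already matches what Proxmox shows, the GUI is reporting correctly and the difference is just parity overhead.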
I have 3 nodes with 2 x 1TB HDDs and 2 x 256GB SSDs each.
I have the following configuration:
1 SSD is used as the system drive (LVM-partitioned, so about a third is used for the system partition and the rest is split into 2 partitions for the 2 x HDDs' WALs).
The 2 x HDDs are in a pool (the default...
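A setup like this, one OSD per HDD with its WAL on an SSD partition, can be sketched with ceph-volume as used on current Proxmox/Ceph installs (all device names below are hypothetical):

```shell
# Create one OSD per HDD, pointing each WAL at its own SSD
# partition carved out of the system SSD
ceph-volume lvm create --data /dev/sdb --block.wal /dev/sda2
ceph-volume lvm create --data /dev/sdc --block.wal /dev/sda3

# Verify the resulting OSD/WAL layout
ceph-volume lvm list
```

One caveat of this layout: losing that single SSD takes down both HDD OSDs on the node at once, since it carries both WALs as well as the OS.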