So, figuring that Ceph probably uses those addresses to prioritize networks, I will set the public network to the VM network range 10.1.0.0/16.
So another question:
should PBS access the storage network using the public or the cluster network?
We have had this in our ceph.conf for a few years:
public_network = 10.11.12.0/24
cluster_network = 10.11.12.0/24
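For clarity, the plan from the top would change that to roughly the following (keeping cluster_network on the old range is my assumption, not a tested config):

# hypothetical revised ceph.conf
public_network = 10.1.0.0/16
cluster_network = 10.11.12.0/24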
PVE VMs run at 10.1.10.0/24 on a separate pair of switches in an LACP bond. corosync.conf uses two other NICs and switches for cluster communication.
Having read forum posts, pve...
Hello David
With Ceph 15 the script has the following issue:
ceph-deep-scrub-pg-ratio: line 104: $2: unbound variable
# line 104:
while read line; do set $line; echo $1 $($DATE -d "$2 $3" +%s); done | \
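In case it helps anyone, a guarded variant of that line that should avoid the abort when a record has fewer fields than expected (assuming the script runs under set -u; the ${2-}/${3-} defaults and the skip are my guesses, not tested against the Octopus output format):

while read line; do
    set -- $line                  # 'set --' so a field starting with '-' is not read as an option
    [ -n "${2-}" ] || continue    # skip records that lack a timestamp field
    echo $1 $($DATE -d "${2-} ${3-}" +%s)
done | \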
PS thank you for this script, we've been using it for a few years.
I want to manually edit corosync.conf to set ring network priorities. I am reading pve-docs/chapter-pvecm.html#pvecm_redundancy. My question is: how do I set the priority in the .conf?
# pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20
This is the totem...
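If I read corosync.conf(5) correctly, the per-link priority goes into interface subsections of the totem block as knet_link_priority (with knet in passive mode, the highest-priority connected link carries the traffic). A sketch mirroring the priorities from the pvecm example above, not a verified config:

totem {
  # existing totem settings stay as-is
  interface {
    linknumber: 0
    knet_link_priority: 15
  }
  interface {
    linknumber: 1
    knet_link_priority: 20
  }
}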
Hello, I am looking to update our Ceph upgrade procedure. Here is what we do now, per 2017 notes:
1. apt update && apt full-upgrade
2. restart monitors, one after the other (wait for healthy...
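A loop sketch for step 2, assuming systemd-managed mons and hypothetical node names pve1..pve3; it waits for HEALTH_OK before moving to the next node:

for node in pve1 pve2 pve3; do
    ssh "$node" systemctl restart ceph-mon.target
    until ceph health | grep -q HEALTH_OK; do sleep 10; done   # wait for healthy
done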
Does the ZFS mount point have directories like dump, template, etc.?
If so, there is a ZFS option to fix this; I'll look for it, as I used it again a few weeks ago.
Off topic: we are looking at upgrading our Ceph switches. We currently use Quanta LB6M 10GbE switches and have 40GbE cards. We think Mellanox/Nvidia switches running Cumulus Linux are the way to go.
However, I know little about this subject. Is Cumulus a good fit for labs and clusters?
Hello Spirit, during the switchover to ifupdown2 I noticed a few warnings, which you are probably already aware of. I assume these will not cause an issue, but perhaps there are settings to avoid the warnings?
# ifreload -a
warning: bond0: attribute bond-min-links is set to '0'
and...
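For context, the bond stanza in question looks roughly like the following; setting bond-min-links explicitly is my guess at silencing the warning (the slave names and mode are assumptions):

# hypothetical /etc/network/interfaces stanza for an LACP bond
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-min-links 1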
Hello,
On our PBS system a warning flashed about ifupdown2 (or ifdown2) missing, so I installed it. After doing so my existing bond did not work, so I had to change /etc/network/interfaces to the new bond directives.
I have 5 PVE nodes to get the bond working on.
My question: is ifupdown2...
Hello, for LXC backups to PBS or local vzdump we get frequent failures on busy systems.
### from email
606: 2020-11-01 19:57:16 INFO: Starting Backup of VM 606 (lxc)
606: 2020-11-01 19:57:16 INFO: status = running
606: 2020-11-01 19:57:16 INFO: CT Name: bc-sys6-buster
606: 2020-11-01 19:57:16...
I had assumed that a remote sync target did not need GC, since it should be a duplicate of the main PBS system. However, that is not the case.
The remote had 2TB+ more disk usage after a couple of months of syncs; running GC fixed that.
Question: once PBS is stable, should GC still be...
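For reference, what I ran on the sync target, assuming a datastore named store2 (the name is hypothetical, and the gc-schedule value is how I understand PBS calendar events):

# proxmox-backup-manager garbage-collection start store2
# proxmox-backup-manager datastore update store2 --gc-schedule daily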
Naturally, after marking this solved, there were more failures like this over the last two days:
- to local storage:
INFO: starting new backup job: vzdump --all 1 --mode snapshot --mailnotification failure --compress zstd --quiet 1 --mailto fbcadmin --storage z-local-nvme
INFO: skip external VMs: 108, 446...