Hi
I understand that a PVE backup job without retention config inherits the config of the PVE storage that's being used for the backup. But what's the purpose of Prune and GC on the PBS target itself? Does that get applied if the job has no retention config and the PVE storage has no retention...
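For reference, this is roughly where we set retention on the PVE side today (just a sketch; the storage name, server address and keep values are made-up examples):

# /etc/pve/storage.cfg - retention set on the PBS-backed storage
pbs: pbs-lab
        datastore backup
        server 192.0.2.10
        prune-backups keep-daily=7,keep-weekly=4

# or per job on the CLI, which overrides the storage setting for that run
vzdump 100 --storage pbs-lab --prune-backups keep-daily=14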
There are no errors in dmesg. The lab kit is from our past generation production virtualisation (OnApp) cluster and ran with those SSDs and those controllers. The gear is IBM System X and the standard controller is RAID capable but running those drives as JBOD.
There are 3 other boxes of the...
Hi Dominik
As a test I just wrapped smartctl with a shell script that runs smartctl and exits with 0. I'm getting all the expected data in the PBS Web UI now. Clearly not a good solution, but it appears the exit status doesn't affect gathering the data.
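For anyone wanting to reproduce the test, the wrapper is nothing more than this (a rough sketch of what I did, not a fix; it assumes the real binary has been moved aside to smartctl.real):

#!/bin/sh
# Temporary test wrapper for smartctl: run the real binary but always exit 0,
# so PBS still gets the SMART output even when smartctl returns a non-zero status.
/usr/sbin/smartctl.real "$@"
exit 0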
Hi
I've just run up PBS 2.2-1 in our lab to test it out. Installation was onto an existing Debian 11 server. Everything is going pretty well, except for one issue: it can't run smartctl on the drives. Trying to view the SMART values for a disk shows the attached (failed -...
We run a similar node configuration other than the networking. We run 40Gbps Ethernet for the Ceph traffic, but I'd run 100Gbps these days (we built this cluster a couple of years ago). Moving Ceph away from 10GbE makes a big difference. We did some benchmarking and shared the results here ...
This is one obvious feature that's missing from PVE (we love PVE by the way, just wish it had this). We do not want to start maintenance on a node until we know it's not running any workloads. We've ended up writing our own scripts to manage this, but it's a clunky solution. All we want is a...
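For what it's worth, the core of our check is just something like this (a sketch only; it looks for running guests on the local node before we allow maintenance to start):

#!/bin/sh
# Abort if any VM or container is still running on this node.
running_vms=$(qm list 2>/dev/null | awk '$3 == "running"' | wc -l)
running_cts=$(pct list 2>/dev/null | awk '$2 == "running"' | wc -l)
if [ "$running_vms" -gt 0 ] || [ "$running_cts" -gt 0 ]; then
    echo "Node still has running workloads - not safe for maintenance" >&2
    exit 1
fi
echo "No running workloads on this node"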
Hey d1_sen.
I would also have assumed that a firewall restart would have fixed this. And I was under the impression that PVE 7 set the correct forwarding any time a firewall rule was pushed. Have you tried editing the rules of a VM to see if that enables net.bridge.bridge-nf-call-iptables?
To see if it's the same problem, check the output of:
sysctl net.bridge.bridge-nf-call-iptables
If that's set to 0 then the VM traffic isn't being passed through iptables. Setting that to 1 will fix things.
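In other words:

sysctl -w net.bridge.bridge-nf-call-iptables=1

To make it survive a reboot you can add the same setting to /etc/sysctl.conf, although the PVE firewall should normally set it itself when it's active.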
* Undo what you've done to set up your CephFS.
* In the GUI, under the node-level Ceph menu, create a pool with the settings you want.
* In the GUI, under the Datacenter-level Storage menu, add a new RBD storage using the pool (see the CLI equivalent below).
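If you prefer the CLI, roughly the same thing (the pool and storage names below are just examples):

# create the Ceph pool (adjust size/PG settings as needed)
pveceph pool create vm-pool

# add it as RBD storage at the datacenter level
pvesm add rbd vm-rbd --pool vm-pool --content images,rootdir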
David
...
The comments on this thread regarding a Ceph upgrade without a reboot match our experience. We upgraded to 6.3, and after testing that everything was OK we upgraded Ceph to Octopus. We rebooted during the PVE upgrade but did not reboot after the Ceph upgrade. That matches what others have said...
@t.lamprecht, this is marked as solved but I don't know what the solution is.
I am seeing this same problem. We have a prod cluster and a dev cluster that we upgraded to 6.3 & Octopus about 5 weeks ago. We recently noticed spoofed traffic coming out of a prod cluster node and (with the help...
It would be good if this were a configurable setting. The hard-coded value may not be appropriate for all environments. We'd like to run a non-default value but don't feel that editing that script on each node every time we upgrade is a sensible approach.
Thanks
David
Yes, it's a mess and a really bad feature release. I raised a ticket with WHMCS and they said in a reply that they are considering letting us disable this feature in a future version. They don't seem to understand (or care) how bad this feature is or how much impact it has for people using...