Search results

  1. ozdjh

    PBS Prune and GC vs Job / Storage retention

    Hi. I understand that a PVE backup job without retention config inherits the config of the PVE storage that's being used for the backup. But what's the purpose of Prune and GC on the PBS target itself? Does that get applied if the job has no retention config and the PVE storage has no retention...
  2. ozdjh

    smartctl fails to get SMART values

    There are no errors in dmesg. The lab kit is from our past generation production virtualisation (OnApp) cluster and ran with those SSDs and those controllers. The gear is IBM System X and the standard controller is RAID capable but running those drives as JBOD. There are 3 other boxes of the...
  3. ozdjh

    smartctl fails to get SMART values

    Ok, no worries. Please see the attached.
  4. ozdjh

    smartctl fails to get SMART values

    Hi Dominik. As a test I just wrapped smartctl with a shell script that runs smartctl and exits with 0. I'm getting all the expected data in the PBS Web UI now. Clearly not a good solution, but it appears the exit status doesn't affect gathering the data.
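    The wrapper described in that post could look something like this. This is only a sketch: the path /usr/sbin/smartctl.real (i.e. where the real binary has been moved) is an assumption, and masking the exit status hides genuine SMART failures, so it's a diagnostic workaround, not a fix.

    ```shell
    #!/bin/sh
    # Hypothetical wrapper installed in place of smartctl.
    # Passes all arguments through so PBS still receives the JSON
    # output, but always exits 0 so a non-zero smartctl exit status
    # (e.g. 4) doesn't cause PBS to discard the data.
    /usr/sbin/smartctl.real "$@"
    exit 0
    ```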
  5. ozdjh

    smartctl fails to get SMART values

    Hi It produces 600 lines of JSON formatted SMART information and an exit status of 4. Do you want to see the JSON output?
  6. ozdjh

    smartctl fails to get SMART values

    Hi I've just run up PBS 2.2-1 in our lab to test it out. Installation was onto an existing Debian 11 server. Everything is going pretty well except one issue I'm seeing is that it can't run smartctl on the drives. Trying to view the SMART values for a disk shows the attached (failed -...
  7. ozdjh

    Considering Proxmox for hosting - What are your thoughts

    We run a similar node configuration other than the networking. We run 40Gbps Ethernet for the Ceph traffic, but I'd run 100Gbps these days (we built this cluster a couple of years ago). Moving Ceph away from 10GbE makes a big difference. We did some benchmarking and shared the results here ...
  8. ozdjh

    Feature request: maintenance mode and/or DRS

    This is one obvious feature that's missing from PVE (we love PVE by the way, just wish it had this). We do not want to start maintenance on a node until we know it's not running any workloads. We've ended up writing our own scripts to manage this but it's a clunky solution. All we want is a...
  9. ozdjh

    [SOLVED] PVE Firewall not filtering anything

    Hey d1_sen. I would also have assumed that a firewall restart would have fixed this. And, I was under the impression that PVE 7 set the correct forwarding any time a firewall rule was pushed. Have you tried editing the rules of a VM to see if that enabled net.bridge.bridge-nf-call-iptables ?
  10. ozdjh

    [SOLVED] PVE Firewall not filtering anything

    To see if it's the same problem, check the output of sysctl net.bridge.bridge-nf-call-iptables. If that's set to 0 then the VM traffic isn't being passed through iptables. Setting that to 1 will fix things.
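    The check and fix above, as shell commands run as root on the PVE node (the sysctl.d file name is an arbitrary choice, not a Proxmox convention):

    ```shell
    # Check whether bridged VM traffic is being passed through iptables
    sysctl net.bridge.bridge-nf-call-iptables

    # If it prints "= 0", enable filtering at runtime (takes effect immediately)
    sysctl -w net.bridge.bridge-nf-call-iptables=1

    # Persist the setting across reboots (file name is arbitrary)
    echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
    ```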
  11. ozdjh

    [SOLVED] Can you use Ceph as a storage for VM's disk image files?

    Good to hear. The only issue is that those other pools won't be used by anything so that's a little bit of wasted space.
  12. ozdjh

    [SOLVED] Can you use Ceph as a storage for VM's disk image files?

    * Undo what you've done to set up your CephFS.
    * In the GUI, in the Node-level Ceph menu, create a pool with the settings you want.
    * In the GUI, in the Datacenter-level Storage menu, Add a new RBD storage using the pool.

    David ...
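    For reference, the same GUI steps can be done from the CLI on a PVE node. The pool and storage names here are made-up examples, and this is a sketch under the assumption of a working Ceph setup on the cluster:

    ```shell
    # Create the RADOS pool (the Node-level Ceph menu step)
    pveceph pool create vm-disks

    # Register it cluster-wide as an RBD storage for VM disk images
    # (the Datacenter-level Storage menu step)
    pvesm add rbd ceph-vm --pool vm-disks --content images
    ```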
  13. ozdjh

    [SOLVED] Proxmox cluster slow to shutdown.

    We're still forcing a "swapoff" before we reboot during an upgrade.
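    For anyone hitting the same slow-shutdown issue, that workaround (run as root before the reboot) is simply:

    ```shell
    # Disable all swap devices/files so shutdown isn't delayed by
    # swapped-out pages; swap is re-enabled from /etc/fstab on next boot.
    swapoff -a
    ```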
  14. ozdjh

    [SOLVED] PVE Firewall not filtering anything

    The comments on this thread regarding a ceph upgrade without a reboot match our experience. We upgraded to 6.3 then after testing everything was OK we upgraded Ceph to Octopus. We rebooted during the PVE upgrade but did not reboot after the Ceph upgrade. That matches what others have said...
  15. ozdjh

    [SOLVED] PVE Firewall not filtering anything

    @t.lamprecht, this is marked as solved but I don't know what the solution is. I am seeing this same problem. We have a prod cluster and a dev cluster that we upgraded to 6.3 & Octopus about 5 weeks ago. We recently noticed spoofed traffic coming out of a prod cluster node and (with the help...
  16. ozdjh

    How to delay the HA procedure with 2 nodes

    It would be good if this was a configurable setting. The hard coded value may not be appropriate for all environments. We'd like to run a non-default value but don't feel that editing that script on each node every time we upgrade is a sensible approach. Thanks David
  17. ozdjh

    [SOLVED] Proxmox cluster slow to shutdown.

    As I mentioned in my last post, if you want to open a ticket and reference ours, let them know it was ticket 9647352
  18. ozdjh

    Alternatives to modules garden ? Anything?

    The criticism mentioned in the initial post of this thread was the woeful technical support offered by your company. That criticism is totally valid.
  19. ozdjh

    Alternatives to modules garden ? Anything?

    Yes, it's a mess and a really bad feature release. I raised a ticket with WHMCS and they said in a reply that they are considering letting us disable this feature in a future version. They don't seem to understand (or care) how bad this feature is or how much impact it has for people using...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
