Search results

  1.

    Switch off bayes autolearning but keep manual learning

    Hi everybody, from what I understand, Bayes is able to learn ham/spam automatically, but this is different from AWL. I, like many others, have observed that Bayes gives obvious spam mails a very low or negative score after running for some time. From what I understand this is due to Bayes not...
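A minimal sketch of what the poster is after, assuming the stock SpamAssassin configuration syntax (on PMG the effective template location may differ from /etc/mail/spamassassin/local.cf, and the maildir paths below are hypothetical): autolearning is switched off while Bayes scoring and manual training stay available.

```shell
# Config fragment (SpamAssassin local.cf syntax):
#   use_bayes 1            # keep Bayes scoring itself
#   bayes_auto_learn 0     # stop automatic ham/spam learning
# Manual training then still works via sa-learn:
sa-learn --spam /path/to/spam-maildir
sa-learn --ham  /path/to/ham-maildir
```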
  2.

    Vulnerability in ClamAV

    https://amitschendel.github.io/vulnerabilites/CVE-2024-20328/ Is the "VirusEvent" feature activated in PMG? grep "VirusEvent" clamd.conf finds nothing. Any suggestions or updates for this? THX
  3.

    [SOLVED] Resize CephFS

    Hello, I changed the pg_num of my RBD pool and CephFS data pool with the intent to migrate free space from one to the other. But it seems this did not have any effect on the available space. px01:~$ sudo ceph osd pool autoscale-status POOL SIZE TARGET SIZE RATE RAW CAPACITY...
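For context on the question above: pg_num controls the placement-group count, not capacity, and replicated pools share the cluster's raw space. A hedged sketch of two ways to actually steer capacity between pools (the pool names and numbers are illustrative, not from the post):

```shell
# Cap how much the RBD pool may consume, leaving the rest to CephFS:
ceph osd pool set-quota rbd-pool max_bytes 4000000000000
# Or tell the autoscaler what fraction of total capacity each pool is
# expected to use, so pg_num is sized accordingly:
ceph osd pool set cephfs_data target_size_ratio 0.6
```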
  4.

    Ceph configuration: mon_osd_down_out_interval

    How do I change the Ceph configuration of a running cluster? I edited /etc/pve/ceph.conf:
    [mon]
    mon_osd_adjust_down_out_interval = false
    mon_osd_down_out_interval = 10
    Even after rebooting the node, I still get: root@px01:~# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok...
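One likely reason for the symptom above: /etc/pve/ceph.conf is only read when a daemon starts, so a running cluster does not pick the change up by itself. A sketch using Ceph's runtime configuration mechanisms (the option value is taken from the post; the monitor name is the poster's node name and may differ):

```shell
# Persist the option in the cluster's central config store (Mimic and later):
ceph config set mon mon_osd_down_out_interval 10
# Or push it into the running monitors without a restart:
ceph tell mon.* injectargs '--mon_osd_down_out_interval=10'
# Check what a running daemon actually uses:
ceph config show mon.px01 | grep mon_osd_down_out_interval
```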
  5.

    KVM: free page hinting

    Hello, it would be nice if freeing memory inside a guest VM also decreased the memory consumption of the corresponding KVM process on the host. Is "free page hinting" necessary for this to work? Is this feature enabled by default? If not, how do I enable it? Cedric
  6.

    [SOLVED] TASK ERROR: start failed... got timeout

    Hello, we have problems with recovering from a node outage. The scenario is: 4 nodes; a VM on node A (part of an HA group); we cut off power to node A; after a while the VM is migrated to node B; the start task on the new node fails (see error), but the status is running and the HA state is started; no...
  7.

    [SOLVED] ClamAV and Avast

    Hello, if I enable Avast Virus Scanner, are then both scanners used? Greetings Cedric
  8.

    [SOLVED] Delete IP from whitelist in console

    Hello, how can I add a handler for deleting IPs from the whitelist? root@pmg:~$ pmgsh delete /config/whitelist/ip --ip 1.2.3.4 no 'delete' handler for 'config/whitelist/ip' I am also interested in how to reload the whitelist after manually editing /etc/postfix/postscreen_access? Cedric
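A sketch of the usual pattern for this kind of pmgsh API, under a loud assumption: that whitelist entries are addressed by an object id rather than by the address itself, as with other PMG rule-database objects (the id 42 below is hypothetical, and the exact paths may differ between PMG versions):

```shell
# List entries and note the id of the one to remove:
pmgsh get /config/whitelist
# Delete by id, not by address (42 is a hypothetical id from the listing):
pmgsh delete /config/whitelist/42
# After hand-editing /etc/postfix/postscreen_access, have postfix
# re-read its configuration:
postfix reload
```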
  9.

    [SOLVED] Score as float in __SPAM_INFO__

    Hello, how do I get the spam score as a float in __SPAM_INFO__? It would be sufficient to write the float value in the mail header. How to do that? To achieve the same thing for the syslog, I slightly modified /usr/share/perl5/PMG/RuleDB/Spam.pm: sub analyze_spam { [...] my...
  10.

    fencing actions

    Hi, how can I edit the behavior of the fencing process? By default, a mail with subject "FENCE: Try to fence node '<node>'" is sent. I would like to add some custom commands. Cedric
  11.

    Security of exposing Ceph Monitors

    The Ceph Monitors are supposed to be exposed in the public network, so that clients can reach them in order to mount CephFS by using the kernel driver or FUSE. What harm could a compromised client do to the Cluster by exploiting the connection to Ceph Monitors? Are the Monitors secure enough...
  12.

    RBDs in "cephfs_data"-Pool

    We wonder if we could just create an RBD storage using the "cephfs_data" pool. We would like to make the setup as flexible as possible, because we don't know yet how to split our storage capacity between RBDs and CephFS. Are there any downsides? And how should we decide on the ratio of CephFS data to metadata?
  13.

    changing min_size automatically

    Hello, we would like to build a 4 node Proxmox/Ceph-Cluster that is able to recover from 2 nodes failing at once. To prevent data loss in such a case, we have to choose a min_size of 3. But when 2 nodes fail, there are only 2 nodes left. That is why we came up with the idea of reducing the...
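For manual recovery in the scenario above, min_size can be lowered per pool while the cluster heals; automating this trades data safety for availability, so this sketch only shows the manual commands (the pool name is hypothetical):

```shell
# Allow I/O with only 2 replicas left while 2 of the 4 nodes are down:
ceph osd pool set vm-pool min_size 2
# Raise it again once recovery has restored redundancy:
ceph osd pool set vm-pool min_size 3
```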
  14.

    qcow2 on CephFS versus RBD

    Hello, we wonder which of the following two setups is the better choice for using Proxmox VE with Ceph: the usual RBD, or the less usual qcow2 on CephFS. The second setup was mentioned in Thread. What pros and cons do you come up with? Cedric