Search results

  1. Tag colour and order not retaining override colour or order

    I have also seen this behaviour recently. I then noticed that changing the sidebar view mode results in the expected tag colours being applied.
  2. Proxmox user base seems rather thin?

    That's good news. Thanks for the updates on your progress. Can you post images of the suspect card?
  3. Proxmox user base seems rather thin?

    As a newcomer to Linux and Linux-based virtualisation, your time might be better spent putting this network card issue aside for a while and sticking with PVE8. I have a similar network card in my setup, so was interested in your report. It's a common card so should surface again soon as more...
  4. Spam Score GB_OBFU_PHONE

    I do something like this: `# grep -r -e 'describe.*GB_OBFU_PHONE' /usr/share/spamassassin* /var/lib/spamassassin`, which turns up: `/var/lib/spamassassin/4.000001/kam_sa-channels_mcgrail_com/KAM.cf: describe GB_OBFU_PHONE Obfuscated phone number`
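The one-liner in that snippet, reconstructed as a runnable sketch. The real rule paths are the poster's; here a throwaway directory and a hypothetical KAM.cf stand in so the search itself can be demonstrated:

```shell
# Hypothetical stand-in for a SpamAssassin channel file (illustration only)
mkdir -p /tmp/sa-rules-demo
cat > /tmp/sa-rules-demo/KAM.cf <<'EOF'
describe GB_OBFU_PHONE Obfuscated phone number
EOF

# Same approach as the post: recursively grep the rule directories for the
# rule's 'describe' line (on a real host, substitute the SpamAssassin paths
# from the post: /usr/share/spamassassin* /var/lib/spamassassin)
grep -r -e 'describe.*GB_OBFU_PHONE' /tmp/sa-rules-demo
# → /tmp/sa-rules-demo/KAM.cf:describe GB_OBFU_PHONE Obfuscated phone number
```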
  5. PVE 9.0 CPU Scaling Governor not working anymore

    @Philebos I'm not familiar with the script you mention or what its function is. I am currently setting the governor by just passing the kernel boot param `cpufreq.default_governor=ondemand`. Are you saying (having confirmed the governor has been set successfully) that for a given governor, you see...
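A minimal sketch of passing that boot parameter via GRUB, assuming a stock Debian/PVE layout of /etc/default/grub (the governor name must be one your CPU frequency driver actually supports):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet cpufreq.default_governor=ondemand"
```

After editing, run `update-grub` and reboot; `cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor` should then report `ondemand`.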
  6. [SOLVED] Coral TPU on Proxmox 9

    The pertinent parts of lxc.conf for Coral access and the iGPU encoder: features: keyctl=1,nesting=1 unprivileged: 1 dev0: /dev/dri/renderD128,gid=106 lxc.cgroup2.devices.allow: c 189:* rwm lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir 106 is the render group in my debian...
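The flattened snippet above, laid out as the individual lines would appear in the container's config file (presumably /etc/pve/lxc/<vmid>.conf on the PVE host):

```
features: keyctl=1,nesting=1
unprivileged: 1
dev0: /dev/dri/renderD128,gid=106
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir
```

As the post notes, gid 106 is the render group inside the poster's Debian container; that value can differ on other systems.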
  7. [SOLVED] Coral TPU on Proxmox 9

    I'm using it in an unprivileged lxc running frigate (docker). I did not find it necessary to install a driver on the PVE host. The mini-pcie versions though do require the driver on the host (for lxc use).
  8. [SOLVED] Coral TPU on Proxmox 9

    How are you using the Coral USB TPU? I'm curious what benefit you get from installing the driver on the PVE host.
  9. e1000e eno1: Detected Hardware Unit Hang:

    I experienced problems with the e1000 driver and the onboard NIC on my ancient motherboard. I gave up using it for a VLAN trunk and resorted to the other built-in NIC (Atheros). I eventually disabled both onboard NICs and purchased a secondhand Intel i350 (2 port version). If you are happy with...
  10. Unable to initialize Google Coral USB Accelerator in LXC container (used to work)

    Try, temporarily, changing the lxc mount to: lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir
  11. Unable to initialize Google Coral USB Accelerator in LXC container (used to work)

    Does that USB device path hold true for both the uninitialised and initialised TPU? IIRC that would not work for me, which was why I resorted to the more permissive config I posted. However, you say it was working for you before... Also, with your config, you would have needed to change it when...
  12. Unable to initialize Google Coral USB Accelerator in LXC container (used to work)

    I know you said you had it working prior. However, if it's an unprivileged CT, do you still have the necessary device mapping and permissions in the lxc.conf? For instance, I have unprivileged: 1 lxc.cgroup2.devices.allow: c 189:* rwm lxc.mount.entry: /dev/bus/usb dev/bus/usb none...
  13. How to block puny code Domains in EHLO?

    Check that the generated config contains your modifications: inspect /etc/postfix/main.cf directly and/or view it via `postconf`.
  14. How to block puny code Domains in EHLO?

    Your template override should be created in /etc/pmg/templates/. I may be wrong, but I don't believe you need to hash a regex lookup file either.
  15. PMG iso/appliance VS LXC (8.2)

    I bet it is down to total memory available. As you say in your earlier post, containerised PMG should need less total memory (real + virtual) than the same config running in a VM.
  16. PMG iso/appliance VS LXC (8.2)

    In my PMG container, clamd uses 33% of the 4GB memory assigned to the CT, and I have swap disabled. How much swap was available to the VM vs what you've allocated to your container? Maybe that's why you managed to get PMG running in the VM. Have you checked swap utilisation in the VM vs swap...
  17. PMG iso/appliance VS LXC (8.2)

    Is clamav running, similarly configured and using similar resources, on the VM-based instance? I considered temporarily disabling clamav, to get a PMG CT's RAM requirement below 2GB, when I was running low on memory for a while.
  18. Webinterface security

    Under Configuration/Spam Detector/Quarantine, you can specify a 'Quarantine Host'. I believe that's what is used to format URLs in quarantine mail.
  19. Change PMG hostname

    If you edit the system hostname (/etc/hostname and /etc/hosts) you could then reapply the PMG templates (`pmgconfig sync --restart 1`). I think that should take care of most things e.g. postfix. I run PMG in a PVE container, so I haven't needed to edit the hostname directly but rather through...
  20. Change PMG hostname

    There's the system hostname as in /etc/hostname and /etc/hosts. Then there's the hostname as presented by postfix (myhostname) and that used in the URL for the spam quarantine. I think both those will get regenerated when you save the relevant parts in PMG web UI.