Recent content by tarball

  1. Does pmg support this use case?

    - Added domain1.com to the Configuration -> Mail Proxy -> Relay Domains list
    - SSHed into the pmg node as root and ran:
      mkdir -p /etc/pmg/templates/ ; cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates/
    - Added the following to /etc/pmg/templates/main.cf.in:
      relay_domains = hash:/etc/pmg/domains...
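
    For anyone retracing these steps, a minimal sketch of what the template override could end up containing, assuming a hand-maintained hash map (the map names /etc/pmg/relay_domains and /etc/pmg/virtual_aliases are illustrative, not stock PMG files):

      # appended to the copied /etc/pmg/templates/main.cf.in
      relay_domains = hash:/etc/pmg/relay_domains
      virtual_alias_maps = hash:/etc/pmg/virtual_aliases

      # after editing the maps: rebuild the .db files, then regenerate
      # and reload the postfix config from the templates
      postmap /etc/pmg/relay_domains /etc/pmg/virtual_aliases
      pmgconfig sync --restart 1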
  2. Does pmg support this use case?

    Everything works fine now -- you might want to add this to the GUI, imho; it's only a few more lists and text files to manage, and it's functionality that could become a great feature. Thanks again @Stoiko Ivanov
  3. Does pmg support this use case?

    Ah, thank you! I was expecting this to be a more common use case. I'll give it a try.
  4. Does pmg support this use case?

    Hello! I own a domain that I would like to filter the mail for. I don't want to *host* the mail for the domain I own; rather, I want the mail for <user1>@mydomain.com to be filtered by PMG and then forwarded to <user2>@gmail|isp.com. In other words, I don't want to store e-mail for...
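
    In plain Postfix terms, the forward-only part of this use case is a virtual alias map; a sketch using the placeholder addresses from the post (the file name is illustrative):

      # hypothetical /etc/pmg/virtual_aliases, postfix virtual(5) format:
      # filter mail for user1@mydomain.com, then hand it off elsewhere
      user1@mydomain.com    user2@gmail.com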
  5. pvestatd not reaping properly? process table full -- system slowdown

    I think a ZFS pool has been defined at the (non-HA) cluster level, but this specific host (ovz3) does not have a ZFS pool; I don't think it even has the various ZFS tools installed. I just checked, and a few older PVEs that were upgraded to the latest (or at least a ZFS-enabled) PVE are exhibiting the same...
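
    If that is what happened, a plausible workaround is to restrict the cluster-wide ZFS storage entry to the nodes that actually have the pool (a sketch; the storage ID and the node names other than ovz3 are illustrative):

      # /etc/pve/storage.cfg -- keep ovz3 (no ZFS) out of the node list
      zfspool: tank
              pool tank
              content images,rootdir
              nodes ovz1,ovz2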
  6. pvestatd not reaping properly? process table full -- system slowdown

    Hi Dietmar,
    pvesm status
    zfs error: open3: exec of zpool list -o name -H failed at /usr/share/perl5/PVE/Tools.pm line 328
    zfs error: open3: exec of zpool list -o name -H failed at /usr/share/perl5/PVE/Tools.pm line 328
    local dir 1 1031992064 272851220 706712044 28.35%...
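
    That open3 exec failure suggests the zpool binary itself is missing on the node; a quick sanity check (plain shell, nothing PVE-specific):

      # does this host even have the ZFS userland installed?
      command -v zpool || echo "zpool not installed on this host"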
  7. pvestatd not reaping properly? process table full -- system slowdown

    Hi, it looks like there's a potential issue with pvestatd. On one of our systems we noticed that the process table was full (62K+ defunct pvestatd processes). The system slows down to a crawl at that point. After a stop/start of the daemon, everything's fine again. Running...
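
    For anyone wanting to confirm the same symptom, a quick check plus the stop/start workaround from the post (standard procps and sysvinit tools; nothing PVE-specific assumed):

      # count defunct pvestatd children clogging the process table
      ps -eo stat,comm | awk '$1 ~ /^Z/ && $2 == "pvestatd"' | wc -l
      # the workaround that cleared it
      service pvestatd restart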
  8. repeated kernel panics

    I updated to the latest -- I'll update the thread:
    pveversion -v
    proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
    pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
    pve-kernel-2.6.32-32-pve: 2.6.32-136
    pve-kernel-2.6.32-29-pve: 2.6.32-126
    pve-kernel-2.6.32-34-pve: 2.6.32-140...
  9. repeated kernel panics

    13:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
            Subsystem: Oracle/SUN Sun StorageTek SAS RAID HBA, Internal
            Flags: bus master, fast devsel, latency 0, IRQ 26
            Memory at fae00000 (64-bit, non-prefetchable) [size=2M]
            Expansion ROM at fad80000 [disabled] [size=512K]
            Capabilities: [98]...
  10. repeated kernel panics

    Hi, we're seeing repeated kernel panics (usually during LVM snapshot backups) on the latest kernel. The file systems (ext3) come up clean (we forced a couple of fsck runs); hardware RAID. Is anyone else seeing this as well? Thanks!
  11. kernel issue: leaked beancounter

    Hello, we're seeing the following on a fairly busy Proxmox HV (the same applies to the previous kernel). Any idea what's going on here?
    Aug 13 00:27:22 ovz4 kernel: Ub 179 helds 924 in kmemsize on put
    Aug 13 00:27:22 ovz4 kernel: UB: leaked beancounter 179 (ffff8802bc05e140)
    Aug 13 00:27:22...
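
    For whoever hits this later: on OpenVZ kernels the beancounter id normally matches the CTID, so the accounting for UB 179 can be pulled straight from the standard /proc interface (the -A 24 is a rough window covering one container's resource block):

      # the failcnt column shows which resources container 179 has been hitting
      grep -A 24 '^ *179:' /proc/user_beancounters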
  12. scripting networking part when creating new containers

    By 'delegation' I mean having the networking-related bits handled by a script when the GUI brings up the Network/DNS panes: feed in the output of a script (the IP address) instead of expecting the user to enter the IP. This allows people without IT knowledge/permissions to set up a CT with the...
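
    Nothing like that hook exists in the PVE 3.x GUI as far as I know, but a rough CLI equivalent is easy to sketch (allocate_ip is a hypothetical IPAM helper that prints the next free address; the nameserver is a placeholder):

      #!/bin/sh
      # attach a scripted IP to an existing OpenVZ CT instead of typing it into the GUI
      CTID=$1
      IP=$(/usr/local/bin/allocate_ip)   # hypothetical helper script
      vzctl set "$CTID" --ipadd "$IP" --nameserver 192.0.2.53 --save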
  13. scripting networking part when creating new containers

    Hi, is there a way to delegate the networking part of adding a CT through the Proxmox GUI to a script? Thanks
  14. extremely slow LSI 8265-8i system

    I just recreated the 2 arrays, RAID-0, 4 heads per array, with every possible cache turned on...
    hdparm -Tt /dev/sda
    /dev/sda:
     Timing cached reads: 6862 MB in 2.00 seconds = 3433.56 MB/sec
     Timing buffered disk reads: 12 MB in 11.14 seconds = 1.08 MB/sec
    root@royale:~# hdparm -Tt /dev/sdb...
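
    Those numbers (1.08 MB/sec buffered vs. 3.4 GB/sec cached) point at the array rather than the controller bus; for a second opinion that bypasses the page cache entirely (GNU dd with O_DIRECT):

      # sequential read straight off the device, no page cache involved
      dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct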
