Is this still the case? From 3 to 4 is a big jump for a spam filter threshold. At 3 I'm cutting out a lot of valid emails; at 4 a lot of junk gets through.
The SpamAssassin rules change a lot. It is not feasible to babysit them all day, increasing or decreasing their values so that an integer value is...
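For what it's worth, plain SpamAssassin itself accepts a fractional threshold, so something between 3 and 4 is possible at that level (whether the gateway's own setting exposes it is another matter). A minimal local.cf sketch, with 3.5 just as an example value:

# /etc/spamassassin/local.cf
required_score 3.5    # mail scoring 3.5 or higher gets tagged as spam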
Sorry to barge in and disturb, guys, but that configuration above screams for vengeance! It's overly complicated and unnecessary. I'll show an alternative configuration that I think is cleaner and makes it easier to move from one OVH server to another. Here it goes:
1. The proxmox server only needs 1...
I tried with
$value = sprintf '%.2g',$value;
in the hope that it would remove the trailing '.0' from the value when the value is an integer, but no go.
The problem here is semantic: you should not push a float for uptime, it is an integer. I'm logging many hosts in the system table, not only proxmox...
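A minimal sketch of the normalisation I have in mind, in Perl, reusing the $value variable from the snippet above (the example value is made up):

use strict;
use warnings;

my $value = "86400.0";              # hypothetical uptime that arrived formatted as a float

if ($value == int($value)) {
    $value = int($value);           # whole number: 86400.0 -> 86400, no trailing ".0"
} else {
    $value = sprintf '%.2f', $value;    # a real float keeps two decimals
}

print "$value\n";                   # prints "86400"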
There seems to be an issue with the current ZFS driver.
Receiving on encrypted datasets seems to trigger a null pointer dereference and a lockup that requires a hard reset of the node.
https://github.com/openzfs/zfs/issues/11679
My proxmox servers are affected.
It seems there's a data...
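For context, the replication that hits it here is roughly of this shape (pool and dataset names are made up, and I can't say whether raw versus non-raw sends make a difference):

zfs snapshot rpool/data/vm-100-disk-0@backup1
zfs send rpool/data/vm-100-disk-0@backup1 | zfs receive backup/encrypted/vm-100-disk-0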
After some debugging, I found that the "move" routines in proxmox use the refquota parameter of the ZFS subvol and not the size of the disk in the CT conf file.
If the refquota is "none", as in unlimited space, this is interpreted as a quota of 0. The rbd device is created with 0 size the...
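Assuming that reading of the code is right, a possible workaround sketch is to give the subvol an explicit refquota matching the configured rootfs size before moving (dataset name and size below are just examples):

zfs get refquota rpool/data/subvol-101-disk-0     # check whether it reports "none"
zfs set refquota=8G rpool/data/subvol-101-disk-0  # set it to the rootfs size from the CT config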
Update: I can confirm that subvol-based CTs, whether on native ZFS storage or on local (directory) storage (backed by ZFS in my case), cannot have their rootfs moved to either ceph rbd or nfs backed storage.
The issue seems to be that the size of the disk is determined to be 0. The...
A small update: so far, the only difference I have found between the CTs that can migrate and those that can't is that the ones that can are stored as "raw" images, while those that can't are stored as "subvol".
There's also an issue when moving disks from subvol local storage to zfs local storage...
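For illustration, this is roughly how the difference shows up in the output of pct config <vmid> (vmids, storage names and sizes are made up):

rootfs: local:101/vm-101-disk-0.raw,size=8G       # raw image on directory storage - moves fine
rootfs: local-zfs:subvol-102-disk-0,size=8G       # ZFS subvol - this is the case that fails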
I'm having issues moving CT root fs from local storage (zfs) to ceph rbd.
The problem only occurs on existing CTs created some time ago, back when I was using proxmox 4.x.
The error does not occur on new CTs created with proxmox 6.2 using the ubuntu 20.x template.
EDIT: It seems the issue is due...
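For reference, the operation is the "Move Volume" action in the GUI; from the CLI it should be something along these lines (vmid and target storage are made up, and I'm going from memory on the subcommand name):

pct move_volume 101 rootfs ceph-rbd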
Thank you. I was under the false assumption that you cannot enslave a vlan interface (as was the case some years ago).
I was now able to replicate your setup and can confirm it works nicely across multiple switches, both for speed and as a failover if a whole switch dies.
Further, connecting the...
Would you share your /etc/network/interfaces configuration?
If I understand correctly, you managed to bond the two links like this:
Active-Backup - bond0
Backup-Active - bond1
Then, on a switch failure (say the second switch), they become:
Active-Backup - bond0
Active-Backup - bond1
At the...
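If I got that right, I'd expect the /etc/network/interfaces side to look roughly like this (NIC names are made up, and the bond-primary placement on opposite switches is my guess at how both switches carry traffic during normal operation):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno3          # eno1 on switch A, eno3 on switch B
    bond-mode active-backup
    bond-primary eno1
    bond-miimon 100

auto bond1
iface bond1 inet manual
    bond-slaves eno2 eno4          # eno2 on switch A, eno4 on switch B
    bond-mode active-backup
    bond-primary eno4              # primary on the other switch
    bond-miimon 100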
Hi all,
I once used an Intel Modular Server (MFSYS) in one of my deployments of proxmox (1.5!) and that was a nice little blade setup. That system is now 8 years old (running the latest proxmox 5) but needs replacement.
Is there a similar blade setup that can be bought today? I fail to find a...
Hello,
I'm experiencing a problem caused by disk quota exhaustion on a container.
Simply put, when the quota is reached inside a container, further writes are blocked; however, the writing process is not terminated but hangs indefinitely.
When this happens, the load average on the physical...
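A rough way to observe what I mean (paths and the user are made up): from inside the container, let a quota-limited user fill its quota, then look for the writer stuck in uninterruptible sleep from the host.

dd if=/dev/zero of=/home/testuser/fill bs=1M      # run as the quota-limited user inside the CT
ps -eo pid,stat,comm | awk '$2 ~ /D/'             # on the host: processes stuck in "D" state drive up the load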
Hi,
I have a few proxmox nodes running in a cluster, with the cluster on a separate network interface.
If a cluster node loses its connection on this interface temporarily (e.g. when a cable is disconnected and reconnected), it will never see the other nodes again until it is rebooted.
I tried issuing...
Dietmar, I agree, when it is working properly.
However, something is leaking memory with LXC containers and they become memory-starved. This never occurred on the *SAME* containers when they were running under OpenVZ. They would run in excess of 6 months at a time, with some of them not being reboot...
I am having stability issues with LXC containers after migration from OpenVZ
What happens is that when all the memory in a container is used up, the OOM killer kicks in and kills processes (see attached example).
If I try to restart a killed process it will usually fail. A reboot of the container is...
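For reference, a quick way to look at a container's memory accounting from the host might be something like this (cgroup v1 layout as on these PVE 4.x nodes; the CT id and the exact cgroup path are assumptions):

cat /sys/fs/cgroup/memory/lxc/101/memory.usage_in_bytes   # current usage
cat /sys/fs/cgroup/memory/lxc/101/memory.limit_in_bytes   # configured limit
cat /sys/fs/cgroup/memory/lxc/101/memory.failcnt          # how many times allocations have hit the limit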
Fabian,
you might be interested to know that I've had no issues for some days now. The servers running kernel 4.2.6 + lxcfs 2.0.0-pve2 are solid (two of them, on different hardware). The ones with 4.2.8 + pve2 have been working too.
jinjer.
fabian,
I think that the lxcfs was upgraded, then the containers restarted. I might be wrong though. Now a couple of servers have been downgraded to kernel 4.2.6 with an upgraded lxcfs 2.0.0-pve2. There are a few other servers running 4.2.8 with lxcfs 2.0.0-pve2, so I'm waiting now.
I will try to post...
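A quick way to double-check what each node is actually running (standard commands; the pveversion filter is just a convenience and the package names may need adjusting):

uname -r                                      # running kernel
dpkg -l lxcfs | tail -n 1                     # installed lxcfs package version
pveversion -v | grep -E 'pve-kernel|lxcfs'    # PVE's own component summary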
Sorry, the issue seems related to the lockup here: https://forum.proxmox.com/threads/pve-suddunly-stopped-working-all-cts-unrecheable.26458/
Trying to reboot a normally working server works. Trying to reboot a server which was (possibly) locked up by lxcfs will fail.
I have now reverted to...